

Imagine a new senior analyst joining your company. They are smart. They have seen a lot of different data stacks. They know SQL, modern BI tools, and even your industry. On day one, they still know almost nothing about your business. You do not measure their success after a two-hour orientation and a couple of docs. You expect a ramp-up: first week, first month, first quarter. You give them access to your tools, let them shadow people, and watch how quickly they start making good decisions with your data.

That is exactly how you should think about rolling out an AI-based technology like Solid. Solid is a piece of software, but it behaves much more like a new team member than like a static SaaS feature. It has to learn your environment. It will be wrong sometimes. It will get better. And the way you structure that learning journey is what makes the POC and rollout succeed or fail. In this post I want to talk about that journey.

How Solid learns your world

When we plug Solid into an organization, it does not start from a blank screen. It starts reading. A lot. Solid automatically pulls in the context that already lives in your data stack.
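Solid's actual ingestion pipeline is its own machinery, but a minimal sketch of one flavor of that reading, assuming a Postgres-compatible warehouse and a hypothetical connection string, might look like this:

```python
import psycopg2  # assumes a Postgres-compatible warehouse

# Standard information_schema query; the schema filter is illustrative.
COLUMNS_QUERY = """
    SELECT table_schema, table_name, column_name, data_type
    FROM information_schema.columns
    WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
    ORDER BY table_schema, table_name, ordinal_position
"""

def harvest_schema_breadcrumbs(dsn: str) -> dict[str, list[tuple[str, str]]]:
    """Group (column, type) pairs by fully qualified table name.

    This only scratches the surface of what a platform like Solid reads
    (docs, dashboards, query history), but it shows the principle: the
    raw material is already sitting in your stack.
    """
    breadcrumbs: dict[str, list[tuple[str, str]]] = {}
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(COLUMNS_QUERY)
        for schema, table, column, dtype in cur.fetchall():
            breadcrumbs.setdefault(f"{schema}.{table}", []).append((column, dtype))
    return breadcrumbs
```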
If you read Your Data’s Diary: Why AI Needs to Read It, you have seen this idea before. Your data stack is already full of tiny breadcrumbs that explain how the business actually thinks. Solid treats those as training material. From that, the platform can usually get to something like 70 percent understanding on its own.
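What does “70 percent understanding” mean in practice? One way to make the number concrete, purely as an illustration (the confidence scores and threshold below are invented, not Solid’s internals), is to measure how much of the inferred model the system is sure about versus how much it flags for human review:

```python
def auto_coverage(field_confidence: dict[str, float], threshold: float = 0.8) -> float:
    """Fraction of semantic-model fields the first pass labeled confidently.

    Everything below the threshold is exactly the material humans should
    review first. All numbers here are hypothetical.
    """
    confident = sum(1 for conf in field_confidence.values() if conf >= threshold)
    return confident / len(field_confidence)

# Hypothetical first pass over a small model: 7 of 10 fields confident -> 0.7
fields = {
    "orders.amount": 0.95, "orders.status": 0.91, "users.email": 0.99,
    "users.segment": 0.55, "orders.net_rev": 0.42, "events.src": 0.60,
    "orders.id": 0.99, "users.id": 0.99, "events.ts": 0.97, "orders.ts": 0.93,
}
print(auto_coverage(fields))  # 0.7
```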
That initial pass is fast. It is also imperfect. The last 30 percent is where the real journey starts.

The 70 percent rule (and why it is a feature, not a bug)

Traditional SaaS trained us to think in binaries: a feature either works or it is broken, and every wrong output is a bug to be fixed.
With learning software like Solid, that mental model breaks. If you wait for “100 percent accuracy” before you let anyone touch the system, you will never ship. At the same time, if you treat every wrong answer as a catastrophic bug, your team will lose trust before the AI even gets a chance to learn.

In The Ghost in the Machine: How Solid drastically accelerates semantic model generation, we showed how much of the semantic model can be auto-generated if you give the AI enough signals. But even there, the goal is not perfection out of the box. The goal is to shortcut the boring 70 percent so that humans can focus on the nuanced 30 percent. That 30 percent is made of the things only your organization knows: the business-specific metric definitions, exceptions, and conventions that never made it into anything Solid can read. No model can guess these reliably without you.
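Those definitions have to be captured somewhere explicit. As a sketch of what that can look like, here is one hypothetical, human-curated metric definition; the field names and rules are invented for illustration, not Solid’s actual format:

```python
from dataclasses import dataclass, field

@dataclass
class MetricDefinition:
    """One human-curated rule of the kind no first pass can infer."""
    name: str
    base_measure: str
    filters: list[str] = field(default_factory=list)
    notes: str = ""

# Hypothetical example: at this company, "net revenue" excludes internal
# test accounts and is recognized at shipment, not at order time.
net_revenue = MetricDefinition(
    name="net_revenue",
    base_measure="SUM(order_lines.amount)",
    filters=[
        "orders.account_type <> 'internal_test'",
        "orders.shipped_at IS NOT NULL",
    ],
    notes="Finance recognizes revenue at shipment. Confirmed with FP&A.",
)
```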
Which brings us to the part most teams underestimate.

The customer is part of the product loop now

In the old SaaS world, the vendor built features, QA tested them, then threw them over the fence. You, as the customer, mostly validated integration and performance. With Solid, the customer is inside the learning loop. That is not a nice-to-have; it is the design. A typical journey starts with that automated first pass, then moves through repeated rounds of human review and correction, with the system earning broader access as its answers improve.
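The mechanics of that loop can be dead simple. A minimal sketch, with invented names and an in-memory store standing in for whatever Solid actually persists:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Correction:
    """One piece of feedback from a human reviewer."""
    question: str
    wrong_answer: str
    corrected_answer: str
    taught_by: str

class SharedKnowledge:
    """Teach once; every later asker benefits.

    Keying on the normalized question text is a deliberate
    oversimplification; the point is persistence and sharing,
    not matching logic.
    """
    def __init__(self) -> None:
        self._lessons: dict[str, Correction] = {}

    def teach(self, correction: Correction) -> None:
        self._lessons[correction.question.strip().lower()] = correction

    def recall(self, question: str) -> Correction | None:
        return self._lessons.get(question.strip().lower())

kb = SharedKnowledge()
kb.teach(Correction(
    question="What counts as an active customer?",
    wrong_answer="Any account with a login in the last 90 days.",
    corrected_answer="Any paying account with usage in the last 30 days.",
    taught_by="maria@company.example",
))
```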
The key shift is this: your organization is now a co-teacher of the AI. If you treat feedback as “extra work,” you will resent the process. If you treat it as “we are teaching a very fast learner who will keep that knowledge forever and share it with everyone,” the investment makes sense.

Redefining what “POC success” means

Most POCs still look like classic software projects: a few weeks of integration, a checklist of features, and a final demo. For learning software, that is not enough. You are not just checking “does it run.” You are checking whether it is actually learning: how quickly the answers improve from week to week as your team feeds corrections back in.
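That reframing suggests a different kind of acceptance test. A sketch of what “success” could mean for learning software, with illustrative thresholds rather than Solid’s actual benchmarks:

```python
def poc_succeeded(weekly_accuracy: list[float],
                  min_final: float = 0.85,
                  min_gain: float = 0.10) -> bool:
    """Judge the trajectory, not a snapshot.

    Two hypothetical criteria: the system ended the POC accurate enough,
    and it demonstrably learned along the way.
    """
    if len(weekly_accuracy) < 2:
        return False
    improved = weekly_accuracy[-1] - weekly_accuracy[0] >= min_gain
    return improved and weekly_accuracy[-1] >= min_final

# Example: roughly 70 percent out of the box, high eighties after six weeks.
print(poc_succeeded([0.71, 0.74, 0.79, 0.83, 0.86, 0.88]))  # True
```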