
Insights.

Practical thinking on data, AI, and product engineering from our practitioners. No SEO padding. No thought-leadership cosplay. Just what we know and when we learned it.

Recent articles
Enterprise AI · 7 min read

Navigating AI: What Two Years of Hype Actually Taught Us

We now have enough track record to separate what is working from what is salesmanship. The capability is real. So is the discipline required to capture it.

Bob Cagle · April 16, 2026

Every leader has heard the same pitch for the past two years. AI will transform your business. The models are ready. Your competitors are moving. You need to act now.

Some of that is true. Some of it is salesmanship. And now, in 2026, we have enough track record to tell the difference.

The technology is genuinely better than the last hype cycle

This is the part people in our profession sometimes resist saying. We have watched technology hype cycles since the early days of ERP, through the first web bubble, through big data, through blockchain. The pattern is always the same: real technology, legitimate use cases, followed by a gold rush that outpaces genuine readiness.

What is different about the current moment is that the underlying capability is substantial. Models like GPT-4o, Claude 3.7 Sonnet, and Gemini 2.0 can handle tasks that would have been genuinely impossible three years ago. Code generation, document analysis, structured data extraction from unstructured inputs: these work, often at production quality, for a real set of problems.

The hype is real. The capability is also real. That combination is rarer than it sounds.

The failure modes are also real

For every genuine win, we have watched expensive failures. And the failures follow a consistent pattern.

They begin with a proof of concept that works. Someone builds a demo in two weeks. Leadership is excited. A vendor makes a compelling proposal. Resources are committed. Then the system hits production data. And production data is nothing like demo data.

A customer service agent that handled 200 canned questions beautifully falls apart on the 201st, which turns out to be a refund request for a product that has three different SKUs, two of which were discontinued, and one of which has a supplier dispute attached to it. An internal knowledge assistant that worked perfectly on a curated document set starts confidently hallucinating once connected to the full document store, because that store contains contradictory policy documents from 2019, 2021, and 2024, and no one ever resolved the contradictions.

The technology did not fail. The data preparation failed. The governance failed. The problem scoping failed.

Agents are a genuinely new risk category

For two years, the primary risk was a bad answer you could ignore. A chatbot that got something wrong. A summary that missed a nuance. Annoying, but containable. A person reads the output, notices the problem, and corrects it.

Agentic AI removes the person from that loop. An agent does not just answer questions. It takes actions: it sends emails, updates records, places orders, schedules meetings, makes decisions at machine speed. When the data it is working with is incomplete, stale, or contradictory, the agent does not pause and ask for clarification. It acts on what it has. It may complete 400 tasks before anyone notices that the underlying assumption driving those tasks was wrong.

We have seen this in production. A document routing agent that sent 180 contracts to the wrong review queue because a department rename had not been propagated through the metadata. A scheduling agent that booked 60 appointments using availability data cached from the previous day. The capability was working exactly as designed. The data was not.

What the disciplined organizations are doing differently

The organizations getting real value from AI in 2026 share a few characteristics that have nothing to do with the models they chose. They defined the problem before they selected the technology. They built their data foundation before they scaled the application. They started with narrow, high-confidence use cases and earned the right to expand scope.

They also treated AI like a new employee: supervised, with limited initial access, earning trust through demonstrated reliability before taking on more responsibility.

The opportunity is real. So is the discipline required to capture it. Those two things have always been true of every technology cycle worth being a part of.

Data Strategy · 8 min read

It Still Starts With Data. More Than Ever.

The argument has not changed since December 2024: garbage in, garbage out. What changed is the blast radius. Agents act on bad data at machine speed, at scale, before anyone notices.

Greg Dolder · April 10, 2026

I wrote a version of this article on December 31, 2024. The core argument was simple: AI quality is a direct function of data quality. Garbage in, garbage out. It had always been true. It was becoming more important to say out loud.

Fifteen months later, I find myself writing an update. Not because the argument changed. Because the stakes did.

What changed

In late 2024, the primary risk of poor data quality in an AI context was a bad output. A chatbot that gave a wrong answer. A recommendation that missed the mark. A summary that left out a key detail. Those are annoying. They erode trust. They create rework. But they are contained. A person reads the output, notices the problem, and corrects it.

Agentic AI removes the person from that loop. In 2026, more and more organizations are giving AI systems the ability to take action: book appointments, route documents, place orders, update records, send communications. The agent reads its inputs, makes a decision, and executes at machine speed, at scale.

When those inputs are clean, the results are genuinely impressive. When they are not, you do not get a bad report on a dashboard. You get 400 scheduling conflicts, 200 misdirected emails, or a record deletion cascade before anyone knows something went wrong.

The failure modes have not changed. The consequences have.

Every data quality problem we identified in 2024 still exists: inconsistent formatting, missing values, stale records, duplicates, contradictory entries, unreliable source data. What changed is how those problems propagate.

A stale customer record in a CRM was once a problem for a salesperson who called the wrong number. Today that same record, feeding an agentic outreach system, generates a sequence of automated follow-ups to a contact who left the company eighteen months ago, before your system figures out something is wrong. A product ID mismatch in an inventory system once produced an inaccurate report someone had to manually reconcile. Today that same mismatch, feeding a procurement agent, places a replenishment order for the wrong SKU. Three times. Before the weekend.

The underlying data quality issues are identical. The blast radius is not.

What good looks like in 2026

Organizations getting this right share a few practices. They have data contracts: explicit, versioned agreements about what a data source will contain, what format it will be in, and how fresh it will be. When a source deviates from its contract, the agent stops and escalates rather than proceeding.
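The stop-and-escalate behavior is the important part: a contract violation is an exception, not a warning the agent logs and ignores. A minimal sketch of what that check might look like, with hypothetical field names and thresholds chosen for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class DataContract:
    """Versioned agreement about a data source (illustrative fields)."""
    source: str
    version: str
    required_fields: set[str]
    max_age: timedelta  # freshness window promised by the contract

class ContractViolation(Exception):
    """Raised so the calling agent stops and escalates instead of acting."""

def validate(record: dict, last_updated: datetime,
             contract: DataContract) -> dict:
    # Structural check: every promised field must be present.
    missing = contract.required_fields - record.keys()
    if missing:
        raise ContractViolation(
            f"{contract.source} v{contract.version}: "
            f"missing fields {sorted(missing)}")
    # Freshness check: stale data is treated as a deviation, not a default.
    age = datetime.now(timezone.utc) - last_updated
    if age > contract.max_age:
        raise ContractViolation(
            f"{contract.source} v{contract.version}: "
            f"data is {age} old (max {contract.max_age})")
    return record
```

The agent calls `validate` before acting; anything that escapes as a `ContractViolation` goes to a human queue rather than into an automated action.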

They track lineage. Every piece of data an agent acts on can be traced to its source and its last verified update. They enforce freshness SLAs: if a data source has not been updated within the expected window, it is treated as unreliable until proven otherwise. They build a governed access layer between the agent and the data stores it operates on. Not every table. Not every field. The agent sees a curated, validated, permissioned view.
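A governed access layer can be very thin and still do its job: filter to the permitted fields and attach lineage metadata so every downstream action is traceable. The sketch below is illustrative; the field names and the `ALLOWED_FIELDS` set are assumptions, not a prescribed schema:

```python
from datetime import datetime, timezone

# Hypothetical curated, permissioned subset. The agent never queries the
# raw store directly; it only sees what passes through this view.
ALLOWED_FIELDS = {"customer_id", "email", "plan"}

def governed_view(raw_record: dict, source: str,
                  verified_at: datetime) -> dict:
    """Return only curated fields, tagged with lineage metadata."""
    curated = {k: v for k, v in raw_record.items() if k in ALLOWED_FIELDS}
    curated["_lineage"] = {
        "source": source,
        "verified_at": verified_at.isoformat(),
    }
    return curated
```

Anything outside the allowed set, sensitive or otherwise, simply never reaches the agent, and every record it does see carries the source and verification time needed for an audit.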

None of these are new ideas. They are data governance practices that have been on the checklist for a decade. What changed is the cost of skipping them.

A word for leaders

Clean data has historically been treated as an IT problem. A hygiene issue. Something to get to eventually, after the important work is done.

It is now an operational risk management problem. If you are deploying AI agents, or planning to, and you do not have confidence in the quality of the data those agents will act on, that is not a technical gap. It is a governance gap that belongs on your risk register next to cybersecurity and business continuity.

The path forward is not complicated. It requires discipline, not heroics. Start with the data that feeds your highest-stakes decisions. Define what clean means. Instrument it. Govern it. Then give your agents access to it.
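"Define what clean means. Instrument it." can start as a handful of measured rates rather than a platform purchase. A sketch, with hypothetical field names, of the kind of metric worth tracking before an agent is given access:

```python
def quality_metrics(records: list[dict], required: set[str]) -> dict:
    """Completeness rate per required field, plus a duplicate rate
    on the record key (here assumed to be 'id')."""
    n = len(records)
    completeness = {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / n
        for f in required
    }
    ids = [r.get("id") for r in records]
    duplicate_rate = 1 - len(set(ids)) / n
    return {"completeness": completeness, "duplicate_rate": duplicate_rate}
```

Once these numbers exist, "clean enough for an agent" becomes a threshold you can enforce instead of a feeling.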

It still starts with data. More than it ever has.

Quarterly letter

One email per quarter. Signal, not noise.

We send a short letter every quarter with what we are seeing in the market, what is working, and what we think is worth paying attention to. No sales cadence. Unsubscribe any time.

Want to apply this thinking to your organization?

Get in touch →