
An Analytics Renaissance is Coming

2025-09-29

There’s something I keep noticing in conversations with data leaders: almost everyone is talking about AI, and almost nobody is talking about whether their analytics are ready for it.

I think that’s backwards. The AI part is getting easier every quarter. The hard part, the part that will actually separate winners from losers, is the boring foundational work of making your data legible to machines. If you can measure something clearly, AI can do remarkable things with it. We see this in Go, in chess, in any domain where the objectives are well-defined and the feedback is fast. Business is messier, obviously. But the principle holds: clarity is the unlock.

The Hidden Bottleneck

For years, companies have spent millions on data lakes and dashboards, and now they’re assuming this gives them a head start in the AI race. Most of it doesn’t. That data was built to run applications or to help humans slice and dice reports. It was never designed for machines to reason with. This is a big part of why so many AI initiatives stall out: the foundation wasn’t built for what people are now trying to do with it.

Most analytics teams are still stuck in a reactive loop. A stakeholder asks a question, an analyst pulls data, the answer goes into a deck. It’s slow, it’s brittle, and it scales only as fast as you can hire smart people.

But that’s starting to change. A new kind of analytics stack is emerging, one where dashboards are starting points rather than deliverables, where AI agents can diagnose issues, generate hypotheses, and propose interventions. Over time, they will do more than propose. They will act.

The Unchanging Foundation: Why Core Analytics Still Matters

Even as AI gets more capable, the fundamentals of good analytics are timeless. You still need to measure the right things, get the numbers right, understand how your metrics connect to each other, and know which levers actually cause change versus which ones are just noise. None of that has changed.

What has changed is how demanding the consumer of that data is becoming. When a human reads a dashboard, they bring context. They know the business, they know what’s weird, they can paper over gaps in the data with judgment. Machines can’t do that. To power intelligent agents, your data needs its business context encoded explicitly. It needs to be navigable so an agent can connect dots across domains. And increasingly, it needs to be real-time, because an insight about an abandoned cart is useless five minutes later.

This is where the real investment has to happen. Comprehensive metric trees. Guardrail metrics (the things you absolutely cannot break). A registry of your core user segments. Think of it as building a structured map of your business logic that a machine can read as fluently as your best analyst can. In the past, analysts traced these connections by hand. Now, you can build a system where AI agents move inside that logic because you’ve encoded the rules of the game.
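To make "a structured map of your business logic" less abstract, here is a minimal sketch of what encoding a metric tree with guardrails might look like. The metric names, descriptions, and structure are illustrative assumptions, not a prescription; the point is that the relationships an analyst carries in their head become data an agent can traverse.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    description: str           # business context, encoded explicitly
    guardrail: bool = False    # the "cannot break" metrics get flagged
    drivers: list = field(default_factory=list)  # child metrics that feed this one

# A tiny illustrative tree: net revenue driven by retention, acquisition,
# and a guardrail on checkout health.
checkout_errors = Metric("checkout_error_rate", "Share of checkouts that fail", guardrail=True)
retention = Metric("d30_retention", "Users active 30 days after signup")
acquisition = Metric("new_signups", "New accounts created per day")
revenue = Metric("net_revenue", "Revenue net of refunds",
                 drivers=[retention, acquisition, checkout_errors])

def lineage(metric, depth=0):
    """Walk the tree the way an agent would: from a moving metric down to its drivers."""
    yield depth, metric
    for driver in metric.drivers:
        yield from lineage(driver, depth + 1)

for depth, m in lineage(revenue):
    flag = " [guardrail]" if m.guardrail else ""
    print("  " * depth + m.name + flag)
```

A real version would live in a semantic layer or metrics store rather than a script, but the shape is the same: once the edges are explicit, "why did revenue move?" becomes a traversal instead of a scavenger hunt.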

Rewiring the Loop: An Example

Let me make this concrete. Imagine you’re on the data team at a fast-growing consumer tech company. You have dashboards and talented analysts, but insights take too long, and teams are asking you to “tell us what to do” instead of “what happened.” You’ve heard this before.

Now imagine retention for a key Gen Z cohort suddenly starts dropping.

In most companies, the response is painfully manual. Someone notices the dip in a weekly dashboard, days late. Analysts scramble to segment the data. The team debates causes. An experiment might get designed weeks later. The whole cycle depends on heroic individual effort, and it’s different every time.

But picture a version of this where the infrastructure was built with machines in mind from the start. The anomaly detection fires immediately because your metric tree is instrumented. The system isolates the affected segment (Gen Z, iOS, TikTok-acquired) because your segmentation registry is codified and machine-readable. The funnel analysis pinpoints a drop in quiz completion during onboarding because the instrumentation is clean and canonical. Past experiment results are searchable, so the system can suggest interventions that have worked before. An experiment gets scoped using templates, monitored against pre-set thresholds, and when it hits significance, the results feed back into the system automatically.

None of the individual pieces here are new. Anomaly detection, segmentation, funnel analysis, experiment platforms. All of these exist today. What’s new is the possibility of wiring them together into one continuous loop where AI handles more and more of the execution. The principle is straightforward: the more of the process you instrument, and the more context you encode for machines, the more they can own.
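The continuous loop described above can be sketched in a few dozen lines. Everything here is a stub under stated assumptions: the anomaly check is a simple z-score, the segment names echo the essay’s example, and the proposed action is hard-coded where a real system would query past experiment results.

```python
import statistics

def detect_anomaly(series, threshold=3.0):
    """Flag the latest point if it sits > threshold std devs from the trailing history."""
    history, latest = series[:-1], series[-1]
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return False
    return abs(latest - mean) / stdev > threshold

def isolate_segment(segment_deltas):
    """Pick the segment with the largest retention drop from a codified registry."""
    registry = ["gen_z_ios_tiktok", "millennial_android_search", "gen_x_web_direct"]
    drops = {seg: segment_deltas.get(seg, 0.0) for seg in registry}
    return min(drops, key=drops.get)

def run_loop(metric_series, segment_deltas):
    """One pass of the detect -> diagnose -> propose loop."""
    if not detect_anomaly(metric_series):
        return None
    segment = isolate_segment(segment_deltas)
    # Stub: a real system would search past experiments for interventions
    # that worked on this segment and funnel step.
    return {"affected_segment": segment,
            "proposed_action": f"rerun onboarding-quiz experiment for {segment}"}
```

The stubs are the point: each one maps to infrastructure the essay names (instrumented metric tree, segmentation registry, searchable experiment results), and the more of them you make real, the more of the loop the machine can own.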

The Mindset Shift

The best companies today already have strong analytics teams running tight loops of analysis, decision, and action. But even that model has a ceiling. It scales only as fast as you can hire and train brilliant people, and it depends on those people being consistently excellent.

The next step is encoding enough of the business logic that AI can participate in that loop directly. You’re still setting direction. You’re still making the hard calls. But the system handles more of the routine work, faster and more consistently than any team could manually.

I think this is actually good news for companies that are still early in their analytics maturity. If you haven’t built a legacy stack full of technical debt, you can design for this from the beginning. Starting behind can be an advantage if you’re building the right foundation.

Where Humans Still Matter

I want to be clear about what I’m not saying. I don’t think AI replaces analysts. The monitoring, the anomaly detection, the first pass at diagnosis, the drafting of experiment docs: that work gets automated, and good riddance. Most analysts didn’t get into this field to spend their days pulling numbers for slide decks.

What stays human is the judgment layer. Deciding which objectives matter, interpreting ambiguous signals where the data points in multiple directions, designing strategies from first principles, navigating the politics of competing metrics when two teams want different things. That stuff is hard, it requires organizational context that no model has, and it’s where the best analysts already want to spend their time. The promise here is that AI handles the grunt work so people can focus on the parts that actually require thinking.

From the Sidelines to the Frontier

For the past few years, I was in venture capital. It’s a front-row seat to innovation, sort of. You meet brilliant founders and spend a lot of time thinking about the future. But you don’t build it yourself. Venture, for me, ended up being too much a business of filtering ideas and persuading others to believe in far-fetched stories.

Insight doesn’t come from a thousand slide decks. It comes from a thousand decisions. The feedback loops in venture are long and noisy. You often don’t know if you were right for years. And somehow, 75% of the people you talk to think they’re going to be top quartile performers. (I’m pretty sure that math doesn’t work.)

Operating is different. The results of your choices show up in hours. You build systems, ship products, watch what moves, and when things don’t move, you fix them. The real frontier is inside companies, in the chaos of actual decisions. That’s where I want to be.

The competitive landscape is being redrawn. For the last decade, the winners were the companies best at manually iterating through the insight-to-action loop. The next decade will favor the ones that build systems to amplify that process at scale. If you’re a data leader, I think this is a genuinely exciting moment. The work has always mattered. The difference now is that the leverage on getting it right is about to increase dramatically.