Where the Value Will Be
I keep coming back to a question that I think most people in tech are going to have to wrestle with soon: if AI gets good enough to do most knowledge work, where does value actually live? The AI itself is commoditizing fast. Every major lab is converging on similar capabilities, costs are falling, and open-source models are closing the gap. The model layer is becoming a utility. So if the execution engine is cheap, what's expensive?
I've been thinking about this a lot, and I keep landing on three things.
1. The Ability to Judge
The most underappreciated asymmetry in AI right now is between production and verification. We're getting extraordinarily good at generating output (text, code, analysis, images, decisions) but our ability to verify that output is barely moving. It's still bounded by human cognition, institutional knowledge, and the slow accumulation of domain expertise.
This gap is going to widen. As models get more capable and autonomous, they'll produce more than any team can review. The bottleneck will increasingly be whether anyone can tell if the answer is right.
And this is where I think the real moat will be. The data you use to train models is table stakes at this point. The defensible asset is the data and institutional memory you use to grade them. A bank with thirty years of lending decisions, defaults, and edge cases can verify an AI underwriting decision in ways a startup simply cannot. A hospital system with decades of diagnostic outcomes can audit an AI's clinical recommendation against a track record that doesn't exist anywhere else.
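To make that concrete, here is a minimal sketch, in Python, of what "grading" could look like for the underwriting example. Everything in it is hypothetical (the HistoricalLoan record, the crude similarity heuristic), but it shows where the leverage sits: the generator needs no proprietary data, while the grader is built entirely out of the incumbent's history.

```python
# Hypothetical sketch: grade an AI underwriting decision against an
# institution's own historical outcomes. Names and heuristics are invented.
from dataclasses import dataclass

@dataclass
class HistoricalLoan:
    features: dict   # applicant attributes known at decision time
    defaulted: bool  # the outcome that only the incumbent has recorded

def grade_approval(applicant: dict, history: list[HistoricalLoan], k: int = 50) -> float:
    """Return the default rate among the k most similar past loans.

    A high rate on a loan the AI just approved is a red flag that the
    model alone, without the institutional record, cannot see.
    """
    def similarity(h: HistoricalLoan) -> int:
        shared = set(h.features) & set(applicant)
        return sum(h.features[f] == applicant[f] for f in shared)

    nearest = sorted(history, key=similarity, reverse=True)[:k]
    return sum(h.defaulted for h in nearest) / max(len(nearest), 1)
```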
The organizations that have been quietly accumulating operational history (the full record of what happened, what worked, and what failed) are sitting on something far more valuable than they realize. When AI can produce anything, the ability to judge what's good becomes the scarce resource.
2. Proof of Origin
As AI-generated content becomes indistinguishable from human work, a second question starts to matter a lot: how do you know who, or what, made something?
This sounds philosophical, but it's intensely practical. Think about contracts, journalism, legal evidence, financial disclosures, medical records, academic research. In all of these domains, the provenance of information matters as much as the information itself. A financial model means something very different if it was built by a senior analyst than if it was auto-generated by an AI agent that hallucinated a few inputs.
Right now, we have almost no infrastructure for this. There's no reliable way to build a paper trail for digital content, to prove that a specific human (or a specific model, with specific inputs) produced a specific output, and that it hasn't been tampered with since. We're going to need tamper-proof records: systems that track authorship, inputs, and every transformation along the way. The blockchain crowd has been building this kind of infrastructure for years, largely in search of a problem. AI might be the problem that actually justifies it.
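The core primitive here is small. Below is a minimal sketch of a tamper-evident provenance chain, in Python, using only the standard library. The field names and the "model:" author convention are my own assumptions, and a real system would add digital signatures and trusted timestamps on top of the hash chain.

```python
# Hypothetical sketch: a hash-chained provenance log. Each entry commits to
# the content, its declared author (human or model), its inputs, and the
# previous entry, so any later edit breaks the chain.
import hashlib, json, time

def _digest(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_entry(chain: list, author: str, inputs: list, content: str) -> dict:
    body = {
        "author": author,  # e.g. "analyst:jdoe" or "model:some-llm@2025-01"
        "inputs": inputs,  # IDs or hashes of the source material
        "content_hash": hashlib.sha256(content.encode()).hexdigest(),
        "timestamp": time.time(),
        "prev": chain[-1]["hash"] if chain else "genesis",
    }
    entry = {**body, "hash": _digest(body)}
    chain.append(entry)
    return entry

def verify_chain(chain: list) -> bool:
    """Recompute every hash; tampering with content, author, or order fails."""
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev or _digest(body) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```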
I think this becomes essential plumbing. Boring, invisible, deeply necessary. And the companies that build it well will be embedded so deeply into the economy that they'll be very hard to displace.
3. Willingness to Guarantee
The third piece is the most interesting to me, probably because of my time in finance. Right now, most AI products are sold as tools. You buy access, you use the tool, and if something goes wrong, that's your problem. The vendor provides capability. You absorb the risk.
I think this is going to flip. As AI moves from "assistant" to "autonomous agent" (making decisions, executing transactions, writing code that ships to production), someone is going to have to guarantee the output. Technically, legally, and financially. Someone has to be on the hook.
Imagine how this plays out. Today you sell a subscription: here's the tool, good luck. Tomorrow you sell results: pay us per task completed, we'll handle the work. Eventually, someone is going to sell a warranty. The AI does the work and the provider stands behind it financially. If the AI is wrong, the provider eats the loss. That's a fundamentally different business.
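The economics of that flip are worth a back-of-the-envelope pass. The numbers below are invented, but they show why the margin of a warranty business is set almost entirely by how well you can verify:

```python
# Invented numbers, for illustration only: a provider that warranties
# AI-completed tasks earns per-task revenue and pays out on errors.
tasks_per_year = 100_000
price_per_task = 5.00        # charged per completed task
loss_per_error = 2_000.00    # paid out when the AI is wrong

def annual_margin(error_rate: float) -> float:
    revenue = tasks_per_year * price_per_task
    expected_payouts = tasks_per_year * error_rate * loss_per_error
    return revenue - expected_payouts

print(annual_margin(0.003))  # -100000.0: at a 0.3% error rate, the warranty loses money
print(annual_margin(0.001))  #  300000.0: at 0.1%, it's a good business
```

The same tool, sold with a guarantee, is a completely different business depending on whether you can push the error rate down. And knowing your true error rate is itself a verification problem.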
Anyone who has read Nassim Taleb will recognize the principle. Skin in the game. The companies willing to put real money behind the claim that their system works will command enormous premiums. Think about hiring a contractor. The one who offers a warranty on the work is telling you something very different than the one who doesn't. Over time, I think that warranty becomes the whole product.
And the companies that can actually do this will be the ones with the deepest verification infrastructure (see point #1) and the strongest provenance systems (see point #2). It all connects.
The Structure That Emerges
Put these together and a picture forms. I think AI-heavy organizations will naturally settle into a kind of layered structure. Humans at the top decide what to do and why. AI handles the bulk of the execution in the middle. And humans at the bottom check the work, sign off on it, and stand behind it. The top and bottom layers are where compensation and pricing power concentrate. The middle is where costs collapse.
What worries me is the bottom layer. Checking AI's work requires deep domain expertise, and that expertise requires years of hands-on reps. If AI automates away the junior roles that train the next generation of experts, we could end up short on the very people we need to keep the machines honest. That kind of problem compounds quietly and gets harder to see until it's too late.
But if we get it right, if we invest in verification infrastructure alongside AI capability, if we build proof-of-origin systems into the stack, and if the business model evolves toward guaranteeing outcomes, we'll have something genuinely new. Trustworthy production at scale. And trust, as always, is where the real value is.