
(...or, the day a tech stock's lunch got eaten by a ghost from the future)
We just watched a live case study in how moats (don’t?) work, and it unfolded between lunch and dinner.
Duolingo dropped a blowout quarter: revenue up more than 40%, EPS miles ahead of estimates, user growth ripping, guidance raised, and a very public “AI makes us faster” narrative. Investors cheered and the stock rocketed. Hours later, a new frontier model demo from OpenAI casually scaffolded a polished language-learning app from a prompt in minutes (“vibe coding,” as the Wall Street Journal framed it), showing how fast credible software can be spun up now. The crowd went from “unassailable moat!” to “moat?!” so fast you could hear the collective neck crack. Cheers turned into a question: what, exactly, is defensible when the baseline build speed is minutes?
You could watch the tape process that question in real time. Shares that were up 25–35% on the day surrendered a big chunk of the move once the demo circulated, then bled a bit more the next session, not because the business suddenly deteriorated, but because the definition of its moat shifted in investors’ heads between lunch and dinner. That’s the part worth paying attention to.
The old shorthand for moats (“we have the feature, the algorithm, the slick UX”) is fading. When a general model can rough in most of a product on command, feature parity is cheap and time-to-demo approaches zero. That doesn’t mean moats vanish. They migrate. From artifacts to systems. From “what we built” to “how fast we learn.” From walls to waterwheels.
If you want a practical definition: a modern moat is the compounding interaction between your distribution, your outcome-labeled data, and your pace of improvement. Everything else (branding, patents, even raw model access) matters, but those three decide whether you’re gliding or getting commoditized.
A thought experiment: here’s how that compounding might look for a language-learning app:
- Distribution / default: Be the preset. Homescreen slot, LMS tie-in, SSO, school deals. Demos can’t teleport an installed base.
- Outcome-labeled data: Clicks ≠ learning. Track who improved, renewed, or referred, and feed that back into training (see the sketch after this list). Secret sauce > pageviews.
- Pace of improvement: Ship speed is the moat. More experiments, faster rollbacks. Not “best feature now,” but “who upgrades reality most.”
- Community gravity: Streaks, cohorts, creators, teachers. Social glue doesn’t clone on command.
- Progress-anchored switching costs: Saved progress, certs, reimbursements, rosters, family plans. Boring by design, sticky by nature.
- Partnerships & channels: Accreditation, districts, HR, OEM/carrier bundles. Paperwork → armor.
- Regulatory & safety ops: Privacy, localization, accessibility, integrity. The unsexy stack that scares fast followers.
- Cost curves: Distillation, caching, retrieval, on-device. Same experience at half the unit cost = their growth funds your margin.
- Human networks: Tutors, mods, SMEs, creators. AI boosts them; it doesn’t replicate coordination at scale.
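To make “outcome-labeled data” concrete, here’s a minimal sketch of the instrumentation, assuming a hypothetical event model (OutcomeEvent, Outcome, and log_outcome are made-up names for illustration, not anyone’s actual schema). The point is that the label is a business outcome tied to an experiment arm, so it can be joined back into training and evals later:

```python
# A minimal sketch of outcome-labeled data: events keyed to outcomes that move
# unit economics (learning, retention, renewal, referral), not just taps.
# All names here are hypothetical and for illustration only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Outcome(str, Enum):
    LESSON_MASTERED = "lesson_mastered"  # measured skill gain, not a click
    RETAINED_D30 = "retained_d30"        # still active 30 days later
    RENEWED = "renewed"                  # subscription renewal
    REFERRED = "referred"                # brought in another user


@dataclass
class OutcomeEvent:
    user_id: str
    outcome: Outcome
    experiment_arm: str   # which variant of the product produced this outcome
    value: float = 1.0    # e.g. delta in assessment score, or renewal revenue
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def log_outcome(store: list[OutcomeEvent], event: OutcomeEvent) -> None:
    """Record an outcome so training and eval pipelines can join it back to the
    prompts, lessons, and experiment arms that generated it."""
    store.append(event)


# Usage: outcomes become labels for the next round of fine-tuning and evals.
events: list[OutcomeEvent] = []
log_outcome(events, OutcomeEvent("u123", Outcome.LESSON_MASTERED, "arm_b", value=0.12))
```

The useful property is that every row answers “did this variant move a number the business cares about?”, which is exactly the signal a clone scaffolded from a prompt doesn’t have.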
Notice what’s missing?
“We have X feature first!”
In 2025, that’s table stakes.
And that’s why the market’s whiplash made sense. The earnings pop reflected fundamentals: revenue up ~41% to ~$252M, EPS nearly double expectations, DAUs ~47.7M, subscribers ~10.9M, full-year revenue nudged to roughly $1.01B. The air pocket that followed wasn’t about those numbers; it was the crowd rapidly repricing defensibility after watching a model spin up an app in public. Both things can be true in the same afternoon.

If you build products, the takeaway is refreshingly actionable:
- Instrument outcomes, not just taps. Train on learning, retention, renewal, referral: signals that move your unit economics.
- Centralize adaptation. One pipeline for prompts, evals, guardrails, and A/Bs. Swap models without re-plumbing your brain.
- Be promiscuous at inference, monogamous with data. Multi-model routing and on-device fallbacks are flexibility; clean data contracts are power (a rough sketch follows this list).
- Buy distribution like oxygen. Bundles, storefront placement, SSO, curriculum slots. Defaults win trials.
- Design graceful degradation. Assume your flashiest feature is a commodity in six months; make sure progress, community, support, and price carry the day.
- Tell the system story. Investors, partners, and recruits should see your learning loop (experiment velocity, eval quality), not just your screenshots.
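As a companion to the “centralize adaptation” and “promiscuous at inference” points, here’s a minimal sketch of that routing layer. Everything in it is a placeholder (route, frontier_api, and distilled_model are invented for illustration; no real provider SDK is being called). The shape is the point: one thin layer owns the prompt and the fallback order, so swapping or adding models doesn’t mean re-plumbing the product:

```python
# A minimal sketch of model routing with fallback: the adaptation layer is
# stable while the models behind it are swappable. Backends below are fakes
# standing in for a frontier API, a distilled model, or an on-device model.
from typing import Callable, Sequence

ModelFn = Callable[[str], str]  # prompt in, completion out


def route(prompt: str, models: Sequence[tuple[str, ModelFn]]) -> tuple[str, str]:
    """Try models in priority order, falling back on failure.
    Returns (model_name, completion)."""
    last_error = None
    for name, call in models:
        try:
            return name, call(prompt)
        except Exception as err:  # a real router would be more selective here
            last_error = err
    raise RuntimeError("all models failed") from last_error


# Placeholder backends, purely for illustration.
def frontier_api(prompt: str) -> str:
    raise TimeoutError("simulated outage")


def distilled_model(prompt: str) -> str:
    return f"[distilled] {prompt[:40]}..."


name, text = route(
    "Explain the subjunctive in Spanish with two examples.",
    [("frontier", frontier_api), ("distilled", distilled_model)],
)
print(name, text)  # falls back to the distilled model after the simulated outage
```

The data contracts (what gets logged, with which outcome labels) stay monogamous; the inference path can be as promiscuous as latency and cost allow.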
If you allocate capital, the playbook is similar. Separate momentum from moat. A frontier demo reprices the future; a clean quarter prices the present. Grade companies on distribution, data quality, learning rate, and whether AI is bending their cost curve down or just inflating their demo reel. After last week’s fireworks, several shops reiterated bullish stances precisely because those system-level advantages still looked intact, even if the tape was spooked by how fast a clone can be drafted.
None of this is a doom narrative. It’s a design constraint. The moat didn’t disappear; it moved. If you’re still pouring stone for a thicker wall, you’ll get surprised by the next demo. If you’re installing pumps (distribution, data, iteration), you’ll find that every scary model release makes your flywheel spin faster.
So yes, celebrate the great quarter. Just remember to service the waterwheel. The crocodiles moved out of the moat a while ago. Install pumps.