The big picture ✨
Margaret-Anne Storey recently published a paper that's been circulating in engineering circles, and it's worth your attention. The short version: as AI writes more of your code, the risks that matter most are no longer in the code itself. They've moved somewhere harder to see, harder to measure, and much harder to fix.
The paper introduces a Triple Debt Model: three kinds of debt that now determine whether a software team is actually healthy or just looks healthy. You probably know one of them. The other two are more interesting.
The Three Debts 💸

Technical debt is the one everyone knows. It's annoying and it slows you down, but it's visible: you can find it, schedule it, and fix it over time.
Cognitive debt, also known as comprehension debt, is the erosion of shared understanding across a team. The term has surfaced in multiple research studies and was recently popularized by Addy Osmani, Director at Google Cloud AI. It's what happens when engineers stop knowing how the system they're responsible for actually works. AI writes clean, well-formatted code at a pace no team can fully review. The code looks fine. The tests pass. But ask anyone on the team to walk you through a critical module and you'll get vague answers. That uncertainty is cognitive debt accumulating.
Intent debt is the missing rationale nobody wrote down. Not just what the code does, but why it does it that way. What alternatives were considered. What constraints shaped the decision. What the system is actually trying to accomplish. Every time that context stays in someone's head instead of getting captured in a document, an ADR, or a spec, that's intent debt.
Here's the thing: while AI can help reduce technical debt, it actively accelerates the other two. And unlike technical debt, they're much harder to detect.
How They Feed Each Other 🔁
These three debts don't accumulate independently; they compound.
When intent isn't documented, new team members can't form accurate mental models of the system. That's cognitive debt caused by intent debt. When engineers don't understand the system, they can't document the decisions and reasoning future developers will need. That's intent debt caused by cognitive debt. When engineers are confused about what the system is doing, they make poor implementation decisions. That's technical debt caused by cognitive debt. And messier code is harder to reason about, which erodes understanding further. More cognitive debt on top of more technical debt.
It's a feedback loop. Each type makes the others worse. And AI-assisted development turns up the speed on all of it simultaneously.
What It Actually Looks Like 🫣
In her research, Margaret-Anne Storey describes a group of student developers who hit a wall around week eight. Simple changes started breaking unexpected things. The code was fine, tests were passing, everything looked clean. The real problem was that no one on the team could explain why design decisions had been made, or how different parts were supposed to work together. The shared mental model had evaporated.
The warning signs to watch for:
Engineers avoid touching certain parts of the codebase because nobody's quite sure what might break
Tribal knowledge concentrates in one or two people
Onboarding takes longer than it should, even when documentation exists
Post-incident reviews lack clarity
Simple changes keep producing unexpected results
None of these look like engineering metrics problems. They feel like communication or process problems. And this isn't just a student problem. It's exactly what can happen at scale when an organization goes deep on AI without asking what gets lost along the way.
The Implications for Your Team 🤔
Speed becomes a liability. The whole promise of AI-assisted development is that you ship faster. And you do, until you don't. Teams that have been deep in AI-assisted development for 12–18 months report that velocity quietly starts reversing: debugging sessions stretch, simple changes take longer. The codebase grew fast, but comprehension didn't keep up, and now every change carries a risk that's hard to measure.
Incidents get harder to explain. When cognitive debt is high, post-incident reviews become exercises in uncertainty. Nobody knows exactly why the system behaved the way it did, because nobody fully understood the system in the first place. You can fix the symptom, but it's hard to address the root cause when you don't understand the architecture.
AI agents make intent debt critical. This is the part most teams haven't fully reckoned with yet. When you bring in AI agents to help build and maintain your codebase, they need to understand what the system is for, not just what it currently does. Without captured goals, constraints, and design rationale, agents may confidently optimize for the wrong objective. Refactor something that looks inefficient but exists for a reason nobody recorded. Generate code that satisfies the prompt while violating an architectural principle that was never written down. The less intent is documented, the less useful AI agents become over time.
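To make "captured intent" concrete, here's a minimal sketch of what a machine-readable version could look like: a small per-module manifest an agent pipeline loads before it touches anything. The INTENT.json convention, the field names, and the services/billing path are all hypothetical, not from the paper:

```python
import json
from dataclasses import dataclass
from pathlib import Path

@dataclass
class ModuleIntent:
    """Captured rationale for one module: what it's for, not just what it does."""
    purpose: str                      # what the module is trying to accomplish
    constraints: list[str]            # invariants an agent must not violate
    rejected_alternatives: list[str]  # approaches considered, and why they lost
    non_goals: list[str]              # things this module deliberately does not do

def load_intent(module_dir: str) -> ModuleIntent:
    """Read INTENT.json from a module directory (a hypothetical convention)."""
    raw = json.loads(Path(module_dir, "INTENT.json").read_text())
    return ModuleIntent(**raw)

# Before proposing a refactor, an agent pipeline (or a curious human) loads intent:
intent = load_intent("services/billing")  # illustrative path
print(f"Purpose: {intent.purpose}")
for rule in intent.constraints:
    print(f"Must hold: {rule}")
```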
What Companies Can Actually Do 💡
1. Write the intent before you write the prompt. Before any AI generation, the engineer writes a spec: what this is for, what constraints apply, what it shouldn't do. That spec becomes what the PR gets reviewed against, and it stays in the codebase as the rationale future developers and agents need. Some teams have made this a mandatory PR field: if AI was used, the engineer fills in what was generated, what approach was taken, and what edge cases were tested. The PR can't merge without it. It forces actual understanding before approval, and it creates an intent trail that survives the person who wrote it.
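Here's one way that merge gate could be wired up: a minimal CI sketch, assuming GitHub Actions, a hypothetical ai-assisted label, and hypothetical section headings (adapt them to whatever your PR template actually uses):

```python
import json
import os
import sys

# Sections the PR description must contain when AI was used.
# These heading names are a hypothetical convention, not a standard.
REQUIRED_SECTIONS = [
    "## What was generated",
    "## Approach taken",
    "## Edge cases tested",
]

def main() -> None:
    # GitHub Actions exposes the webhook payload at GITHUB_EVENT_PATH.
    with open(os.environ["GITHUB_EVENT_PATH"]) as f:
        event = json.load(f)
    pr = event["pull_request"]

    # Only enforce the intent fields on PRs labeled as AI-assisted.
    labels = {lbl["name"] for lbl in pr.get("labels", [])}
    if "ai-assisted" not in labels:
        return

    body = pr.get("body") or ""
    missing = [s for s in REQUIRED_SECTIONS if s not in body]
    if missing:
        print(f"PR is labeled ai-assisted but is missing: {missing}")
        sys.exit(1)  # non-zero exit marks the required check as failed

if __name__ == "__main__":
    main()
```

Registered as a required status check, this keeps the merge button grey until the engineer has written down what they actually understand.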
2. Run a comprehension review as a sprint ritual. Once per sprint, rotating pairs go back through the AI-heavy PRs from the previous sprint. Not to re-approve them, but to actually understand them. They add "why" comments, and anything nobody can explain gets flagged for a rewrite or a deeper walkthrough. This is deliberately outside the critical path so it doesn't block shipping. It takes a few hours per sprint, and the teams doing it consistently report it's where the most useful architectural conversations happen.
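Assembling that review queue can be a one-screen script. A sketch against GitHub's search API, again assuming a hypothetical ai-assisted label and a two-week sprint:

```python
import json
import os
import urllib.parse
import urllib.request
from datetime import datetime, timedelta, timezone

REPO = "your-org/your-repo"   # hypothetical; point at your own repo
SPRINT = timedelta(days=14)   # adjust to your sprint length

def ai_heavy_prs() -> list[dict]:
    """Return PRs merged in the last sprint carrying the ai-assisted label."""
    since = (datetime.now(timezone.utc) - SPRINT).strftime("%Y-%m-%d")
    query = f"repo:{REPO} is:pr is:merged label:ai-assisted merged:>={since}"
    url = "https://api.github.com/search/issues?q=" + urllib.parse.quote(query)
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["items"]

# Print the comprehension-review queue for the rotating pair:
for pr in ai_heavy_prs():
    print(f"#{pr['number']}  {pr['title']}  {pr['html_url']}")
```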
3. Tie module ownership to comprehension. In AI-generated codebases, nobody wrote anything, so by default nobody really owns anything. Someone just responds to the page. Some teams are now assigning a named human owner to every significant module, with a simple expectation: that person can explain it to anyone on the team at any time. Not "they wrote it," not "they're on call for it"; they understand it. That personal expectation is the comprehension gate, and it works because it's harder to delegate than a process checklist.
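The expectation is personal, but the bookkeeping can still live in CI. A minimal sketch, assuming a hypothetical owners.json at the repo root and modules as top-level directories under services/:

```python
import json
import sys
from pathlib import Path

# owners.json is a hypothetical convention: a flat map from module path to the
# named human who can explain it, e.g. {"services/billing": "priya"}.
OWNERS: dict[str, str] = json.loads(Path("owners.json").read_text())

def unowned_modules(root: str = "services") -> list[str]:
    """Return significant modules (here: top-level dirs under root) with no owner."""
    return sorted(
        str(p) for p in Path(root).iterdir()
        if p.is_dir() and str(p) not in OWNERS
    )

if __name__ == "__main__":
    missing = unowned_modules()
    if missing:
        print("Modules nobody has claimed:", ", ".join(missing))
        sys.exit(1)  # fail CI until every module has a named human
```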
The Bottom Line
The teams that are going to win with AI aren't the ones that generated the most code. They're the ones that stayed in control of what they built, and that maintained genuine understanding of their systems even as the volume of AI-generated code went up.
The research shows that staying in control requires deliberate practices, structural changes to how code gets reviewed and documented, and a willingness to treat comprehension as a deliverable rather than an assumption.
Technical debt you can refactor. Cognitive and intent debt are harder. The teams thinking about this now are the ones who won't be caught off guard later.
The fastest-growing repo on GitHub is run by a one-person team!
OpenClaw went from 9K to 185K GitHub stars in 60 days — the fastest-growing repo in history.
Their docs? One person, plus Claude. The docs scaled to the top 1% of all Mintlify sites, shipping 24 documentation updates a day.
Till next time,



