StackUnderflow: Are AI tools changing where and how developers learn?
The question worth asking about AI and developers
Stack Overflow traffic has fallen sharply since ChatGPT launched. Drupal.org issue queue activity has thinned. AI coding assistants now write a meaningful share of production code, and they do it by drawing on the community archives that are going quiet. The question we keep asking ourselves is not whether AI will replace developers. It is whether the next generation of developers will know what this generation learned by working through a public answer thread at 11pm.
That is the shape of the problem, and it is specific to the people we hire, train, and mentor. It is also specific to the platforms we build on. Drupal is a prime example.
What the research says about AI and developer productivity
Abdallah and co-authors analyzed 58 million Stack Overflow posts across the ChatGPT inflection point (Nature Communications, 2025). Post volume declined measurably after late 2022, and the decline is steepest in the beginner-question categories. The community archive is quieter overall, and the conversations that taught the most foundational material are the ones thinning fastest.
DORA's 2025 State of DevOps report tracks a related signal. Respondents reported faster individual throughput with AI assistants, and slightly worse delivery stability. Throughput up, quality flat-to-down. That gap is where the hidden cost of unexamined AI adoption lives.
The single strongest counterexample to "AI makes every developer faster" comes from METR, which ran a randomized controlled trial in mid-2025 on experienced open-source contributors. Developers believed the tools made them 24% faster. Measured output was 19% slower. The sample is narrow (16 developers, real repositories, large codebases), and the authors are careful about generalizing. The gap between perceived speedup and measured slowdown is the uncomfortable data point we keep returning to.
The feedback loop that trained both AI and developers
Every large model used for coding was trained on a corpus that included Stack Overflow, the Drupal issue queue, GitHub READMEs, code review comments, and the change records and release notes from the major frameworks. That training corpus exists because a generation of developers shared their work publicly. They wrote bad first attempts. Other developers corrected them, sometimes gently, sometimes not. The archive preserved the correction, not only the answer.
AI can answer your question because that loop ran for twenty years.
Here is the part that does not get said often enough. The loop is what trained the humans too. You did not get good at Drupal by reading the docs front to back. You got good by posting a question that embarrassed you slightly, getting a reply from someone who had already made the mistake, and remembering the exchange for the next five years.
If the loop closes, the AI loses its source of new material. The humans lose the training ground. Those are two different problems, and the second one is the one the research is starting to document.
The kinds of developer learning AI breaks first
The form of expertise most at risk is pattern recognition. Pattern recognition is the accumulated library of "I have seen this shape before" that senior engineers draw on when a bug does not match any textbook. It requires repetitions, thousands of them, and each repetition needs a feedback signal strong enough to attach to memory. Asking AI, pasting the answer, and shipping is a weak signal. You get the right code without ever forming the pattern.
The same effect shows up in adjacent crafts. Accessibility is the clearest example. ARIA patterns, focus management, and screen-reader behavior all live in a practitioner's head as pattern memory, not as reference lookups. Design systems have the same shape: design tokens, component contracts, and the small decisions that keep a library coherent erode in the same way, because they are learned in the same way.
What is being lost is not "how to write this piece of code." It is the ability to look at a stranger's codebase and feel, in the first ten minutes, which parts are load-bearing.
The Anthropic study on junior engineers
Anthropic's early-2026 study of 52 engineers, mostly junior, is the closest thing we have to a direct measurement. Researchers Judy Hanwen Shen and Alex Tamkin ran a randomized controlled trial in which participants learned a Python asynchronous-programming library (Trio) either with AI assistance or by hand-coding, then took a comprehension quiz. The AI-assisted group averaged 50%. The hand-coding group averaged 67%. The 17-point gap was largest on debugging questions, the place where the inability to reason through broken code shows up first. Participants using AI finished about two minutes faster; the speedup was not statistically significant.
Seventeen points on a comprehension quiz is not a productivity complaint. It is a knowledge gap. The engineers who leaned on AI to learn the library did not form the understanding that makes debugging possible. The study does not claim they are bad engineers. It reports that their foundation on the material is thinner than the cohort that learned it by hand, and the thinness shows up on the kind of question that predicts long-term growth.
The finding is uncomfortable because the business case for letting juniors use AI freely is so strong. They ship faster. Their first-month output looks better. The cost is invisible for six to twelve months, and then it appears all at once, in the first hard bug they cannot reason through.
Why Drupal is the canary in the AI coal mine
Drupal is a good early warning system for this problem because Drupal rewards accumulated knowledge more than almost any other platform we work on. The hook system, the cache API, the configuration-management workflow, the way Views and Entity API compose: these are not things a developer reads about once and retains. They are things you learn by watching a dozen sites fail in small ways and accumulating the instinct for why.
Two places in the Drupal ecosystem carry that accumulated knowledge more than anywhere else. The first is the issue queue on drupal.org, where a maintainer explains in a contrib-module ticket why a proposed patch breaks under a case the submitter did not anticipate. The second is contrib module review culture, where core committers and module maintainers discuss in public whether a pattern is idiomatic. Those two spaces are where the "right way" in Drupal actually lives. Neither of them is in the AI's training corpus in any organized way, and both are quieter than they were three years ago.
Ironstar's 2025 Drupal Developer Survey reports overall AI adoption at 78%, up from 50% the year before. The survey does not break usage down by seniority, but our hiring and review experience does. Our senior developers at Square360 use AI less for creating code from scratch and more for enhancement and debugging, and they trust it more than the junior developers we've worked with do. They are right to.
That is not a contradiction. Seniors use AI less because they know when it is wrong. They trust it more because they know what to ask for, and they would have caught the error either way. Juniors use it more and trust it less, and the combination of heavy use plus low confidence is a predictor of checked-in code that nobody fully understands.
We saw the shape of this on our own floor recently. One of our senior programmers spent many hours diagnosing why a procedural hook had stopped working. The hook had been generated by an AI assistant, and the code looked right. The problem was that Drupal 11 introduced an object-oriented, attribute-based hook system using the #[Hook] PHP attribute, and procedural hooks are deprecated, with removal expected in Drupal 12. The AI had written the hook in the way that used to work. It had no way to know better, because the accumulated community knowledge on the new form is still thin: the new form has not lived in public long enough to generate the thousands of issue-queue exchanges that would train the next model. A senior engineer with pattern memory caught it eventually. A junior engineer without that memory would have filed a ticket, or worse, reinstated the deprecated form and moved on.
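If you have not hit the change yet, here is a minimal before-and-after sketch. The module name (example_module), the class name, and the choice of hook_form_alter() are hypothetical, chosen only for illustration; the class-based version follows the Drupal 11.1 convention of a hook class under the module's src/Hook namespace, marked with the #[Hook] attribute.

```php
<?php

// Old form: a procedural hook in example_module.module. This is the shape
// most of the public training corpus still has, and the shape now headed
// for deprecation.

use Drupal\Core\Form\FormStateInterface;

/**
 * Implements hook_form_alter().
 */
function example_module_form_alter(array &$form, FormStateInterface $form_state, string $form_id): void {
  // Module-specific form alterations go here.
}
```

And the attribute-based equivalent that Drupal 11.1 expects going forward:

```php
<?php

// New form: an object-oriented hook class, e.g. src/Hook/ExampleModuleHooks.php,
// using the #[Hook] attribute introduced in Drupal 11.1.

declare(strict_types=1);

namespace Drupal\example_module\Hook;

use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Hook\Attribute\Hook;

/**
 * Hook implementations for the example_module module.
 */
class ExampleModuleHooks {

  /**
   * Implements hook_form_alter().
   */
  #[Hook('form_alter')]
  public function formAlter(array &$form, FormStateInterface $form_state, string $form_id): void {
    // The same alterations, now in a class core can discover and autowire.
  }

}
```

The two do the same thing today. The point of the anecdote is that the corpus an assistant learned from is overwhelmingly the first shape.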
This is the through-line from our December piece on the four pieces of dumb stuff developers do when nobody is paying attention. The second category in that piece was the Drupal dabblers, generalists who ship in Drupal without deep knowledge of the hook system, the cache API, or the theme layer, and whose work breaks during updates or bloats the database. What was an individual-developer story six months ago is now a platform-level trend. The dabblers are scaling. AI is the reason.
The compounding costs of "ask AI, ship, move on"
There are three real costs to the "ask AI, ship, move on" pattern, and they compound.
The first is margin and velocity. Over a three- to six-month horizon, a team that leans heavily on AI-generated code without review slows down. The METR finding, perceived 24% speedup and measured 19% slowdown, is the single-task version. At a team level, the slowdown comes from time spent diagnosing code that nobody on the team fully wrote. Margin narrows because senior time is pulled into debugging work that should have been caught earlier.
The second is the compound time investment. The loop that produces a senior engineer runs five to ten years of pattern accumulation. Every time a junior skips the struggle, the fifteen minutes of confusion that would have written a lesson into memory, the accumulation clock resets on that pattern. A team trained this way for two years has engineers with two years of tenure and six months of pattern memory. That gap is invisible on the CV and obvious on the hardest ticket of the quarter.
The third is maintenance amnesia, and it is the one that shows up latest. Code that a developer copied out of an AI conversation does not register in that developer's memory the way code they wrote themselves does. Six months later, when something breaks, the developer reads their own code as if it is a stranger's. They do not remember why they chose that cache tag, because they did not choose it. The model did. The site becomes harder to debug for the person who is supposed to know it best. Multiply that across a site with five years of accumulated choices and the maintenance cost is not linear. It compounds.
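To make the cache-tag point concrete, here is a hypothetical sketch of the kind of choice that goes missing. The tags, context, and max-age below are invented for illustration; the point is that each line of cache metadata is a decision someone made for a reason.

```php
<?php

// A hypothetical render array for a "latest updates" block. Every line of
// #cache metadata is a judgment call: invalidate when node 42 changes or
// when the set of nodes changes, vary the result by the viewer's
// permissions, and never let it live longer than an hour.
$build = [
  '#markup' => 'Latest update: ...',
  '#cache' => [
    'tags' => ['node:42', 'node_list'],
    'contexts' => ['user.permissions'],
    'max-age' => 3600,
  ],
];
```

If the reason node_list is in that list lives only in a chat transcript, the developer maintaining this block six months from now is exactly the stranger described above.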
Three practices for using AI without losing pattern memory
The recommendation is not "do not use AI." We use AI. Every engineer at Square360 has access to it and uses it daily. The recommendation is that using AI well is a professional skill, and the skill has to be built on top of foundations that are still formed the old way. AI should be one tool in the set, not a crutch.
Three things we are actually doing.
We keep a human in the loop at the merge boundary. Developers at every level are required to explain AI-generated code back to a reviewer before it merges. Not defend, not justify. Explain, in the developer's own words, what the code does and why the alternatives are worse. That one practice does more to rebuild pattern memory than any training course we have tried.
We treat community participation as professional development, not optional overhead. Posting a question to an issue queue, reviewing a patch in a contrib module, reading through the change records for a new minor release: these are billable, in the sense that they earn a line on the tenure ladder. The cost of that policy is visible in the short term. The benefit is visible across years.
We prefer refactoring existing code over asking AI to regenerate it. The refactor forces the developer to read what is already there, which is the exact activity that AI assistance lets them skip. Evolution, not revolution, applies to code review the same way it applies to platform choices.
The through-line from our December piece on the Drupal dabblers still holds. The problem is not AI. The problem is a training loop that used to produce senior engineers and no longer reliably does. AI is the accelerant. Our job, as the people who hire, train, and build on top of Drupal, is to notice which parts of the old loop are worth keeping, and to keep them running deliberately, even when the easy path is quieter and faster.
The measure of a good decision about AI, for us, is whether it still looks like a good decision five years from now. On the current evidence, "use it, explain it, do not let it replace the struggle" is the position that holds up.
Sources
- Abdallah et al., "Quantifying the Impact of Generative AI on Online Community Knowledge Sharing," PMC / Nature Communications, 2025. Analysis of 58 million Stack Overflow posts across the ChatGPT inflection point.
- METR, "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity," July 2025. RCT, 16 developers, perceived +24% speedup versus measured −19% slowdown.
- Shen, J.H. and Tamkin, A., "How AI assistance impacts the formation of coding skills," Anthropic Research, January 2026. Randomized controlled trial, 52 engineers (mostly junior), 17-point gap on a comprehension quiz after learning Python's Trio library, largest gap on debugging questions.
- DORA, State of DevOps Report 2025. Individual throughput up, delivery stability slightly down in AI-heavy teams.
- Ironstar, 2025 Drupal Developer Survey. Overall AI adoption 78%, up from 50% in 2024. Seniority-level AI usage breakdowns in the piece are Square360's own hiring-and-review observation, not survey data.
- Clutch and CodeRabbit, 2025 industry reports on AI adoption patterns in agency and product teams.
- DevClass reporting on Stack Overflow traffic and engagement trends, 2023–2025.
- Square360, "Four Pieces of Dumb Stuff Developers Do When You Aren't Paying Attention," December 2025. The companion piece. See the "Drupal dabblers" section for the through-line to the AI discussion.