
Justin Bartak · AI · April 29, 2026 · 10 min read

The AI-Native Stack You Pick Decides Whether AI Compounds or Stalls

Choose Next.js or rewrite in 2027. The data does not give you a third option.

TL;DR

Every AI coding agent on the market was trained on a public corpus dominated by React. Choosing any other frontend stack for AI native work means paying a 1.5x to 2x velocity tax on every feature, every refactor, every quarter. The math is not subtle. The decision is being made in planning meetings right now, often without anyone realizing the stakes.

There are two ways to build for the AI era.

The first is to choose the framework your team already knows. Angular because the existing app is on it. Vue because the lead engineer prefers it. Rails or Django because they have served the company well for a decade. The team is comfortable. The choice feels safe.

The second is to choose the framework that AI coding agents already know. Next.js with React, deployed on Vercel, integrated with whatever cloud is already underneath. The team learns something new. The choice feels risky.

These two paths look similar in a planning doc. They diverge violently over 18 months. And the divergence is set by a force most leadership teams have not internalized yet: the public training corpus that powers every AI coding agent on the market is not neutral.

The corpus is the constraint

Every AI coding agent in production today (Claude Code, Cursor, Copilot, Windsurf, Cline, Aider) was trained on public code. The composition of that corpus determines how well the agent performs on a given framework. The composition is not a fair fight.

React has 129 million weekly npm downloads. The next-largest frontend framework has 11 million. Stack Overflow’s 2025 developer survey puts React at 44.7%, Angular at 18.2%, Vue at 17.6%. W3Techs reports React at 6.2% of all websites tracked, Vue at 1.0%, Angular at 0.2%. Every AI native company that has shipped a streaming chat interface, a tool calling agent, a RAG pipeline, or a generative UI surface in the last two years built it in React.

There is no prompt that fixes this. No fine-tune that closes the gap. The output of the agent reflects the input. The input is React.

This is not a temporary state. The dominance is widening every quarter. Engineers who rely on AI agents migrate toward stacks where those agents work best. That migration produces more React code, which improves the training data further, which improves agent performance further. The flywheel runs in one direction.

The same feature, four stacks

Build a streaming chat interface that calls a tool. The most common AI feature. The numbers below come from my own bench tests across Claude Code and Cursor with identical prompts on each stack. Your mileage will vary by a few points. The shape of the curve will not.

Next.js with React

15 lines. The agent gets it right on the first try roughly 90% of the time.

Vue with Nuxt

30 to 40 lines. The agent gets it right roughly 70% of the time. Vue is the second-best option for AI native work, with a real, unavoidable velocity penalty of 20 to 30%.

Angular

60 to 80 lines. Two to four iterations to reach equivalent functionality. The decorators, modules, and RxJS bridges are underrepresented in training data. The agent confuses them with deprecated patterns.

Server-rendered legacy stacks

Rails, Django, Laravel, .NET MVC. 100+ lines, or a parallel React surface anyway. Streaming responses are possible on these stacks but are not a first class pattern, and the agents rarely produce them cleanly. Most companies that try end up adding a Next.js surface alongside the existing app, which is the parallel-surface pattern this whole argument leads to.

This is not framework loyalty. It is measurable difference in lines of code, iterations to working state, and ongoing defect rate. Same engineer. Same prompt. Same SDK underneath. Different output because the corpus is different.
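To make the Next.js side of that comparison concrete, here is a sketch of the streaming shape the agents tend to produce on the first try. This is an illustrative simplification, not production code: the model call is simulated with a local async generator, and in a real app the tokens would come from a model SDK.

```typescript
// Sketch of the streaming pattern for a Next.js App Router handler.
// fakeModel stands in for a real model SDK call; it is an assumption
// used so the example is self-contained.
async function* fakeModel(prompt: string): AsyncGenerator<string> {
  for (const token of ["Echo:", " ", prompt]) {
    yield token; // each chunk flushes to the client as it arrives
  }
}

// Shape of a POST route handler: read the prompt, pipe model tokens
// into a ReadableStream, and return the stream as the response body.
export async function POST(req: { json(): Promise<{ prompt: string }> }) {
  const { prompt } = await req.json();
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      for await (const token of fakeModel(prompt)) {
        controller.enqueue(encoder.encode(token));
      }
      controller.close();
    },
  });
  return new Response(stream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```

The point of the sketch is the shape, not the line count: the handler is one small function, and the streaming plumbing is the web-standard `ReadableStream` the framework already understands.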

AI native is not a feature set. It is an architecture.

I have watched this play out at three different companies. The pattern is identical every time.

Year one looks fine. Bolt a chatbot onto the existing stack. Wrap an AI layer around the legacy backend. Demo it to the board. Hit the milestone. The shortcut feels like discipline.

Year two starts breaking. The competitor who took an extra quarter to build on the right foundation is shipping agents that act on user data, generative UI surfaces that adapt in real time, voice interfaces that feel native. The shortcut company is on chatbot v3. Every new AI feature requires more workarounds than the last.

Year three is the rewrite. The foundation problem is no longer hidden. It is the entire problem.

Once the architecture supports streaming responses, every future AI feature inherits that capability for free. Companies that bake it in early add features. Companies that bolt it on late rebuild surfaces.

The companies that win 2027 and 2028 are the ones who built the foundation right in 2026, even when it cost them a quarter of speed on the first deliverable. The foundation choices that compound are specific: streaming first surfaces, typed APIs and MCP servers as the integration layer, a frontend stack the AI ecosystem actually invests in, and parallel AI surfaces rather than bolted on features.

This is the part most leadership teams miss. AI native is not a feature you ship. It is the substrate the entire product runs on. The product does not have AI. The product is AI. You cannot bolt your way there.

The companies that win already chose

Try to name a flagship AI native product launched in the last two years whose primary frontend is Angular, Vue, or a server-rendered legacy stack.

You cannot.

Look at the four companies closest to the work. Anthropic built Claude on React. They train the model. They could pick anything. OpenAI ships ChatGPT on Next.js. They wrote the agents that generate the code. Cursor and Perplexity chose the same stack. The teams with the most freedom to choose anything chose the same thing.

The counter-examples are narrow and prove the point. Some Svelte usage at Bolt.new. A handful of internal tools. Edge cases. Nothing at consumer scale. Nothing where a CEO and a CTO sat down with a clean sheet and said: build the AI native company on Vue. That conversation does not happen.

Engineers at the most respected developer-tools company in the industry chose React when they had unlimited resources to choose anything.

This is not a sample. This is the entire field. The companies closest to the work, with the most resources and the most freedom to choose any framework on the market, all chose the same one. That is not coincidence. That is revealed preference at scale.

What I built on this stack

When we built Taxa, the AI native tax platform that secured $113M in funding, we built it on React, Next.js, Tailwind CSS, Supabase, and Vercel. Not because the stack was trendy. Because every architectural decision we needed to make (streaming output, server actions, typed APIs, edge inference, generative UI) was already a first class pattern in the React ecosystem. We were not fighting the framework to do AI native work. The framework was built for it.

Orbyt, the AI native job search platform I took from idea to launched product in 32 days as a solo build with Claude Code, runs on the same stack. React, Next.js, Tailwind, Supabase, Vercel, Anthropic SDK. The agent generates working code on the first try because the patterns it was trained on are the patterns Orbyt is built on. There is no impedance mismatch. The stack and the agent speak the same language.

That is not an accident. That is the design.

The choice of stack was not a developer preference. It was the only choice that let one person ship a production AI native SaaS in a month. On any other stack, that build is six months minimum. On a server-rendered legacy stack, that build does not happen at all.

The cost in real dollars

Engineers do not sign off on stack changes. Executives do.

The velocity gap on AI assisted development between Next.js and any major non-React stack runs roughly 1.5 to 2x. For a 5 engineer AI team at a fully loaded cost of $250K per engineer, the framework choice is a $500K to $750K per year decision in direct cost, and a multi million dollar decision in time to market terms.

At larger scale these numbers grow linearly. A 50 engineer org running on Angular pays the tax across 50 engineers. A 200 engineer org pays it across 200.
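One back-of-envelope way to arrive at figures in this ballpark is to treat the velocity tax as wasted engineering capacity: a tax of T means the team ships 1/T of what it would on the favored stack, so the remaining fraction of payroll buys no incremental output. The formula and every input below are illustrative assumptions; tune headcount, loaded cost, and tax to your own org, and note that exact figures depend on how you count iteration overhead.

```typescript
// Illustrative cost model for the velocity tax. A tax of T means
// 1/T effective output, so (1 - 1/T) of payroll is wasted capacity.
// All inputs are assumptions, not benchmarks.
function velocityTaxCost(
  engineers: number,
  loadedCostPerEngineer: number,
  tax: number
): number {
  const wastedFraction = 1 - 1 / tax;
  return engineers * loadedCostPerEngineer * wastedFraction;
}

// 5 engineers at $250K fully loaded:
const low = velocityTaxCost(5, 250_000, 1.5);  // ~$417K/yr at a 1.5x tax
const high = velocityTaxCost(5, 250_000, 2.0); // $625K/yr at a 2x tax
```

The same function scales linearly with headcount, which is the point of the paragraph above: a 50 engineer org pays ten times what a 5 engineer org pays.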

129M: weekly React npm downloads

11M: weekly downloads for the next-largest frontend framework

44.7%: React adoption in Stack Overflow's 2025 developer survey

6.2%: share of all tracked websites running React

This is not a developer preference question. It is a CFO question. Most CFOs do not yet realize they are the ones making it.

Add Vercel to the cloud you already run

The framework decision pulls one more decision behind it. Where the AI surface gets deployed.

This is not a Vercel versus cloud conversation. Whether the company runs on AWS, GCP, or Azure, the existing infrastructure keeps doing its job. VPCs, managed databases, IAM, internal services, data warehouses, GPU inference, vector databases, batch processing. None of it moves. The new Next.js AI surface deployed on Vercel sits on top and calls into it through typed APIs and MCP servers.
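The boundary described above can be sketched as a typed contract plus a thin client. Everything in this sketch is hypothetical: the endpoint path, the Invoice shape, and the guard are stand-ins for whatever the legacy system actually exposes, and the fetch function is injectable so the example needs no network.

```typescript
// Hypothetical typed contract for one legacy endpoint. The AI surface
// never touches the legacy database; it only ever sees this shape.
interface Invoice {
  id: string;
  total: number;
  status: "draft" | "sent" | "paid";
}

// Runtime guard: the legacy app stays the system of record, but its
// responses are validated at the boundary before the AI surface uses them.
function isInvoice(value: unknown): value is Invoice {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "string" &&
    typeof v.total === "number" &&
    ["draft", "sent", "paid"].includes(v.status as string)
  );
}

// Thin client: call the existing backend over a typed API and reject
// anything that fails the contract. fetchImpl is injected for testability.
async function getInvoice(
  id: string,
  fetchImpl: (url: string) => Promise<{ json(): Promise<unknown> }>
): Promise<Invoice> {
  const res = await fetchImpl(`/api/legacy/invoices/${id}`);
  const body = await res.json();
  if (!isInvoice(body)) throw new Error("legacy response failed contract");
  return body;
}
```

An MCP server over the same backend is the same idea in a different wire format: a narrow, typed surface over the system of record, rather than direct access to its internals.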

What Vercel adds: streaming on day one, preview deployments per pull request, edge runtime and middleware with zero configuration, AI SDK and AI Gateway co-designed with the platform, Vercel aware skills in Claude Code and Cursor. Reproducing this on raw cloud primitives is several months of platform engineering work at $250K per engineer fully loaded.

Paying Vercel is dramatically cheaper than paying a platform engineer to rebuild Vercel poorly.

The cloud team’s expertise stays valuable. The infrastructure spend stays productive. Nothing gets thrown away. The math only flips at very large scale, tens of millions of monthly visitors, and at that scale OpenNext or SST can migrate the surface to the existing cloud cleanly. That is a year-three problem, not a year-one problem.

The recommendation

Build the AI native layer in Next.js with React, TypeScript, and Tailwind, deployed on Vercel, integrated with the cloud you already run. Keep the existing application as the system of record. Expose it through typed APIs and MCP servers. Build new AI experiences as a parallel Next.js surface. Earn the migration over time, or keep the legacy app forever as the system of record.

Standardize the team on Claude Code or Cursor for AI assisted development on the new surface, where the training data leverage is highest. Let engineers who want to learn React migrate to the AI work. Let engineers who prefer the existing stack keep maintaining the legacy product. No one is forced.

Nothing gets thrown away. Everything compounds.

Closing

I have made this argument in living rooms, board rooms, and over Slack DMs with engineering leaders for the better part of two years. The pattern is always the same. The team feels like it should be moving faster than it is. The CTO senses the velocity gap but cannot articulate why. The CFO sees the AI investment growing without proportional output. None of them are wrong. They are all running into the same wall, and the wall is the foundation.

The framework decision is now an AI productivity decision.

Build on intelligence, or decorate with it. There is no middle ground. The decision you make this quarter decides whether your AI work compounds for the next decade or stalls inside the next 18 months.

The stack is not a preference. It is the floor.

Download the full paper

AI Transformation to AI-Native: There Is Only One Clear Choice, and the Decision Is Being Made Right Now

The longer paper goes through the data line by line, the comparisons across all four major stacks, the full Vercel plus cloud integration model, and the common questions teams raise when they consider this shift.

See this in practice: Orbyt, built solo in 32 days and Taxa, the AI native tax platform.


Justin Bartak

4x founder and VP of AI. $383M+ in enterprise value delivered across regulated fintech, tax, proptech, and CRM platforms. Recognized by Apple. Built Orbyt solo in 32 days with Claude Code. Founder of Purecraft.