Two things landed in the JavaScript ecosystem recently that I think signal a real shift in how we’ll build frontend products going forward: TanStack AI and Vite+. They solve different problems, but they share the same underlying frustration: the tools we’ve been given weren’t built for the scale and complexity of what we’re actually shipping today.
Part 1: TanStack AI – Stop Picking a Side
Every time I’ve integrated an LLM into a product, the process has felt like a hostage negotiation. Use OpenAI’s SDK directly, and you’re now married to their response shapes and error formats. Pick Vercel AI SDK, and you’re quietly nudging yourself toward a hosting provider. Pick LangChain and… you know.
TanStack AI, which hit alpha in December 2025, is a direct response to this. The team calls it “the Switzerland of AI tooling”: neutral, open source, no platform to migrate to. What they’ve actually built is the AI SDK I wish had existed two years ago.
The core idea is simple: your application logic should have zero opinion about which LLM it’s talking to. You write against TanStack AI’s primitives, and you plug in a provider via adapters: openaiText, anthropicText, geminiText, ollamaText. Switching providers means changing one import. Your streaming logic, your tools, your chat state – none of it changes.
import { chat } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'

const result = chat({
  adapter: openaiText('gpt-4o'),
  messages: [{ role: 'user', content: [{ type: 'text', content: 'Summarize this.' }] }],
  temperature: 0.6,
})

for await (const chunk of result) {
  process.stdout.write(chunk)
}
This sounds obvious, but I’ve spent non-trivial engineering time rewriting AI integration layers when a product decision changes the provider. That shouldn’t happen.
Type safety that actually goes all the way down
Most AI SDKs give you typed request/response shapes. TanStack AI gives you per-model, per-provider type safety, including model-specific options. If you switch from a reasoning model to a standard one and forget to remove a reasoning-only option, TypeScript flags it at compile time – not in production at 2am.
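To make the idea concrete, here’s a self-contained sketch of how per-model option typing can be modeled with conditional types. This is not TanStack AI’s actual type machinery – the model names and option shapes below are illustrative assumptions – but it shows the kind of compile-time check described above.

```typescript
// Sketch of per-model option typing. Model names and option shapes
// here are illustrative assumptions, not TanStack AI's real types.
type ReasoningModel = 'o3-mini'
type StandardModel = 'gpt-4o'

// The allowed options depend on which model you pick.
type OptionsFor<M> = M extends ReasoningModel
  ? { temperature?: number; reasoningEffort?: 'low' | 'medium' | 'high' }
  : { temperature?: number }

function makeRequest<M extends ReasoningModel | StandardModel>(
  model: M,
  options: OptionsFor<M>,
): { model: M; options: OptionsFor<M> } {
  return { model, options }
}

// OK: reasoning option on a reasoning model.
const a = makeRequest('o3-mini', { reasoningEffort: 'high' })

// Compile-time error if uncommented: 'reasoningEffort' is not a valid
// option for 'gpt-4o', so the mistake never reaches production.
// const b = makeRequest('gpt-4o', { reasoningEffort: 'high' })

console.log(a.model) // "o3-mini"
```

The payoff is that a model swap becomes a type-checked refactor instead of a runtime surprise.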
Tools as first-class citizens
The toolDefinition() pattern lets you define a tool once with an input/output schema, then wire it to a server or client implementation. When the AI triggers the tool, the SDK executes it automatically: no manual routing, no dispatch boilerplate. There’s also a built-in tool approval flow for human-in-the-loop workflows, and support for client-side tools that run in the browser.
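The pattern is worth seeing in miniature. The following is a self-contained sketch of schema-plus-implementation tool definitions with automatic dispatch – the function names mirror the article’s vocabulary, but this is not the actual TanStack AI API:

```typescript
// Sketch of the tool-definition pattern: the tool's contract and its
// implementation live in one place, and a dispatcher routes calls by
// name. Illustrative only, not TanStack AI's real API.
type Tool<In, Out> = {
  name: string
  description: string
  execute: (input: In) => Promise<Out>
}

function toolDefinition<In, Out>(tool: Tool<In, Out>): Tool<In, Out> {
  return tool
}

const getWeather = toolDefinition({
  name: 'getWeather',
  description: 'Look up current weather for a city',
  execute: async (input: { city: string }) => ({ city: input.city, tempC: 21 }),
})

// When the model emits a tool call, the SDK finds the matching
// implementation and runs it. A minimal dispatcher looks like this:
async function dispatch(
  tools: Tool<any, any>[],
  call: { name: string; input: unknown },
) {
  const tool = tools.find((t) => t.name === call.name)
  if (!tool) throw new Error(`Unknown tool: ${call.name}`)
  return tool.execute(call.input)
}

dispatch([getWeather], { name: 'getWeather', input: { city: 'Oslo' } }).then(
  (result) => console.log(result.city, result.tempC),
)
```

This is exactly the boilerplate the SDK absorbs: you never write the `dispatch` half yourself.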
The React hook experience
The useChat hook from @tanstack/ai-react handles the state you’d otherwise wire up yourself: message history, streaming updates, loading state, optimistic updates. You point it at a fetchServerSentEvents connection adapter and it manages the SSE protocol automatically. Clean, minimal surface area.
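To appreciate what the hook absorbs, here is roughly the state machine you’d otherwise hand-roll: message history, a streaming assistant message, and a loading flag. This is self-contained illustrative plumbing, not TanStack code.

```typescript
// A sketch of the chat state that useChat manages for you. Each SSE
// chunk appends to the in-progress assistant message. Illustrative
// only, not @tanstack/ai-react internals.
type Message = { role: 'user' | 'assistant'; content: string }

class ChatState {
  messages: Message[] = []
  isLoading = false

  sendUserMessage(content: string) {
    this.messages.push({ role: 'user', content })
    // Start an empty assistant message to stream into.
    this.messages.push({ role: 'assistant', content: '' })
    this.isLoading = true
  }

  // Called once per streamed chunk (e.g. per SSE event).
  appendChunk(chunk: string) {
    this.messages[this.messages.length - 1].content += chunk
  }

  finishStream() {
    this.isLoading = false
  }
}

const chat = new ChatState()
chat.sendUserMessage('Summarize this.')
for (const chunk of ['A ', 'short ', 'summary.']) chat.appendChunk(chunk)
chat.finishStream()

console.log(chat.messages[1].content) // "A short summary."
```

Add re-renders, error handling, and optimistic updates on top, and you see why handing this to a hook is worth it.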
One honest caveat
TanStack AI is still in alpha. The team shipped multiple internal architecture overhauls in just the first two weeks post-launch. I wouldn’t put it in a production-critical path today, but the direction is right, and the team has a proven track record with TanStack Query, Router, and Virtual. I’m prototyping with it now and planning to adopt it seriously once it stabilizes.
Part 2: Vite 8 and Vite+ – A Toolchain Rethink
Two things happened within 24 hours of each other in mid-March 2026, and they deserve to be understood as a single story.
On March 12, Vite 8 went stable. On March 13, VoidZero dropped the alpha for Vite+. Both are worth paying attention to, but for different reasons.
Vite 8: The bundler problem is finally solved
Vite has always had an awkward internal split: esbuild for fast development transforms, Rollup for production builds. Two separate pipelines, two plugin systems, and a growing pile of glue code to keep their behavior aligned.
Vite 8 collapses this. It ships Rolldown, a Rust-based bundler built by the VoidZero team, as its single unified bundler for both dev and production. The numbers are not marketing fluff. From the official Vite 8 release and real-world reports:
- Rolldown is 10-30x faster than Rollup and matches esbuild’s performance
- Linear’s production builds dropped from 46 seconds to 6 seconds
- Ramp reported a 57% reduction in build time
- Beehiiv reported a 64% cut in build duration
For large codebases, this changes the feedback loop in a meaningful way. The migration path is also deliberately smooth: Rolldown supports the same plugin API as Rollup, so most existing Vite plugins work out of the box with Vite 8.
Beyond the bundler, Vite 8 ships some long-wanted quality-of-life improvements: native tsconfig paths support via resolve.tsconfigPaths: true (no plugin needed), integrated Devtools for bundle analysis and module graph debugging, browser console forwarding to the dev server terminal, and Wasm SSR support.
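For the tsconfig paths change, the config delta is small. A sketch of what it looks like, assuming the option name from the release notes (check the official Vite 8 docs for the exact shape):

```typescript
// vite.config.ts – assuming the Vite 8 option name from the release
// notes; previously this required the vite-tsconfig-paths plugin.
import { defineConfig } from 'vite'

export default defineConfig({
  resolve: {
    // Resolve imports using the "paths" mapping in tsconfig.json natively.
    tsconfigPaths: true,
  },
})
```

One less plugin to version and keep compatible is a small win on its own, but it compounds across a toolchain.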
Vite+: One CLI for your whole toolchain
Vite+ is a separate product from Vite, built by VoidZero on top of it. It’s a unified CLI that wraps Vite 8, Vitest, Oxlint, Oxfmt, Rolldown, and tsdown: one binary (vp), one config file.
Here’s the setup most of us have right now: Vite for dev and build, Vitest with its own config, ESLint with a pile of plugins, Prettier with its own config, maybe a custom library bundler, Turborepo or Nx for monorepo tasks. That’s five or six config files, each with separate versioning, each needing to stay compatible with the others.
Vite+ collapses all of it into commands like vp dev, vp build, vp test, vp lint, vp fmt, vp check. It also manages your Node version and package manager automatically. The goal is that vp is the only tool you need to think about.
One number worth highlighting: Oxlint (the linter Vite+ uses) runs 50-100x faster than ESLint. On large repos with thousands of files, this is not a rounding error.
The business model question
Vite stays MIT. Vite+ is a separate commercial product with an open-source core. Evan You has been clear: revenue from Vite+ funds the open-source work underneath it, and open-source frameworks (TanStack Start, SvelteKit, etc.) can use Vite+ for free in their own development workflows.
I think this is healthier than the alternative, which is volunteer-run tooling that quietly burns out maintainers. If Vite+ makes the ecosystem sustainable, I’m for it.

How They Fit Together
These two things don’t directly integrate, but they sit on the same stack naturally. Vite 8 / Vite+ handles your build toolchain. TanStack AI handles the LLM integration layer inside the app you’re building. Both are framework-agnostic, strongly typed, and explicitly anti-lock-in.
A project using both might look like: Vite+ for the full dev/test/lint/build pipeline, TanStack AI for a product search assistant or recommendation feature with useChat in React and adapters pointing at whichever provider you’re evaluating this week. No rewrite required when that evaluation changes.
Where I’d Start
If you’re adding AI features to a React/TypeScript app and tired of being coupled to a specific provider: start with TanStack AI now. Prototype with it, and plan to adopt seriously once it stabilizes out of alpha.
If you’re on Vite and suffering slow production builds: upgrade to Vite 8 today. The migration is low friction and the performance gains are immediate.
If you want to consolidate your entire toolchain into one well-maintained setup: try Vite+ alpha. For new projects especially, it’s the cleanest starting point I’ve seen in the JS ecosystem in a long time.
Neither of these is a silver bullet. But both represent the ecosystem finally building tools that match the problems we’ve been dealing with for years.
