Carmine Paolino makes a compelling case that Ruby is the best language for building AI applications in 2026 — not because of ML training (that’s Python’s domain), but because modern AI development is just HTTP calls, and everything around it is web engineering.
The core argument: LLM integration is an HTTP call. What matters is everything around it — streaming responses, persisting conversations, tracking costs, switching providers. That’s where Rails shines.
The comparison of RubyLLM vs LangChain vs Vercel AI SDK is striking. Ruby’s API reads like prose:
```ruby
RubyLLM.chat.ask "Hello!"
```
Python, by contrast, requires ceremony: specifying providers, wrapping messages in classes, and dealing with response structures that differ across providers (LangChain returns token usage under different keys depending on the model provider, and Gemini doesn’t return it at all).
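To make the provider-switching point concrete, here is a minimal sketch of the RubyLLM call shape staying identical across providers. The model names and environment variable names are illustrative assumptions, and the calls need real API keys to run:

```ruby
require "ruby_llm"

RubyLLM.configure do |config|
  # Keys for whichever providers you want available (illustrative env vars).
  config.openai_api_key    = ENV["OPENAI_API_KEY"]
  config.anthropic_api_key = ENV["ANTHROPIC_API_KEY"]
end

# Switching providers is a one-keyword change; the call shape is the same.
RubyLLM.chat(model: "gpt-4o").ask "Hello!"
RubyLLM.chat(model: "claude-3-5-sonnet").ask "Hello!"
```

The normalization is the point: the application code never branches on which provider answered.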
A few points that stood out:
- Cognitive overhead compounds. Fewer abstractions mean faster onboarding, fewer bugs, and easier debugging at 2AM.
- Rails gives you the rest of the product. Auth, billing, background jobs, streaming UI, persistence — all solved. The streaming chat example with ActiveJob + ActionCable is absurdly concise.
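A hedged sketch of the ActiveJob + ActionCable streaming pattern the article praises, not the article's exact code. It assumes a `Chat` model wired up with RubyLLM's Rails integration (`acts_as_chat`); the job name and channel naming are hypothetical:

```ruby
# Streams an LLM response to the browser chunk by chunk.
# Assumes: class Chat < ApplicationRecord; acts_as_chat; end
class AiResponseJob < ApplicationJob
  queue_as :default

  def perform(chat_id, prompt)
    chat = Chat.find(chat_id)
    # RubyLLM yields partial chunks as they stream back from the provider;
    # each one is broadcast over ActionCable to subscribers of this chat.
    chat.ask(prompt) do |chunk|
      ActionCable.server.broadcast("chat_#{chat_id}", { content: chunk.content })
    end
  end
end
```

The whole streaming pipeline — background execution, persistence of the conversation, and push to the UI — fits in one short job class, which is the conciseness the article is pointing at.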
- Scalability isn’t an issue. LLM workloads are network-bound, not CPU-bound. Ruby’s Fiber-based async handles high concurrency without thread explosion.
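A stdlib-only sketch of why fibers suit this workload: many in-flight "requests" interleave cooperatively on a single thread, with `Fiber.yield` standing in for the moments a real call would be waiting on the network. (Production code would use a fiber scheduler such as the `async` gem rather than this hand-rolled round-robin loop.)

```ruby
# Five simulated network-bound LLM calls, each a fiber that yields
# while "waiting on the network" so the others can make progress.
requests = 5.times.map do |i|
  Fiber.new do
    3.times { Fiber.yield } # stand-in for awaiting network I/O
    "response #{i}"
  end
end

responses = []
# Hand-rolled round-robin: resume each fiber until all have finished.
until requests.empty?
  requests.delete_if do |fiber|
    result = fiber.resume
    responses << result unless fiber.alive?
    !fiber.alive?
  end
end

puts responses.inspect
```

One thread, five concurrent requests, no thread explosion — the same property a fiber scheduler gives you automatically for real socket I/O.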
The social proof is hard to ignore: RubyLLM hit #1 on Hacker News and has racked up ~3,600 stars and 5 million downloads. Teams that migrated from LangChain aren’t going back.