Everyone is talking about AI agents right now. The pitch is seductive: a fleet of autonomous bots that code, research, and schedule meetings while you sleep. It sounds like the ultimate productivity hack.
As a senior engineer who runs a dev shop in Bangkok, I see the fallout of this hype daily. Clients come to me asking for a 'multi-agent system' to handle their internal logistics or customer support. They want LangChain, they want vector databases, and they want it to talk to their Slack.
I usually talk them out of it.
Don't get me wrong. I use AI every day. It's integral to my stack. But there is a massive difference between integrating an LLM into a feature and unleashing an autonomous agent into your production environment.
Here is why I am skeptical of agents, and what I build instead.
The Hallucination Tax
The biggest issue with agents is non-determinism. When you write a standard Next.js API route, you expect input A to result in output B every single time. That reliability is the foundation of the web.
Agents break that contract. They are probabilistic. You ask an agent to book a meeting, and it might get stuck in a loop trying to find a time slot that doesn't exist, or it might hallucinate a Zoom link.
In a production app, especially for the enterprise clients I work with at Thea Tech Solutions, 'mostly correct' is unacceptable. If an agent overwrites a customer record in Supabase with bad data, I don't just have a bug; I have a data integrity crisis.
The 'Tool' Paradox
Most 'agentic' frameworks today aren't actually agents. They are just wrappers around function calling.
You give an LLM access to a set of tools—like a database cursor or an API endpoint—and a prompt that says 'Use these tools to solve X.' This isn't intelligence; it's just a very expensive, error-prone script.
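Stripped of framework branding, that loop is just a dispatch table. Here is a minimal sketch of the pattern, with illustrative tool names and handlers of my own invention (no specific framework's API is being quoted):

```typescript
// A model emits a tool name plus arguments; your code looks up a handler
// and runs it. This is the entire 'agentic' core of most frameworks.
type ToolCall = { name: string; args: Record<string, unknown> }

const tools: Record<string, (args: Record<string, unknown>) => string> = {
  // Placeholder tool -- a real app would register DB queries or API calls
  // here, which is exactly where a hallucinated name or argument becomes
  // a runtime error instead of a compile-time one.
  get_weather: (args) => `Sunny in ${String(args.city)}`,
}

export function dispatch(call: ToolCall): string {
  const handler = tools[call.name]
  if (!handler) throw new Error(`Model requested unknown tool: ${call.name}`)
  return handler(call.args)
}
```

Everything the model gets wrong, from the tool name to the argument shape, surfaces at runtime rather than at compile time, which is the opposite of what TypeScript normally buys you.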
I recently saw a demo where an agent was supposed to fetch data from a REST API. It worked 80% of the time. The other 20%, it tried to parse the raw HTML as JSON and crashed. If I had written a simple fetch call in TypeScript, it would have worked 100% of the time.
Why introduce a layer of indirection that hallucinates?
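The deterministic version of that fetch is a few lines. This is a sketch under assumed names (the `Order` type and endpoint are placeholders); the point is that the content-type check makes the HTML-as-JSON failure mode impossible:

```typescript
// Plain typed fetch: validate status and content type BEFORE parsing,
// so an HTML error page is rejected instead of being fed to JSON.parse.
type Order = { id: string; total: number }

// Pure helper, split out so the guard logic is trivially testable.
export function parseJsonBody(contentType: string | null, body: string): Order[] {
  if (!contentType?.includes('application/json')) {
    throw new Error(`Expected JSON, got ${contentType ?? 'unknown content type'}`)
  }
  return JSON.parse(body) as Order[]
}

export async function fetchOrders(url: string): Promise<Order[]> {
  const res = await fetch(url)
  if (!res.ok) throw new Error(`HTTP ${res.status}`)
  return parseJsonBody(res.headers.get('content-type'), await res.text())
}
```

Same behavior on every call, failure modes you can enumerate, and no tokens billed.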
The Latency Trap
Users are impatient. We have spent the last decade optimizing for speed. We use Vercel Edge Functions, we cache aggressively with Redis or Cloudflare KV, and we use Server-Sent Events (SSE) to stream data.
Agents are slow. They require multiple steps of reasoning before they even execute a command. An agent might take 15 seconds to 'think' about a database query that I could write in Prisma in 50 milliseconds.
For a user-facing feature, that lag is fatal. If a user clicks a button and waits 10 seconds for a result, they assume the app is broken.
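When an LLM call genuinely is the feature, the mitigation is streaming: send tokens as they arrive instead of blocking on the full completion. A sketch, assuming the tokens come in as an async iterable (which is what the OpenAI SDK gives you with `stream: true`); the SSE framing helper is my own:

```typescript
// Wrap one text chunk in Server-Sent Events framing.
export function sseFrame(chunk: string): string {
  return `data: ${JSON.stringify(chunk)}\n\n`
}

// Turn any async stream of model tokens into a streaming HTTP response
// that a Next.js route handler can return directly.
export function toSseResponse(tokens: AsyncIterable<string>): Response {
  const encoder = new TextEncoder()
  const body = new ReadableStream({
    async start(controller) {
      for await (const token of tokens) {
        controller.enqueue(encoder.encode(sseFrame(token)))
      }
      controller.enqueue(encoder.encode('data: [DONE]\n\n'))
      controller.close()
    },
  })
  return new Response(body, { headers: { 'Content-Type': 'text/event-stream' } })
}
```

The total latency doesn't change, but the user sees the first words within a second, which is the difference between 'thinking' and 'broken.'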
What Actually Works: The 'Copilot' Pattern
Instead of autonomous agents, I advocate for what I call the 'Copilot Pattern.'
This isn't a new concept. It's about keeping the human in the loop and using the LLM as a tool for synthesis and generation, not execution.
I build this using a stack I trust: Next.js, Supabase, and OpenAI's API. No heavy agent frameworks like LangChain or AutoGPT unless absolutely necessary.
Real World Example: Automated Reporting
I had a client who needed a weekly summary of their sales data. They asked for an agent to 'analyze the data and email the team.'
The 'Agentic' approach would have the LLM query the database directly. This is risky. It might query the wrong table or leak PII.
My approach splits the work in two: a deterministic query fetches exactly the data we need, and the LLM's only job is turning that result into prose.
Here is the simplified logic:
```typescript
// app/api/report/route.ts
import { createClient } from '@supabase/supabase-js'
import OpenAI from 'openai'

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!)
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })

export async function POST() {
  // 1. Deterministic step: fetch data safely
  const { data: salesData, error } = await supabase
    .from('weekly_sales')
    .select('*')

  if (error) return Response.json({ error: error.message }, { status: 400 })

  // 2. Probabilistic step: generate text
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: 'You are a data analyst. Summarize the following sales data.' },
      { role: 'user', content: JSON.stringify(salesData) },
    ],
  })

  return Response.json({ summary: completion.choices[0].message.content })
}
```
This architecture gives us the best of both worlds. We get the intelligence and flexibility of an LLM, but we constrain it so it can't touch our database schema or delete rows.
When to Actually Use Agents
I am not saying agents are useless. They have a place, but that place is narrow.
I would consider an agent if:
* The environment is sandboxed: The agent can only affect a non-critical system (e.g., organizing tags in a Notion database).
* The cost of failure is low: If it messes up, nobody loses money or data.
* The task is highly variable: If the inputs are so messy that you can't write a schema for them, an agent might be the only way to parse them.
But for 99% of web development? Stick to code.
The Takeaway
The hype cycle will tell you to replace developers with agents. The reality is that agents make your codebase harder to debug, slower to run, and more expensive to host.
Focus on building features that are deterministic by default and probabilistic where it adds value. Use LLMs to write the email, not to decide who to send it to.
Your users will thank you, and your cloud bill on AWS will be a lot lower.