How I integrate AI into existing products without rewriting anything

By Ahmed "Riz" Ratul · 2026-03-24 07:48:47 · AI, Integration, RAG, LLMs

Most AI features don't need a ground-up rebuild. Here's how I bolt intelligence onto products that are already in production.

Every week I get a message from a founder asking about "adding AI" to their product. Usually they assume it means rebuilding half their stack. It doesn't.

The pattern that works

Most AI integrations follow the same shape: intercept data at a natural boundary, run it through an LLM or embedding pipeline, and return the result to the existing flow. No new databases. No new frameworks. Just a new layer.
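That shape can be sketched in a few lines. This is an illustrative pattern, not code from any of the projects below — the names (`withAiLayer`, `enrich`) are mine:

```typescript
// The "new layer" pattern: intercept input at a natural boundary, run it
// through an AI step, and hand the result to the existing, untouched flow.

type Handler<In, Out> = (input: In) => Promise<Out>;

// Wrap an existing handler with an enrichment step (an LLM call, an
// embedding lookup, a translation). The existing handler never changes.
function withAiLayer<In, Out>(
  existing: Handler<In, Out>,
  enrich: (input: In) => Promise<In>,
): Handler<In, Out> {
  return async (input: In) => existing(await enrich(input));
}
```

Everything that follows is a variation on this wrapper: the existing flow stays put, and the AI step slots in at one boundary.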

Here are three real examples from my client work.

1. Product search that actually understands intent

A WooCommerce store with 3,000+ SKUs. Customers typed "waterproof jacket for hiking" and got nothing because the product title said "Alpine Pro Shell - Men's Outerwear." Classic keyword mismatch.

The fix: Ingested all product descriptions into Supabase pgvector. When a customer searches, their query gets embedded and matched semantically against the product catalog. The existing WooCommerce frontend didn't change at all — I replaced the search endpoint. Time to ship: 2 weeks. Cost to run: ~$12/month on embedding API calls.
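The matching step looks roughly like this. In production the customer's query goes to an embedding API and pgvector does the distance search inside Postgres (ordering by vector distance); this sketch shows the same ranking logic in-process with tiny mock vectors, and the function names are illustrative:

```typescript
// Semantic search sketch: rank catalog items by cosine similarity
// between the query embedding and each product's stored embedding.

interface Product {
  title: string;
  embedding: number[]; // computed offline from the product description
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// This is what the replaced search endpoint returns to the unchanged
// WooCommerce frontend: the same catalog, ranked by meaning.
function semanticSearch(queryEmbedding: number[], catalog: Product[]): Product[] {
  return [...catalog].sort(
    (a, b) =>
      cosineSimilarity(queryEmbedding, b.embedding) -
      cosineSimilarity(queryEmbedding, a.embedding),
  );
}
```

The point is that "waterproof jacket for hiking" and "Alpine Pro Shell - Men's Outerwear" land near each other in embedding space even though they share no keywords.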

2. Automated compliance screening

A FinTech client was spending 4 hours per customer on manual AML screening — cross-referencing names against sanctions lists, PEP databases, and adverse media. Human reviewers were burning out.

The fix: Built an AI agent that screens against 1.7M+ sanctions records, generates a risk score, and flags only the cases that need human review. The existing compliance dashboard stayed the same — it just got a new "AI Score" column and an "Auto-cleared" filter. Time to ship: 6 weeks. Cost to run: Pennies per screening. Time saved: 3.5 hours per customer.
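The triage logic reduces to something like the sketch below. The actual matching against sanctions, PEP, and adverse-media records happens upstream; the weights, threshold, and field names here are illustrative assumptions, not the client's real scoring model:

```typescript
// Screening triage sketch: combine per-source match scores into one risk
// score, then decide whether a human reviewer needs to see the case.

interface ScreeningResult {
  sanctionsScore: number;    // 0..1 fuzzy-match score vs sanctions lists
  pepScore: number;          // 0..1 vs PEP databases
  adverseMediaScore: number; // 0..1 vs adverse media hits
}

function riskScore(r: ScreeningResult): number {
  // Sanctions hits weigh heaviest: one strong hit should dominate the
  // score rather than be averaged away by clean results elsewhere.
  return Math.max(
    r.sanctionsScore,
    0.7 * r.pepScore,
    0.5 * r.adverseMediaScore,
  );
}

// Cases above the threshold land in the reviewer's queue; everything
// else shows up under the dashboard's "Auto-cleared" filter.
function needsHumanReview(r: ScreeningResult, threshold = 0.6): boolean {
  return riskScore(r) >= threshold;
}
```

The design choice that saves the 3.5 hours is the asymmetry: the system never auto-blocks anyone, it only auto-clears the obvious negatives, so humans spend their time exclusively on ambiguous cases.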

3. Real-time chat translation

A platform connecting Thai trainers with international students. Language barrier was killing engagement. Users would sign up, try to message a trainer, get a Thai response, and leave.

The fix: Supabase Edge Function intercepts every chat message, detects the language, and translates via Claude Haiku through OpenRouter. Both original and translated text are stored. The chat UI shows the reader's preferred language. Sub-500ms latency. Time to ship: 1 week. Cost to run: ~$0.01/day for 10,000 messages.
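The data model is the interesting part: store both texts, decide at render time. The Edge Function that detects the language and calls the model is elided here, and the message shape and function names are my illustration, not Supabase's API:

```typescript
// Chat translation sketch: each stored message carries both the original
// and the translated text, so the UI picks per reader, per message.

interface ChatMessage {
  originalText: string;
  originalLang: string;   // detected language code, e.g. "th"
  translatedText: string; // produced by the translation model on write
  translatedLang: string; // e.g. "en"
}

// The chat UI calls this with the reader's preferred language, so the
// Thai trainer and the international student each read in their own
// language from the same stored row.
function displayText(msg: ChatMessage, readerLang: string): string {
  return readerLang === msg.originalLang ? msg.originalText : msg.translatedText;
}
```

Translating once at write time rather than on every read is what keeps the cost near a cent a day: each message is translated exactly once no matter how many times it's rendered.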

What these have in common

None of them required rewriting the product. All of them used the existing database, the existing frontend, the existing auth. The AI layer sits alongside the product, not underneath it.

When AI integration gets expensive

It gets expensive when you treat it as a product rewrite. New database, new API, new frontend — suddenly you're rebuilding the product and the AI feature is 10% of the work.

The other trap: building a "ChatGPT wrapper" that doesn't actually know anything about your product. A generic chatbot with a system prompt is not AI integration. It's a toy.

The right question to ask

Don't ask "should we add AI?" Ask: "Where in our product do humans spend time on tasks that follow a pattern?" That's where AI earns its place.


If you have a product and you're wondering where AI actually makes sense, I do a free 30-minute Technical AI Audit. No pitch deck. We look at your product together and I tell you where AI would (and wouldn't) add value.

Book a Technical AI Audit