I’ve spent the last decade treating the AWS Console like my second home. I know the difference between an m5.large and a c5.xlarge by heart, and I can architect a VPC in my sleep. But recently, I made a controversial decision for my core infrastructure at Thea Tech Solutions: I moved from AWS to Cloudflare.
This wasn't a move born out of hype. It was a calculated decision driven by one specific bottleneck: latency. When you are building high-performance applications using Next.js and server-side rendering, the milliseconds it takes for your origin server to wake up and respond matter more than you think.
In this post, I’m going to break down exactly why I made the switch, the architecture I’m running now, and the specific scenarios where Cloudflare beats AWS—and where it doesn’t.
The Problem with the "Traditional" AWS Stack
For years, my standard architecture for a SaaS product looked like this:
* Frontend: Next.js hosted on AWS Amplify or EC2.
* Database: Amazon RDS (PostgreSQL).
* Storage: S3 buckets for user assets.
* CDN: CloudFront.
This works. It’s stable. It’s the industry standard. But it creates a specific physics problem. Even with CloudFront caching your static assets at the edge, your dynamic requests—API routes, server-side rendering, authentication checks—still have to travel back to us-east-1 (or whichever region you picked).
If your user is in Singapore and your database is in Virginia, that’s a round trip of roughly 250ms just to get a "Hello World". You can cache all you want, but the first byte is always slow.
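A back-of-envelope check makes the physics concrete. Light in fiber travels at roughly two-thirds the speed of light in a vacuum (about 200,000 km/s), and Singapore to Virginia is on the order of 15,000 km one way; the numbers below are illustrative, not measured:

```typescript
// Illustrative latency floor: speed of light in fiber over straight-line distance.
// Real routes are longer and add switching delay, so observed RTTs are higher.
const FIBER_SPEED_KM_PER_MS = 200; // ~2/3 of c, expressed per millisecond

function minRoundTripMs(distanceKm: number): number {
  return (2 * distanceKm) / FIBER_SPEED_KM_PER_MS;
}

// Singapore <-> Virginia, roughly 15,000 km one way (assumed figure)
console.log(minRoundTripMs(15_000)); // 150ms floor before any routing overhead
```

Even the theoretical floor is 150ms; real-world routing pushes it toward that 250ms figure, and no amount of caching removes it for dynamic requests.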
Why I Moved from AWS to Cloudflare
The primary driver for my migration was the rise of the Edge. When I look at the tools I use daily—Supabase for backend-as-a-service and Next.js for the frontend—I realized I was spending too much time managing servers and not enough time optimizing logic.
Moving to Cloudflare allowed me to shift from a "region-based" architecture to a "global" architecture.
1. Real Latency Wins
I ran a test on a React Native app I manage for a client. We had an API endpoint that fetches user profile data.
* AWS (API Gateway + Lambda @ us-east-1): Average response time from Bangkok: 340ms.
* Cloudflare Workers: Average response time from Bangkok: 45ms.
The difference isn't just speed; it's user experience. 300ms is perceptible lag. 45ms feels instant.
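If you want to reproduce this kind of comparison yourself, a small timing helper is enough. The endpoint URL in the usage comment is a placeholder, not a real service:

```typescript
// Times a single async operation in milliseconds using the global performance clock.
async function timeMs<T>(fn: () => Promise<T>): Promise<{ result: T; ms: number }> {
  const start = performance.now();
  const result = await fn();
  return { result, ms: performance.now() - start };
}

// Example usage (hypothetical endpoint):
// const { ms } = await timeMs(() => fetch("https://api.example.com/profile"));
// console.log(`request took ${ms.toFixed(1)}ms`);
```

Run it a handful of times and take the median; single samples are noisy.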
2. The Developer Experience (DX) Factor
I’m a sucker for good DX. AWS is powerful, but it is complex. Configuring IAM roles, setting up VPC peering, and managing container definitions takes time.
Cloudflare’s ecosystem is different. Writing a Cloudflare Worker feels like writing a standard JavaScript function. Deploying is instantaneous. There is no "cold start" in the traditional sense because the edge is already awake.
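To illustrate the point, a complete Worker is a single exported object with a fetch handler; this is a minimal sketch, not production code, and `wrangler deploy` pushes it to every Cloudflare PoP in seconds:

```typescript
// A complete Cloudflare Worker: one object with a fetch handler.
// No server, no container, no load balancer to configure.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    return Response.json({ hello: "world", path: url.pathname });
  },
};

export default worker;
```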
The New Architecture: Next.js on the Edge
So, what does the stack look like now? If you are running a modern web app, you don't need a monolithic EC2 instance anymore.
Here is how I architect a typical project now:
* Frontend: Next.js using Vercel or Cloudflare Pages.
* API/Logic: Cloudflare Workers (running Hono, with Node.js compatibility mode where needed).
* Database: serverless Postgres via Supabase or Neon, or Cloudflare D1 (SQLite at the edge).
* Storage: Cloudflare R2 (S3 compatible, but zero egress fees).
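The bindings that wire this stack together live in `wrangler.toml`; the names and IDs below are placeholders you would replace with your own:

```toml
# wrangler.toml — example bindings (names and IDs are placeholders)
name = "my-edge-app"
main = "src/index.ts"
compatibility_date = "2024-01-01"

[[d1_databases]]
binding = "DB"             # available as env.DB in the Worker
database_name = "links-db"
database_id = "<your-d1-database-id>"

[[r2_buckets]]
binding = "ASSETS"         # available as env.ASSETS
bucket_name = "user-assets"
```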
Handling D1 and R2
One of the biggest fears people have about this move is leaving AWS RDS behind. Postgres is amazing. However, for many read-heavy applications, Cloudflare D1 is a game changer. It is SQLite distributed at the edge.
For example, I built a URL shortener recently. In AWS, this would require an EC2 instance, a database, and an Elastic Load Balancer. On Cloudflare, it is a single Worker with a D1 binding.
Example Worker code (`src/index.ts`):

```typescript
// src/index.ts
export interface Env {
  DB: D1Database;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { pathname } = new URL(request.url);

    if (pathname === "/api/shorten") {
      // Logic to shorten the URL would go here
      return Response.json({ success: true });
    }

    // Direct database access from the edge
    const { results } = await env.DB
      .prepare("SELECT * FROM links WHERE code = ?")
      .bind(pathname.slice(1))
      .all();

    if (results.length === 0) {
      return new Response("Not found", { status: 404 });
    }

    return Response.redirect(results[0].target_url as string, 302);
  },
};
```
This code runs physically close to the user. There is no routing request to Ohio or Oregon. It executes in Singapore for a Singaporean user.
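The Worker above assumes a `links` table; a minimal D1 schema for it might look like this (table and column names match the example, adapt as needed), applied with `wrangler d1 execute links-db --file=schema.sql`:

```sql
-- schema.sql: minimal table for the URL shortener example
CREATE TABLE IF NOT EXISTS links (
  code TEXT PRIMARY KEY,    -- the short code, e.g. "abc123"
  target_url TEXT NOT NULL  -- absolute URL to redirect to
);
```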
The Cost Breakdown: AWS vs. Cloudflare
Let’s talk money. Founders and CTOs care about the bottom line. AWS is expensive if you don't manage it tightly.
Scenario: a small SaaS app with 100k daily requests and 500GB of data transfer.
AWS Costs (Estimate):
* EC2 / t3.medium: $30/month.
* Data Transfer Out: ~$45/month (500GB at roughly $0.09/GB; AWS egress fees add up fast).
* Load Balancer: $18/month.
* Total: ~$95/month, before load balancer LCU charges and storage.
Cloudflare Costs (Estimate):
* Workers Paid Tier: $5/month (includes 10 million requests).
* R2 Storage: $10/month.
* R2 Egress: $0.
* Total: ~$20 - $30/month.
The savings here are significant. The lack of egress fees on Cloudflare R2 is the killer feature. If you are serving media or large assets, AWS S3 egress fees will eat your margin.
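Summing the line items is a useful sanity check; the unit prices below are assumptions based on published list pricing at the time of writing and will drift:

```typescript
// Back-of-envelope monthly cost check (assumed list prices, USD)
const aws = {
  ec2T3Medium: 30,    // on-demand t3.medium, rough monthly figure
  egress: 500 * 0.09, // 500GB out at ~$0.09/GB
  loadBalancer: 18,   // ALB base charge, before LCUs
};

const cloudflare = {
  workersPaid: 5,     // paid tier, includes 10M requests
  r2Storage: 10,
  r2Egress: 0,        // R2 charges nothing for egress
};

const awsTotal = Math.round(aws.ec2T3Medium + aws.egress + aws.loadBalancer);
const cfTotal = cloudflare.workersPaid + cloudflare.r2Storage + cloudflare.r2Egress;

console.log({ awsTotal, cfTotal }); // { awsTotal: 93, cfTotal: 15 }
```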
When You Should Stay on AWS
I don’t want to sound like a blind fanboy. AWS is still the king for a reason. If you are a founder or CTO deciding on this, here is when you should not move to Cloudflare:
* You run heavy ML training or GPU-bound workloads: Workers have strict CPU-time limits and are not built for training jobs.
* You need long-running processes or background jobs measured in minutes or hours, not milliseconds.
* Your backend leans on AWS-specific managed services (SQS, Kinesis, SageMaker) with no direct edge equivalent.
The Migration Strategy
If you decide to make the jump, do not rip the band-aid off. Do it incrementally:
* Move your DNS behind Cloudflare first; you get the CDN and WAF without touching your origin.
* Shift static assets and media to Pages and R2 (R2's S3-compatible API makes this largely a configuration change).
* Peel API routes off to Workers one at a time, proxying everything else back to AWS until nothing is left.
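One way to migrate incrementally is a strangler-style Worker that serves migrated routes at the edge and proxies everything else to the existing AWS origin. This is a sketch: the origin hostname and route list are placeholders.

```typescript
// Strangler-style migration Worker: handle migrated routes at the edge,
// proxy everything else to the existing AWS origin unchanged.
const ORIGIN = "https://legacy.example.com"; // placeholder: your AWS origin

const EDGE_ROUTES = new Set(["/api/health", "/api/shorten"]); // migrated so far

function isEdgeRoute(pathname: string): boolean {
  return EDGE_ROUTES.has(pathname);
}

const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (isEdgeRoute(url.pathname)) {
      return Response.json({ servedFrom: "edge", path: url.pathname });
    }
    // Not migrated yet: forward to the origin, preserving path and query.
    // Body is buffered here for simplicity; streaming is possible but fiddlier.
    const init: RequestInit = {
      method: request.method,
      headers: request.headers,
      body:
        request.method === "GET" || request.method === "HEAD"
          ? undefined
          : await request.arrayBuffer(),
    };
    return fetch(ORIGIN + url.pathname + url.search, init);
  },
};

export default worker;
```

As routes move over, you add them to `EDGE_ROUTES`; once the set covers everything, the AWS origin can be decommissioned.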
Conclusion
I moved from AWS to Cloudflare because I wanted to build faster, cheaper, and closer to my users. The shift from "server-centric" to "network-centric" computing is undeniable.
However, this isn't a one-size-fits-all solution. If you are building a complex ML pipeline or a heavy enterprise backend, AWS remains the superior choice. But if you are a startup running a standard Next.js or React Native stack with a need for speed and low overhead, the edge is where you should be.
The infrastructure landscape is changing. The monolithic cloud is being eaten by the edge. Don't get left behind waiting for your EC2 instance to boot.
Takeaway: Stop paying for latency you don't need. If your stack is modern JavaScript, the edge is ready for you.
If you are struggling to decide on your architecture or want to optimize your current cloud spend, you need a second pair of eyes.
Book a free AI audit at theatechsolutions.com/ai-audit