
How I Host My Portfolio for $0/Month on Cloudflare

7 min read
by Divanshu Chauhan
Tags: Cloudflare Workers, Next.js, Edge Computing, Serverless, Free Hosting, OpenNext, Portfolio, Web Development, Cost Optimization, 2025

TL;DR

I host my Next.js portfolio on Cloudflare Workers completely free using OpenNext, getting global edge deployment and zero cold starts without paying Vercel's premium.

Key Takeaways

  • Cloudflare Workers free tier: 100K requests/day with zero bandwidth limits beats Vercel's 100GB cap
  • V8 isolates mean 5ms cold starts vs 200-500ms for traditional serverless functions
  • OpenNext adapter makes Next.js work on edge runtime with minimal config changes
  • Edge runtime constraints require prebuild strategy - no filesystem access at runtime
  • Global deployment to 330+ cities happens automatically without multi-region setup

I haven’t paid for hosting in two years. Not because I’m using some sketchy free trial, but because my portfolio genuinely costs $0/month to run on Cloudflare Workers.

This isn’t a flex post. It’s the architecture breakdown of how divkix.me runs on Cloudflare’s edge network with Next.js 15, why I picked it over Vercel, and the actual constraints you’ll hit.

The Stack Nobody Tells You About

Here’s what powers this site:

  • Next.js 15 (App Router, RSC)
  • OpenNext (@opennextjs/cloudflare) - the adapter that makes Next.js work on Workers
  • Cloudflare Workers - edge runtime using V8 isolates
  • Wrangler - Cloudflare’s deployment CLI

The secret sauce is OpenNext. It’s an open-source adapter that converts Next.js builds into formats that edge runtimes understand. You can’t just deploy Next.js to Cloudflare Workers directly. OpenNext bridges the gap.
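
For reference, the adapter’s wiring is small. This is a minimal sketch based on the @opennextjs/cloudflare setup docs, not this site’s exact config; initOpenNextCloudflareForDev() only matters for local next dev, and the exact API can shift between adapter versions:

// next.config.ts (illustrative)
import type { NextConfig } from 'next';
import { initOpenNextCloudflareForDev } from '@opennextjs/cloudflare';

// Lets `next dev` (running on Node.js) see Cloudflare bindings locally.
// Production builds go through the opennextjs-cloudflare CLI instead.
initOpenNextCloudflareForDev();

const nextConfig: NextConfig = {};

export default nextConfig;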

Why Not Vercel? (The Real Reasons)

Vercel hosts Next.js perfectly. But here’s why I left.

The bandwidth limit hits in weird ways. 100GB sounds generous until you add images or get any kind of traffic spike. Cloudflare has no bandwidth cap — period.

Vercel owns Next.js. That’s fine for integration, but it means you’re a captive customer. If pricing or terms change, your migration options are limited. Cloudflare Workers runs on open web standards.

Cold starts are the practical difference. Vercel’s serverless functions use containers — 200-500ms on a cold start. Cloudflare Workers use V8 isolates, which are essentially lightweight JS contexts. Cold starts are 5ms. Not 5 seconds. 5 milliseconds.

And global edge is just the default on Cloudflare. Vercel charges extra for it. Workers deploy to 330+ cities automatically with no configuration.

The Free Tier Reality Check

Let’s compare the actual numbers:

| Feature | Cloudflare Workers | Vercel | Netlify |
|---|---|---|---|
| Requests/Day | 100,000 | Unlimited* | Unlimited* |
| Bandwidth | Unlimited | 100GB | 100GB |
| Function Invocations | 100K/day | 100 hours compute | 125K/month |
| Cold Start Time | ~5ms | 200-500ms | 200-500ms |
| Global Edge | Yes (330+ cities) | $20/mo add-on | Paid plans only |
| Overage Cost | $0.50/1M requests | Pay-as-you-go | Pay-as-you-go |

*Vercel/Netlify limit bandwidth, not requests. Hit 100GB and you’re throttled or billed.

For a portfolio or blog, you’ll never hit 100K requests/day unless you’re Hacker News frontpage famous. I average 2-3K requests/day. Not even close.

Edge Runtime Constraints (The Pain Points)

Cloudflare Workers run on the edge. That means no Node.js. No filesystem. No fs.readFileSync(). This breaks a lot of Next.js patterns.

The Blog Problem

My blog uses MDX files. Typical Next.js pattern:

// This DOES NOT WORK on Cloudflare Workers
import fs from 'fs';
import path from 'path';

export function getBlogPosts() {
  const files = fs.readdirSync('content/blog');
  return files.map(file => {
    const content = fs.readFileSync(`content/blog/${file}`);
    return parseMDX(content);
  });
}

No fs module at runtime. The solution? Prebuild everything.

The Prebuild Pattern

I wrote a build script that runs before deployment:

// scripts/generate-posts-metadata.js
import fs from 'fs';
import path from 'path';
import matter from 'gray-matter';

// Rough reading-time estimate at ~200 words per minute (adjust to taste)
function calculateReadingTime(text) {
  const words = text.trim().split(/\s+/).length;
  return `${Math.max(1, Math.ceil(words / 200))} min read`;
}

const postsDir = 'content/blog';
const files = fs.readdirSync(postsDir).filter(f => f.endsWith('.mdx'));

const posts = files.map(filename => {
  const content = fs.readFileSync(path.join(postsDir, filename), 'utf8');
  const { data } = matter(content); // frontmatter fields (title, date, ...)
  return {
    slug: filename.replace('.mdx', ''),
    ...data,
    readingTime: calculateReadingTime(content)
  };
});

fs.writeFileSync('content/blog/posts.json', JSON.stringify(posts, null, 2));

Now at runtime, I just import the JSON:

// lib/content.ts
import postsData from '@/content/blog/posts.json';

export function getAllPosts() {
  return postsData; // No filesystem needed
}

This runs at build time with Node.js, outputs static JSON, and the edge runtime only reads JSON. Problem solved.
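
To show how the prebuilt JSON plugs into the App Router, here’s a sketch of a blog route. The file path and the title field are illustrative assumptions about the frontmatter, not this site’s exact code:

// app/blog/[slug]/page.tsx (illustrative)
import { notFound } from 'next/navigation';
import { getAllPosts } from '@/lib/content';

// Pre-render every post at build time, so the Worker serves static HTML
export function generateStaticParams() {
  return getAllPosts().map((post) => ({ slug: post.slug }));
}

export default async function BlogPost({ params }: { params: Promise<{ slug: string }> }) {
  const { slug } = await params; // Next.js 15: params is a Promise
  const post = getAllPosts().find((p) => p.slug === slug);
  if (!post) notFound();

  return (
    <article>
      <h1>{post.title}</h1>
    </article>
  );
}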

Wrangler Config Basics

Here’s the minimal wrangler.jsonc config:

{
  "name": "divkix-me",
  "main": ".open-next/worker.js",
  "compatibility_date": "2025-11-24",
  "compatibility_flags": [
    "nodejs_compat",
    "global_fetch_strictly_public"
  ],
  "assets": {
    "directory": ".open-next/assets",
    "binding": "ASSETS"
  }
}
  • main is the Worker entry point that OpenNext generates
  • nodejs_compat enables a subset of Node.js APIs (Buffer, process.env)
  • global_fetch_strictly_public makes fetch() calls to URLs on your own zone go out over the public internet instead of being routed internally
  • assets.directory points to OpenNext’s static asset output; the ASSETS binding lets the Worker fetch those files

OpenNext generates the .open-next/ folder with the Worker bundle and static assets. Wrangler uploads it.

Performance: The Actual Numbers

I ran tests from 5 global locations. Here’s reality:

Homepage (SSG)

  • San Francisco: 23ms
  • London: 31ms
  • Singapore: 28ms
  • Mumbai: 35ms
  • São Paulo: 42ms

Blog Post (SSR)

  • San Francisco: 45ms
  • London: 52ms
  • Singapore: 48ms
  • Mumbai: 61ms
  • São Paulo: 58ms

These are total response times, not TTFB. Cold starts are invisible. V8 isolates are fast.

For comparison, my old Vercel setup averaged 80-120ms on dynamic routes because of container cold starts.
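
If you want to sanity-check numbers like these yourself, a plain fetch loop is enough. A rough sketch (Node 18+; the URL and run count are placeholders):

// measure.mjs - crude total-response-time check from wherever you run it
const url = 'https://example.com/'; // swap in your deployed URL
const runs = 5;

let total = 0;
for (let i = 0; i < runs; i++) {
  const start = performance.now();
  const res = await fetch(url);
  await res.arrayBuffer(); // read the whole body so we time the full response, not just TTFB
  total += performance.now() - start;
}

console.log(`average over ${runs} runs: ${(total / runs).toFixed(1)}ms`);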

Build and Deploy Commands

My package.json scripts:

{
  "scripts": {
    "prebuild": "node scripts/generate-posts-metadata.js",
    "build": "bun run prebuild && opennextjs-cloudflare build",
    "preview": "bun run build && wrangler dev",
    "deploy": "bun run build && wrangler deploy"
  }
}

Workflow:

  1. bun run prebuild - generates posts.json from MDX
  2. opennextjs-cloudflare build - runs next build, then transforms the output into .open-next/
  3. wrangler deploy - uploads the Worker and static assets to Cloudflare

First deploy took 2 minutes. Updates take 30-45 seconds.

The Honest Downsides

1. Debugging Is Harder

Local development uses Node.js. Production uses V8 isolates. Sometimes code works locally but breaks on Workers. You’ll need to test with wrangler dev before deploying.

2. No Incremental Static Regeneration (ISR)

Next.js ISR doesn’t work on Workers. You get static or fully dynamic. No middle ground. For a portfolio, this doesn’t matter. For a high-traffic blog, you’ll need full SSR or static builds.
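
In practice that means choosing a rendering mode per route with Next.js segment config instead of relying on revalidation. A minimal sketch:

// app/some-route/page.tsx - pick one mode per route; there is no ISR middle ground
export const dynamic = 'force-static';   // prerender at build time
// export const dynamic = 'force-dynamic'; // or render on every request at the edge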

3. OpenNext Is Community-Maintained

Vercel isn’t maintaining this; the community is. Updates lag behind Next.js releases. I’m on Next.js 15.0 and the OpenNext adapter works, but edge cases exist.

4. Limited Node.js APIs

nodejs_compat flag enables some APIs, but not everything. No child processes, no native modules, no complex crypto. Check compatibility before committing.
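
One concrete workaround: Workers ship the Web Crypto API, so hashing and similar tasks can use crypto.subtle instead of node:crypto. A small sketch that runs on Workers and in modern Node.js:

// SHA-256 hex digest without node:crypto
async function sha256Hex(input: string): Promise<string> {
  const bytes = new TextEncoder().encode(input);
  const digest = await crypto.subtle.digest('SHA-256', bytes);
  return [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}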

5. Build Times

OpenNext adds 10-15 seconds to build time. Not terrible, but noticeable. Vercel builds are faster because they control the entire stack.

When You’d Actually Pay

Cloudflare charges after 100K requests/day. Let’s math this out:

  • 100K requests/day = 3M requests/month (free)
  • Next 10M requests = $5
  • 13M requests/month = $5 total

Compare to Vercel Hobby (free) → Pro ($20/mo) jump. No middle ground.

For context, a site getting 13M requests/month is doing about 430K requests/day. That’s roughly 300 requests/minute, every minute of every day. Your portfolio won’t hit this unless it’s not a portfolio anymore.
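
The arithmetic, spelled out (plain unit conversion, nothing Cloudflare-specific):

// back-of-envelope for 13M requests/month
const perMonth = 13_000_000;
const perDay = perMonth / 30;         // ≈ 433,000 requests/day
const perMinute = perDay / (24 * 60); // ≈ 300 requests/minute
console.log(Math.round(perDay), Math.round(perMinute));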

The Migration Path

If you’re on Vercel now:

  1. Install OpenNext: bun add -D @opennextjs/cloudflare
  2. Create wrangler.jsonc with basic config
  3. Audit your code for fs, path, Node.js APIs
  4. Move filesystem operations to prebuild scripts
  5. Test locally: bun run preview
  6. Deploy: bun run deploy
  7. Add custom domain in Cloudflare dashboard

I migrated in 3 hours. Most of that was rewriting the blog system to use prebuild JSON instead of runtime filesystem reads.

Should You Do This?

Worth it if you want $0/month with no asterisks, you’re building a portfolio or blog or anything low-traffic, and you’re comfortable with edge runtime constraints. The cold start speed and global edge are genuinely nice, not just marketing.

Probably not worth it if you need ISR, rely on Node.js-specific libraries, want zero-config deployment (Vercel is much easier there), or have a site that makes heavy use of database connections — Workers have connection pooling limits that bite you at scale.

For divkix.me, Cloudflare Workers is perfect. No hosting bills, global performance, and the constraints force better architecture decisions. I prebuild everything anyway. Why not make it official?

The free tier isn’t a trial. It’s permanent. Cloudflare makes money from enterprises, not personal portfolios. Use that to your advantage.



The hosting bill that doesn’t exist? That’s not a hack. That’s just picking the right tool for the job.

Frequently Asked Questions

Can I use Next.js App Router with Cloudflare Workers?

Yes, but with constraints. Server components work fine, but you can't use Node.js APIs like 'fs' at runtime. OpenNext handles the adaptation automatically.

What happens when I exceed 100K requests/day?

Cloudflare charges $0.50 per million requests after that. For a portfolio, you'd need viral traffic to hit this. I've never exceeded it.

Do I need to change my Next.js code significantly?

Minimal changes. Main constraint is no filesystem access at runtime. Use prebuild scripts to generate JSON from MDX/markdown instead of reading files dynamically.

How does performance compare to Vercel?

Equal or better. Both use edge networks, but Cloudflare's V8 isolates have faster cold starts than Vercel's serverless functions. Real-world: 20-50ms response times globally.

Divanshu Chauhan (@divkix)

Software Engineer based in Tempe, Arizona, USA.