
The Complete Developer's Guide to Cloudflare Workers in 2025: Features, Patterns, Limits, and Real-World Use Cases

An in-depth look at Cloudflare Workers for developers in 2025

πŸ‘€ AdTools.org Research Team πŸ“… March 05, 2026 ⏱️ 29 min read

Introduction

Cloudflare Workers has quietly become one of the most consequential platforms in modern web development. What started in 2017 as a way to run JavaScript at the edge β€” intercepting and modifying HTTP requests as they passed through Cloudflare's network β€” has evolved into a full-stack compute platform that now handles millions of production workloads across more than 300 data centers worldwide.

But 2025 has been an inflection-point year. Cloudflare shipped containers, dramatically improved Node.js compatibility, launched Workflows and Pipelines for async processing, introduced Workers VPC for connecting to private infrastructure, and even rebuilt Next.js to run natively on the platform. The ecosystem has matured from "clever edge scripts" to something that legitimately competes with AWS Lambda, Vercel, and traditional cloud compute for a wide range of production workloads.

This guide is for developers who are either evaluating Workers for the first time or who tried it years ago and bounced off its limitations. The platform in 2025 is materially different from the platform in 2022 β€” higher CPU limits, broader Node.js API support, a richer storage ecosystem, and a pricing model that continues to be aggressively developer-friendly. But it's not without tradeoffs. The V8 isolate model that gives Workers near-zero cold starts also imposes real constraints. The free tier that gets developers hooked has limits that matter at scale. And the serverless paradigm itself breaks certain categories of code in ways that trip up even experienced engineers.

What follows is a comprehensive, practitioner-oriented guide: what Workers actually is under the hood, what you can build with it today, where the real limits bite, how the pricing works in practice, what patterns experienced teams are using in production, and when you should reach for something else entirely. Whether you're deploying a side project on the free tier or migrating production infrastructure from Kubernetes, this is the guide I wish existed when I started building on the platform.

Overview

What Cloudflare Workers Actually Is (And Why It's Different)

At its core, Cloudflare Workers runs your code on V8 isolates β€” the same JavaScript engine that powers Chrome β€” rather than in containers or virtual machines. This is the fundamental architectural decision that defines everything about the platform: its strengths, its limitations, and its economics.

Traditional serverless platforms like AWS Lambda spin up a container for each function invocation. That container has a full operating system, a runtime, your dependencies β€” the works. It's flexible, but it's heavy. Cold starts on Lambda can range from hundreds of milliseconds to several seconds depending on runtime and package size. Cloudflare's V8 isolates, by contrast, start in under 5 milliseconds[7]. There's no operating system to boot, no container to provision. Your code runs in a lightweight sandbox that shares the V8 engine with other tenants on the same machine.

This is why Workers feels fast in a way that's hard to replicate on other platforms. Your code runs on whichever Cloudflare data center is closest to the user making the request β€” there are over 300 of them globally β€” and it starts almost instantly. For latency-sensitive workloads like API gateways, authentication checks, A/B testing, and real-time personalization, this architecture is genuinely superior.

Bhanu Teja P @pbteja1998 Sat, 26 Feb 2022 03:29:34 GMT

Cloudflare Workers is amazing!

You can use it as a website.
You can use it as a server.
You can use it as a reverse proxy.
You can do pretty much anything with it.

I regret not knowing about it sooner.

@remix_run makes it super accessible to use it to build excellent websites.

View on X β†’

That enthusiasm captures something real. Workers' versatility β€” website, server, reverse proxy, API β€” comes from the fact that it sits at the network layer. Every HTTP request that hits a Cloudflare-proxied domain can be intercepted, transformed, routed, or handled entirely by a Worker. This is a fundamentally different position in the stack than a traditional backend server.

The 2025 Platform: What's Changed

If you evaluated Workers two or three years ago, the platform has changed substantially. Cloudflare's Developer Week 2025 was arguably their most significant release cycle yet, shipping capabilities that address many of the historical complaints about the platform[1].

Node.js Compatibility has been one of the biggest friction points for adoption. Workers doesn't run Node.js β€” it runs V8 β€” which historically meant that npm packages relying on Node.js built-in modules like fs, net, crypto, or child_process simply wouldn't work. Cloudflare has spent the past year systematically implementing Node.js APIs within the Workers runtime[2]. As of 2025, compatibility has reached the point where many mainstream npm packages work without modification. The nodejs_compat compatibility flag now covers crypto, buffer, stream, events, util, path, string_decoder, url, querystring, assert, and partial implementations of net and tls. This is a dramatic improvement, though gaps remain β€” anything touching the filesystem or spawning child processes still won't work, because those concepts don't exist in the isolate model.
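Enabling the compatibility layer is a one-line change in your Wrangler configuration. This is a minimal sketch assuming the current wrangler.toml format (project name and entry point are placeholders):

```toml
# wrangler.toml — enable the Node.js compatibility layer
name = "my-worker"
main = "src/index.js"
compatibility_date = "2025-03-01"
compatibility_flags = ["nodejs_compat"]
```

With the flag set, imports like `node:crypto` or `node:buffer` resolve inside the Workers runtime; anything filesystem- or process-related will still fail at runtime.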

rita kozlov πŸ€ @ritakozlov Tue, 10 Feb 2026 19:29:12 GMT

a team at @cloudflare just moved an internal nextjs app that used to run in k8s to cloudflare workers + workers vpc for the bits that need to connect to core services

this happened in a few hours! few big takeaways:

1. this is the year of migration & rewrites. it's happening!

2. if you don't know where to start with cloudflare workers because you still have things running in cloud / on prem, workers vpc is a good place to get started!

3. if you tried workers a long time ago, now is a good time to try it again. it has gotten so much richer with support for node js, higher limits, ability to connect to internal services, new products like queues, workflows, pipelines, etc

View on X β†’

Rita Kozlov's post captures the current moment well. Teams that bounced off Workers years ago are finding that the platform has grown up. Workers VPC β€” which lets Workers connect to services running in traditional cloud VPCs via Cloudflare's network β€” is particularly significant because it means you don't have to migrate everything at once. You can start running edge logic on Workers while your databases and legacy services remain in AWS or GCP.

Containers represent perhaps the most philosophically interesting addition. Cloudflare has always been the "isolates, not containers" company. But they've recognized that some workloads genuinely need a full Linux environment β€” running Chromium for browser rendering, executing arbitrary Docker images, running AI inference with custom models. Rather than pretending isolates can do everything, they've added containers as a first-class primitive that runs on their edge network[1].

Dane Knecht 🦭 @dok2001 Tue, 24 Jun 2025 18:21:14 GMT

Big day for @Cloudflare as we launch our newest compute primitive, Containers! A bit of history: In 2020, we acquired S2 Remote Browser tech and faced the challenge of migrating it from AWS to our edge. To run a Chromium-based browser securely, we split our team: half focused on the Remote Browser app, half built a robust, independent container platform to support it.

We knew some workloads needed to be close to users but didn’t fit our Workers isolate model. This platform became a game-changer, empowering dozens of internal teams to build features like Workers CI/CD, Browser Rendering, Key Transparency, Workers AI, and more.

But we kept asking: Is this the right primitive for our users? Workers remains the go-to for globally distributed, effortlessly scalable compute at a great price. Initially, many use cases we heard were for single-node webservers that didn’t need region earth. Then we got excited as users started asking for latency-sensitive, real-time applications and the ability to run agents close to the users they serve.

Cloudflare Containers are here to deliver for those high-performance, user-proximal workloads. Excited to see what you build with it!

View on X β†’

Dane Knecht's history here is illuminating. Cloudflare built their container platform internally years ago to support their own products, then spent years asking whether it was the right primitive to expose externally. The answer they landed on is nuanced: Workers isolates remain the default for globally distributed, auto-scaling compute. Containers are for workloads that need full OS access, long-running processes, or specific runtime environments β€” and critically, they still run close to users on Cloudflare's edge network.

Workflows and Pipelines address the async processing gap that has long been a Workers limitation. The 30-second CPU time limit on the paid plan (10ms on free)[7] means that long-running tasks β€” sending emails, processing images, syncing data, generating reports β€” can't run in a single Worker invocation.

PropTechUSA.ai @PropTechUSAAI Wed, 04 Mar 2026 14:17:12 GMT

Cloudflare Workers have a 30 second CPU limit.

That's plenty for most requests. But not for sending emails, processing images, syncing data or generating reports.

Here's how we handle async work across 28 production workers without blocking a single request 🧡

View on X β†’

Workflows let you define multi-step processes where each step runs as a separate Worker invocation, with automatic retries and state persistence between steps. Pipelines provide a managed event streaming primitive. Together, they give Workers something approaching the async processing capabilities that Lambda has had with SQS and Step Functions.
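The step-based shape is easiest to see in code. This is a hedged sketch: in a deployed Worker this logic would live in a class extending `WorkflowEntrypoint` from `cloudflare:workers`, and the step names and helper logic here are illustrative. Each `step.do()` runs as its own invocation, its return value is persisted, and a retry resumes after the last completed step.

```javascript
// Orchestration shape of a Workflow's run() body (illustrative helpers).
async function runReportWorkflow(event, step) {
  const rows = await step.do('fetch-rows', async () => {
    // e.g. query D1 or an external API for the user's data
    return [{ id: 1 }, { id: 2 }];
  });
  const report = await step.do('render-report', async () => {
    return `report with ${rows.length} rows`;
  });
  await step.do('send-email', async () => {
    // e.g. call an email provider's HTTP API with `report`
  });
  return report;
}
export { runReportWorkflow };
```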

The Storage Ecosystem

One of the most common misconceptions about Workers is that it's "just compute." In reality, Cloudflare has built a comprehensive storage ecosystem that's tightly integrated with the runtime:

Jilles Soeters @Jilles Thu, 20 Nov 2025 00:37:16 GMT

Don’t overthink it.
* host a website on Cloudflare Workers
* deploy a web app on Cloudflare Workers
* build a serverless API on Cloudflare Workers
* set up D1 for your data, or use Hyperdrive
* Automate back-ups with object lifecycles
* Monitor apps using Worker metrics and analytics
* Build a URL shortener using Workers and KV
* Configure a custom domain with 1 click

You can also learn Full Stack Cloudflare by watching videos, but build something too.

View on X β†’

Jilles' advice to "don't overthink it" resonates because the breadth of the platform can be paralyzing. But the core pattern is straightforward: Workers for compute, D1 or Hyperdrive for relational data, KV for configuration, R2 for files, Durable Objects for coordination, Queues for async work.
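The "URL shortener using Workers and KV" item from the list above makes a good illustration of that core pattern. This is a minimal sketch; the `LINKS` binding name and routes are assumptions, while the `get`/`put` calls match the KV binding API shape:

```javascript
// Worker serving a tiny URL shortener backed by a KV namespace bound as LINKS.
const shortener = {
  async fetch(request, env) {
    const { pathname } = new URL(request.url);
    if (request.method === 'POST' && pathname === '/links') {
      const { slug, target } = await request.json();
      await env.LINKS.put(slug, target); // store slug -> target URL
      return Response.json({ slug });
    }
    const target = await env.LINKS.get(pathname.slice(1));
    return target
      ? Response.redirect(target, 302)
      : new Response('Not found', { status: 404 });
  },
};
export default shortener;
```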

Pricing: The Loss Leader Strategy

Cloudflare's pricing model for Workers is one of the most developer-friendly in the industry, and it's worth understanding why.

The free tier includes 100,000 requests per day, 10ms CPU time per invocation, and access to limited versions of KV, R2, and D1[6]. The paid plan starts at $5/month and includes 10 million requests, 30 million CPU milliseconds, and significantly higher limits on all storage primitives[8].

Kai @thinklikekai Mon, 02 Mar 2026 20:32:36 GMT

100k worker requests per day

Free

This is what happens when infrastructure becomes a loss leader

Cloudflare isnt selling compute

Theyre buying distribution

Get developers addicted early

Charge them when they scale

View on X β†’

Kai's analysis is sharp. Cloudflare's core business is network services β€” DDoS protection, CDN, DNS, Zero Trust security. These are high-margin products sold to enterprises. The developer platform is a distribution strategy: get developers building on Workers, and their companies are more likely to adopt Cloudflare's enterprise products. This explains why the pricing is so aggressive and why the free tier is so generous.

But the pricing model has a subtlety that practitioners should understand:

dax @thdxr Mon, 23 Oct 2023 12:43:03 GMT

the billing model of cloudflare workers isn’t talked about enough - it’s crazy disruptive

i thought 1ms billing was huge - cf doesn’t bill you for time where your function is waiting in IO (eg your database)

that’s probably like 80% of all my requests - so it’s 80% cheaper

View on X β†’

Dax's observation about the billing model is one of the most important things to understand about Workers economics. Cloudflare bills for CPU time, not wall-clock time. When your Worker is waiting for a database query, an API call, or any I/O operation, the meter stops. On AWS Lambda, you pay for the entire duration of your function execution, including all the time spent waiting for I/O. For typical web applications where most request time is spent waiting on databases and external services, this difference can reduce costs by 50-80%.
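The arithmetic is worth making concrete. This toy function uses hypothetical numbers, not either vendor's actual rates: for a request that takes 200ms end to end but only 10ms on CPU, wall-clock billing charges 200 units while CPU-time billing charges 10.

```javascript
// Illustrative only: billed time under the two billing models.
function billedMs(wallMs, cpuMs, model) {
  // Wall-clock billing charges the full duration, I/O wait included;
  // CPU-time billing stops the meter while the Worker awaits I/O.
  return model === 'cpu' ? cpuMs : wallMs;
}
billedMs(200, 10, 'wall'); // 200
billedMs(200, 10, 'cpu');  // 10
```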

Beyond the $5/month base, the paid plan bills overages per additional million requests and per additional million CPU milliseconds[6].

For side projects and small applications, the free tier is genuinely sufficient. For production workloads, costs typically remain very low until you hit significant scale.

Mike Codes @Newaicoder Fri, 27 Feb 2026 15:27:47 GMT

Debug yes, scale to 10,000 customers on cloudflare pages (I had projects with 2000 concurrent users on free tier), yes, edge cases payment no, you lose a few bucks but save on developer costs

correct, kid is the bottleneck, ai will soon be my fav child

View on X β†’

Mike's experience running 2,000 concurrent users on the free tier illustrates the practical economics. For many indie developers and small startups, Workers effectively eliminates infrastructure costs as a concern during the early stages of a product.

Real-World Patterns and Production Use Cases

The conversation around Workers has shifted from "what can I build?" to "how should I architect this?" Here are the patterns that experienced teams are using in production.

Pattern 1: Service Bindings as Internal Microservices

Gabriel Massadas @G4brym Sun, 18 Jan 2026 22:59:12 GMT

I built a self-hosted Sentry clone that runs entirely on Cloudflare Workers, and I think it showcases one of the most underrated features in the Cloudflare ecosystem: Service Bindings.

Let me explain why this matters.

When you have multiple Cloudflare Workers (an API, a webhook handler, a cron job), they all need common things: error tracking, authentication, rate limiting, metrics. The typical solution? External HTTP calls to third-party services. That means:

- 50-200ms latency per call
- Egress fees
- Your data leaving your infrastructure
- Another vendor to manage

Service bindings let Workers call each other directly inside Cloudflare's network. No HTTP. No internet. Just internal RPC with <5ms latency.

With Workers Sentinel, any Worker in my account can just point Sentry-SDK into the Service binding, and have all errors flow into one centralized dashboard, stored in Durable Objects with SQLite. No external calls. No added latency.

Service bindings aren't just for error tracking. You can centralize:

πŸ” Authentication β€” One Worker that validates tokens for all your services

πŸ“Š Metrics β€” Centralized collection without external observability costs

🚦 Rate Limiting β€” Shared counters that actually work across Workers

🚩 Feature Flags β€” Instant propagation, no deployment needed

Think of it as building your own internal microservices mesh, but at the edge, with zero network overhead.

Workers Sentinel uses two Durable Objects:
- AuthState (singleton) β€” users, sessions, projects
- ProjectState (per-project) β€” issues, events, stats

Events are fingerprinted and grouped intelligently. The dashboard is a Vue.js app served from the same Worker.

I could say i built this to learn Durable Objects or that I needed error tracking for side projects, but honestly I just need a way to show my wife why I'm sending $200/month to some guy named Claudio who apparently helps me write code.

The whole thing is open source. Deploy it to your Cloudflare account, point your Sentry SDKs at it, and you're done.

But more importantly: take a closer look at service bindings. They're the glue that turns a collection of Workers into an actual platform. Most Cloudflare customers I talk to aren't using them, and they're missing out.

To the Sentry team: I love your work. Genuinely. Sentry is battle-tested, has incredible features, and is what you should use for anything that matters. This project is a toy. A learning exercise. A weekend hack that got slightly out of hand.

Please do not trust your production errors to this dummy clone. If your startup goes down at 3 AM because Workers Sentinel missed an edge case, that's on you. I warned you. Use the real thing.

But if you want to learn about Durable Objects, service bindings, and how error tracking works under the hood? Clone away.

Your Workers shouldn't be islands. Connect them.

View on X β†’

Gabriel's Sentry clone is a masterclass in Workers architecture. Service Bindings let Workers call each other directly within Cloudflare's network β€” no HTTP, no internet traversal, sub-5ms latency. This turns a collection of Workers into an internal microservices mesh with effectively zero network overhead.

The pattern is powerful: instead of each Worker independently calling external services for auth, logging, rate limiting, and feature flags, you build dedicated Workers for each concern and connect them via Service Bindings. The calling Worker invokes the service Worker's methods directly, as if they were local function calls. This is documented as a best practice in Cloudflare's official guidance[12].

For production architectures, this means you can decompose your application into focused, independently deployable Workers while maintaining the performance characteristics of a monolith. It's the best of both worlds β€” if you structure it well.
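A hedged sketch of the pattern: a gateway Worker delegates token checks to a dedicated auth Worker over a Service Binding. The `AUTH` binding name and the `/verify` route are illustrative, not from Gabriel's project; the call never leaves Cloudflare's network.

```javascript
// Gateway Worker that delegates authentication to a bound auth Worker.
const gateway = {
  async fetch(request, env) {
    // env.AUTH.fetch() invokes the bound Worker directly; no public HTTP hop.
    const verdict = await env.AUTH.fetch('https://auth.internal/verify', {
      headers: { Authorization: request.headers.get('Authorization') ?? '' },
    });
    if (verdict.status !== 200) {
      return new Response('Unauthorized', { status: 401 });
    }
    return Response.json({ ok: true });
  },
};
export default gateway;
```

In wrangler configuration, `AUTH` would be declared as a service binding pointing at the auth Worker by name.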

Pattern 2: Edge-First Full-Stack Applications

Cloudflare @Cloudflare Tue, 24 Feb 2026 21:54:24 GMT

We rebuilt Next.js in a week. No, really.

The team ported the framework to run natively on Workers to prove what’s possible with edge-first architecture. Dive into the technical hurdles we solved to eliminate Node.js dependencies.

https://blog.cloudflare.com/vinext/?utm_campaign=cf_blog&utm_content=20260224&utm_medium=organic_social&utm_source=twitter/

View on X β†’

Cloudflare rebuilding Next.js to run natively on Workers is a statement of intent. The project (internally called "ViNext") demonstrates that full-stack frameworks can run entirely at the edge, eliminating the need for a centralized origin server[1]. This isn't just a technical demo β€” it's the direction the platform is heading.

The practical implication is that you can deploy a complete Next.js application (or Remix, Astro, SvelteKit, or Nuxt) on Workers with server-side rendering happening at the edge location closest to each user. Combined with D1 for data and R2 for assets, you get a full-stack application with global distribution and no servers to manage.

Cloudflare @Cloudflare Tue, 30 Sep 2025 16:00:03 GMT

Tired of managing servers for your CMS? Now you can run the powerful, open-source @payloadcms entirely on Cloudflare Workers. Deploy your own serverless CMS in one click: https://blog.cloudflare.com/payload-cms-workers/?utm_campaign=cf_blog&utm_content=20250930&utm_medium=organic_social&utm_source=twitter/

View on X β†’

Payload CMS running on Workers is another example of this pattern. A full content management system β€” typically a Node.js application requiring a server and a database β€” now deploys as a Worker with D1 storage. One-click deployment, zero server management, global distribution.

Pattern 3: AI Agents and Stateful Edge Compute

Saeed Anwar @saen_dev Fri, 27 Feb 2026 17:32:31 GMT

Cloudflare Workers for agents is criminally underrated β€” cold start is ~0ms and Durable Objects give you stateful agents without a separate Redis instance.
The gotcha: D1 has no streaming, so long agent runs need a workaround for result pagination.

View on X β†’

The intersection of Workers and AI agents is one of the most active areas of development in 2025. Durable Objects provide the stateful coordination layer that agents need β€” maintaining conversation history, managing tool calls, tracking multi-step workflows β€” without requiring a separate Redis or database instance. The near-zero cold start means agents respond quickly, and the global distribution means they run close to users.
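The state-holding piece can be sketched as a Durable Object (class name, route shape, and storage keys here are illustrative). Each conversation id maps to exactly one object instance, so concurrent appends are serialized without an external Redis:

```javascript
// Durable Object holding per-conversation agent state (classic constructor syntax).
class ConversationState {
  constructor(state, env) {
    this.storage = state.storage; // transactional key-value storage API
  }
  async fetch(request) {
    const history = (await this.storage.get('history')) ?? [];
    if (request.method === 'POST') {
      history.push(await request.json()); // append one chat message
      await this.storage.put('history', history);
    }
    return Response.json(history);
  }
}
export { ConversationState };
```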

Cloudflare's Workers AI service provides access to open-source models (Llama, Mistral, Stable Diffusion, Whisper) that run on GPUs at Cloudflare's edge locations[11]. Combined with the recently released workers-oauth-provider for MCP (Model Context Protocol) servers, Workers is becoming a natural platform for deploying AI agents.

Matt Carey @mattzcarey Wed, 04 Mar 2026 11:30:39 GMT

πŸ“£ New release of @ cloudflare/workers-oauth-provider - v0.3.0

CIMD support hardened and now explicitly opt in. Works with Claude .ai and any MCP Client that supports CIMD.

+ a bunch of fixes about revoking existing grants and DCR edge cases with client_secret

npm i @cloudflare/workers-oauth-provider@0.3.0 in your MCP servers to upgrade.

View on X β†’

The MCP OAuth provider is significant because it enables Workers to act as authenticated tool providers for AI assistants like Claude. This is the emerging pattern: Workers as the compute layer for AI agents, Durable Objects for state management, Workers AI for inference, and Service Bindings for connecting to other services.

Pattern 4: Non-JavaScript Workloads

Workers supports WebAssembly (Wasm), which means you can compile code written in Rust, C, C++, Go, and other languages to run on the platform.

TrafficDATA.site @TrafficDATAsite Thu, 05 Mar 2026 04:05:08 GMT

Shipped a SaaS product built 100% with @DioxusLabs + Cloudflare Workers.

TrafficDATA — real-time bot detection & traffic analytics. Dioxus 0.7 streaming SSR compiled to WASM, running on 300+ edge locations.

One Rust codebase. Zero JS.

https://trafficdata.site/

View on X β†’

Running a full Rust application compiled to Wasm on Workers β€” with streaming SSR, no JavaScript β€” demonstrates the platform's flexibility beyond its JavaScript roots. The workers-rs crate provides Rust bindings for the Workers API[5], and the Wasm support means computationally intensive tasks (image processing, cryptography, data parsing) can run at near-native speed.

The Limits That Actually Matter

Every platform has limits, and Workers' limits are particularly important to understand because they're architectural β€” they stem from the V8 isolate model, not from arbitrary business decisions.

CPU Time Limits: 10ms on the free plan, 30 seconds on the paid plan[7]. This is CPU time, not wall-clock time, so I/O wait doesn't count. But for genuinely CPU-intensive work β€” complex data transformations, heavy JSON parsing, image manipulation β€” 30 seconds can be a real constraint. Workflows help by breaking work into steps, but each step still has the 30-second limit.

Memory: 128MB per Worker invocation[7]. This is the total memory available to your isolate, including your code, its dependencies, and any data you're processing. For most web request handling, this is plenty. For processing large files or datasets in memory, it's a hard wall.

Request/Response Body Size: 100MB on the free plan, and while the paid plan supports larger bodies through streaming, you can't buffer arbitrarily large payloads in memory[7].

No Native Binaries: You can't run arbitrary executables. No FFmpeg, no ImageMagick, no headless Chrome (though Cloudflare provides Browser Rendering as a managed service). This is where Containers come in β€” if your workload needs a full Linux environment, Workers isolates aren't the right primitive.

No Persistent Filesystem: There's no local disk. All persistence must go through KV, R2, D1, or Durable Objects. Code that assumes it can write to /tmp won't work.

No Long-Lived Connections (from Workers): Workers can accept WebSocket connections (especially via Durable Objects), but establishing outbound persistent connections to external services requires care. Hyperdrive helps with database connections, but arbitrary TCP connections are limited.

Harsh Kasana @0xkasana Sun, 01 Mar 2026 17:24:13 GMT

vercel cloudflare are serverless they break so many types of code

vibe coders could never detect them even.

So sometime you need railway, render etc too

View on X β†’

Harsh's point is important and often underappreciated. Serverless platforms β€” both Vercel and Cloudflare β€” impose constraints that break certain categories of code. Libraries that depend on filesystem access, long-running processes, native binaries, or persistent in-memory state won't work on Workers. For developers coming from traditional server environments, these failures can be subtle and hard to diagnose. If your application fundamentally needs a long-running server process, platforms like Railway or Render remain better choices.

The best practices guide published by Cloudflare in early 2026 addresses many of these patterns, providing official guidance on structuring Workers for production use[2].

Workers vs. The Competition

The serverless landscape in 2025 has several serious contenders, and the right choice depends on your specific requirements.

Workers vs. AWS Lambda: Lambda offers more runtime flexibility (Python, Java, .NET, Go natively), higher resource limits (10GB memory, 15-minute execution), and deeper integration with the AWS ecosystem. Workers offers dramatically lower cold starts, CPU-only billing, global distribution by default, and simpler developer experience. For enterprise applications deeply integrated with AWS services, Lambda is the natural choice. For latency-sensitive, globally distributed workloads, Workers has a structural advantage.

glen_miracle @glen_miracle4 Thu, 16 Oct 2025 11:32:06 GMT

Today I decided to run a quick comparison between the two leading serverless platforms in the market: AWS Lamda vs Cloudflare Workers. Let's just say AWS takes the win here.
You can read more
https://5ly.co/blog/aws-lambda-vs-cloudflare-workers/

let me know your thoughts.

View on X β†’

Glen's comparison highlights that "better" depends entirely on what you're optimizing for. Lambda wins on raw capability and ecosystem breadth. Workers wins on latency, pricing model, and developer experience for web-focused workloads.

Workers vs. Vercel/Netlify Edge Functions: Vercel's Edge Functions actually run on Cloudflare's network (Vercel is a Cloudflare customer), but Vercel adds framework-specific optimizations, a more opinionated deployment pipeline, and tighter integration with Next.js. If you're building a Next.js application and want the simplest possible deployment experience, Vercel is hard to beat. If you want more control, lower costs at scale, and access to Cloudflare's full storage ecosystem, Workers gives you more leverage.

Workers vs. Deno Deploy/Fly.io: Deno Deploy uses a similar V8 isolate model and offers comparable cold start performance. Fly.io runs full Linux VMs close to users, offering more flexibility but with traditional container cold starts. Both are excellent platforms; the choice often comes down to ecosystem preferences and specific feature requirements.

Getting Started: The Practical Path

For developers new to Workers, the onboarding experience has improved significantly. The wrangler CLI is the primary development tool[3]:

```bash
npm create cloudflare@latest my-worker   # scaffold a new project
cd my-worker
npx wrangler dev     # local development with hot reload
npx wrangler deploy  # deploy globally in seconds
```

A minimal Worker looks like this:

```javascript
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    if (url.pathname === '/api/hello') {
      return Response.json({ message: 'Hello from the edge!' });
    }
    return new Response('Not found', { status: 404 });
  },
};
```

The env parameter gives you access to all bound resources β€” KV namespaces, D1 databases, R2 buckets, Durable Objects, Service Bindings, and secrets. The ctx parameter provides waitUntil() for fire-and-forget async work that continues after the response is sent.
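The waitUntil() pattern looks like this in practice. A hedged sketch: the `LOGS` KV binding and key format are hypothetical, but the handler shape matches the Workers API. The response is returned immediately while the background write continues, and the runtime keeps the invocation alive until the promise settles:

```javascript
// Worker that responds immediately and logs the hit in the background.
const worker = {
  async fetch(request, env, ctx) {
    ctx.waitUntil(env.LOGS.put(`hit:${request.url}`, new Date().toISOString()));
    return Response.json({ ok: true }); // sent without waiting for the log write
  },
};
export default worker;
```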

For full-stack applications, the recommended path in 2025 is to use a framework adapter. Remix, Astro, SvelteKit, Nuxt, and (increasingly) Next.js all have Workers-compatible deployment targets. These frameworks handle routing, server-side rendering, and static asset serving, while Workers provides the runtime and Cloudflare's storage primitives provide the data layer.

Vratesh Ghadge @VrateshGhadge Wed, 04 Mar 2026 17:12:43 GMT

Leveling up my serverless database skills! ⚑️

Combined Prisma ORM with Cloudflare Workers today. Progress:

> Prisma schemas & migrations
> Serverless Connection Pooling
> Prisma Accelerate setup
> wrangler.toml configs

Edge computing is the future! πŸ‘‡

#Prisma #Cloudflare

View on X β†’

Prisma with Workers via Prisma Accelerate is a common pattern for teams that want to use an ORM with an existing PostgreSQL database. Hyperdrive is the alternative for direct database connections without an ORM. Both approaches let you keep your existing database while running compute at the edge.

judah @joodalooped Mon, 21 Apr 2025 05:15:52 GMT

btw if you’re looking for side project ideas or any kind of useful thing to make

Cloudflare’s Workers ecosystem has basically changed the game on self-hosting costs, and minimal alternatives to overpriced stuff are accessible to wayyy more people than before

View on X β†’

Judah's observation about self-hosting costs captures why Workers has become popular for side projects and indie products. The combination of generous free tiers, zero egress on R2, and CPU-only billing means you can run meaningful applications for little to no cost. The ecosystem of templates and one-click deploys[13] lowers the barrier further.

Advanced Patterns for Production

Cron Triggers: Workers can run on a schedule using Cron Triggers, executing at specified intervals without an incoming HTTP request. This is useful for data synchronization, cleanup tasks, report generation, and health checks. Each cron invocation is subject to the same CPU time limits as regular requests.
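A scheduled handler sits alongside (or instead of) fetch in the same Worker. This sketch assumes a hypothetical `CACHE` KV binding and cleanup task; `event.cron` carries the matching cron expression and `event.scheduledTime` the scheduled timestamp:

```javascript
// Worker invoked by a Cron Trigger rather than an HTTP request.
const cronWorker = {
  async scheduled(event, env, ctx) {
    // Nightly cleanup: drop yesterday's cached report.
    await env.CACHE.delete('daily-report');
  },
};
export default cronWorker;
```

The schedule itself is declared in wrangler configuration as a list of cron expressions.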

Tail Workers: For observability, Tail Workers receive logs from other Workers after they complete execution. This lets you build custom logging, analytics, and error tracking pipelines without adding latency to your primary request path.

Smart Placement: By default, Workers run at the edge location closest to the user. Smart Placement automatically moves Worker execution closer to your backend services (databases, APIs) when the Worker spends most of its time communicating with a centralized backend. This reduces round-trip latency to your data sources at the cost of slightly higher latency to the end user β€” a worthwhile tradeoff for data-heavy applications.

Static Assets: Workers now supports serving static assets directly, without requiring Cloudflare Pages as a separate product[4]. This means a single Worker can serve your frontend assets and handle API requests, simplifying deployment and routing.
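Wiring static assets into a Worker is a config-level change. A minimal sketch, assuming the current wrangler.toml assets syntax (the directory path is illustrative):

```toml
# wrangler.toml — serve static files and API routes from one Worker
name = "my-app"
main = "src/index.js"
compatibility_date = "2025-03-01"

[assets]
directory = "./public"
```

Requests matching files under the assets directory are served directly; everything else falls through to the Worker's fetch handler.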

sunil pai @threepointone Thu, 26 Sep 2024 13:50:34 GMT

They did it: Static assets. With regular workers projects. Which means access to all the other cloudflare services, no compromises.

You should build your next website/app/idea on Workers.

View on X →

Sunil Pai's excitement about static assets is well-placed. This was a long-requested feature that eliminates the awkward split between Workers (for compute) and Pages (for static sites). Now you can build a complete application — frontend and backend — as a single Workers project with access to all of Cloudflare's services.
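A minimal single-Worker setup along these lines might look like the following. The ASSETS binding name, the public directory, and the /api/ route are illustrative choices, not prescribed by the platform.

```javascript
// One Worker serving both static assets and an API.
// wrangler.toml wires up the assets directory and binding:
//
//   [assets]
//   directory = "./public"
//   binding = "ASSETS"

// Pure helper so the routing decision is easy to test.
function isApiRequest(pathname) {
  return pathname === "/api" || pathname.startsWith("/api/");
}

const app = {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (isApiRequest(url.pathname)) {
      // API routes are handled in the Worker itself.
      return new Response(JSON.stringify({ ok: true }), {
        headers: { "content-type": "application/json" },
      });
    }
    // Everything else falls through to the static asset handler.
    return env.ASSETS.fetch(request);
  },
};

// In a real Worker: export default app;
```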

The Ecosystem and Community

The Workers ecosystem in 2025 extends well beyond Cloudflare's own products. The open-source workerd runtime — the same engine that powers Workers in production — can run locally for development and testing. Miniflare provides local emulation of Cloudflare-specific bindings such as KV, R2, and D1. And the community has built a rich ecosystem of libraries, templates, and tools.

Community projects range from routers and web frameworks built for the Workers runtime to templates and starter kits. The Cloudflare Workers Discord and community forums are active, and Cloudflare's developer documentation[14] has improved substantially, with architecture guides, demos, and best practices covering common patterns[13].

When Not to Use Workers

Intellectual honesty requires acknowledging where Workers isn't the right choice:

  1. Long-running compute: If your workload needs more than 30 seconds of CPU time per operation, Workers isn't suitable without significant architectural changes (breaking work into Workflow steps, using Containers, or offloading to external compute).
  2. Heavy native dependencies: If your application depends on native binaries (FFmpeg, Puppeteer with custom Chrome, scientific computing libraries), you need Containers or a traditional server.
  3. Large in-memory datasets: The 128MB memory limit means you can't load large datasets into memory for processing. Use streaming patterns or external compute.
  4. Existing complex infrastructure: If you have a mature Kubernetes deployment with dozens of services, extensive monitoring, and established deployment pipelines, migrating to Workers is a significant undertaking. Workers VPC can help bridge the gap, but it's not a drop-in replacement.
  5. Regulatory requirements: Some compliance frameworks require compute to run in specific geographic regions. Workers runs globally by default, and while jurisdiction restrictions exist, they're not as granular as choosing a specific AWS region.

Conclusion

Cloudflare Workers in 2025 is no longer an edge computing experiment — it's a mature, full-stack platform with a compelling economic model and a rapidly expanding capability set. The V8 isolate architecture that defines Workers provides genuine advantages in cold start performance and global distribution that container-based platforms can't easily replicate. The CPU-only billing model is structurally cheaper for I/O-heavy web workloads. And the storage ecosystem — KV, R2, D1, Durable Objects, Queues, Hyperdrive — provides the building blocks for complete applications without leaving Cloudflare's network.

The most significant development in 2025 isn't any single feature — it's the platform reaching a tipping point of completeness. Node.js compatibility has crossed the threshold where most npm packages work. Containers fill the gap for workloads that don't fit the isolate model. Workflows and Pipelines handle async processing. Workers VPC connects to existing infrastructure. Static assets eliminate the need for a separate hosting solution. The result is a platform where the answer to "can Workers do this?" is increasingly "yes" rather than "not yet."

But Workers isn't the right tool for every job. The isolate model imposes real constraints — memory limits, no filesystem, no native binaries, CPU time caps — that matter for certain workloads. The serverless paradigm itself introduces complexity that traditional servers don't have. And Cloudflare's aggressive pricing, while genuinely developer-friendly, is also a strategic play to build platform lock-in through developer adoption.

For developers evaluating Workers today, the practical advice is straightforward: start with a side project or a non-critical workload. Deploy something. Hit the limits. Understand the programming model. Then decide whether it fits your production needs. The free tier is generous enough to learn on, the deployment experience is fast enough to iterate quickly, and the platform is mature enough to run real workloads. The edge is no longer the future of compute — for a growing number of applications, it's the present.


Sources

[1] Developer Week 2025 wrap-up - The Cloudflare Blog — https://blog.cloudflare.com/developer-week-2025-wrap-up

[2] New Best Practices guide for Workers · Changelog - Cloudflare Docs — https://developers.cloudflare.com/changelog/post/2026-02-15-workers-best-practices

[3] Cloudflare Workers Development Guide 2025 - Complete Tutorial — https://www.clodo.dev/cloudflare-workers-development-guide

[4] Developer Week 2025 Recap: Everything Cloudflare Just Shipped — https://flaredup.substack.com/p/developer-week-2025-recap-everything

[5] Releases · cloudflare/workers-rs — https://github.com/cloudflare/workers-rs/releases

[6] Pricing · Cloudflare Workers docs — https://developers.cloudflare.com/workers/platform/pricing

[7] Limits · Cloudflare Workers docs — https://developers.cloudflare.com/workers/platform/limits

[8] Workers & Pages Pricing | Cloudflare — https://www.cloudflare.com/plans/developer-platform

[9] Cloudflare AI Gateway Pricing (2026): Costs & Limits - TrueFoundry — https://www.truefoundry.com/blog/cloudflare-ai-gateway-pricing

[10] Cloudflare Pricing 2025: Don't Pay for Pro/Business Until You Read ... — https://eastondev.com/blog/en/posts/dev/20251201-cloudflare-pricing-compare

[11] Limits · Cloudflare Workers AI docs — https://developers.cloudflare.com/workers-ai/platform/limits

[12] Workers Best Practices — https://developers.cloudflare.com/workers/best-practices/workers-best-practices

[13] Cloudflare Workers: Demos and Architectures — https://developers.cloudflare.com/workers/demos

[14] The Ultimate Guide to Cloudflare Workers - Edge Computing Made Easy — https://medium.com/@calebrocca/the-ultimate-guide-to-cloudflare-workers-edge-computing-made-easy-da2469af7bc0