What Is Cloudflare Workers? A Complete Guide for 2026
Cloudflare Workers explained: how it runs at the edge, why teams are migrating, and where it fits best in modern stacks.

Why Teams Are Switching: Infrastructure Friction Is the New Bottleneck
For a long time, developer infrastructure was sold primarily on technical superiority: lower latency, better scaling, more control, more flexibility. Those things still matter. But if you read what practitioners are actually saying now, the emotional center of the conversation has shifted.
The complaint is no longer, "my code is hard to write." It is, "everything around the code is slowing me down."
That distinction matters because it explains a lot of Cloudflare Workers' current momentum. Workers is not winning attention only because it runs code close to users. It is winning because it promises to remove a whole category of operational drag: container setup, CI pipeline wrangling, custom deployment glue, region management, TLS configuration, fleet scaling, and the endless paper cuts that make simple changes feel expensive.
In the age of AI and coding agents, your tech stack barely matters anymore. What matters is that it gets out of your way. I build production pilots for clients, sometimes 2-3 running in parallel. The bottleneck is never writing code anymore. It's everything around it. Waiting for builds. Debugging CI. Fighting deployment configs at midnight. I moved everything to Cloudflare Workers and it genuinely changed how I work. wrangler deploy and it's live. Globally. In seconds. No Docker, no GitHub Actions, no infra to think about. When you're shipping updates 10+ times a day across multiple projects, that friction compounds. Removing it is the real productivity hack, not the AI.
This is a familiar feeling for teams shipping modern apps in 2026. AI-assisted coding has accelerated the production of code itself. Small teams can generate prototypes, integrations, and internal tools far faster than they could even two years ago. The new bottleneck is delivery: getting that code packaged, deployed, reachable, secure, and globally available without inventing more infrastructure than the product deserves.
Cloudflare Workers is designed around that pain. The platform lets you write JavaScript or TypeScript and deploy it to Cloudflare's global network using Wrangler, Cloudflare's CLI and development toolkit.[5] In practice, that means a developer can move from local code to globally distributed execution with a much shorter path than with traditional cloud stacks, where Docker images, CI runners, registries, ingress layers, and environment orchestration often sit between "it works" and "users can hit it."
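To make that concrete, here is a minimal Worker-style fetch handler as a sketch. It uses only Web-standard APIs (Request, Response, URL), which is why the same object can be exercised locally; in an actual project the object would be the module's default export, and `wrangler deploy` would publish it.

```javascript
// Minimal Worker-style handler: a plain object with an async fetch method.
// In a real Workers project this object would be exported with `export default`.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      // Return a JSON response directly from the edge.
      return new Response(JSON.stringify({ message: "hello from the edge" }), {
        headers: { "content-type": "application/json" },
      });
    }
    return new Response("Not found", { status: 404 });
  },
};
```

Because the handler is built entirely on Web-standard primitives, it runs unchanged under `wrangler dev` locally and on Cloudflare's network after deploy; that is the short path from "it works" to "users can hit it."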
Cloudflare's own framing of Workers is broader than "edge functions." The docs position Workers as a platform for building applications on Cloudflare's network, not merely a place to attach a small function to a CDN.[2] That language is important because it reflects the platform's real appeal: operational simplification. It is not just an optimization layer for an existing architecture; for many teams, it is becoming the architecture.
The key phrase in that post is "wrangler deploy and it's live. Globally. In seconds." That sounds like marketing copy until you compare it with what many teams are still doing elsewhere:
- build container images
- push them to a registry
- wait for CI to succeed
- update service definitions
- roll out across environments
- verify ingress and TLS
- debug one environment-specific mismatch
- discover the production path still differs from staging
None of those tasks are unusual. All of them consume time. And almost none of them are the product.
Workers compresses that loop. Wrangler gives developers a standardized interface for local development, configuration, secrets, and deployment.[5] Cloudflare handles the global network, TLS termination, request routing, and scaling fabric behind the scenes.[2] For teams that do not want their differentiation to live in infrastructure, that is the whole point.
This is especially compelling for certain categories of builders:
- Founders shipping MVPs and paid pilots who need speed over bespoke infrastructure
- Agencies and consultancies juggling many client projects simultaneously
- Product teams building globally used APIs or dashboards without wanting a platform team first
- Internal tool builders who want something deployable immediately without ticketing half the company
Cloudflare Workers also benefits from timing. When the product launched in 2018, it was a novel edge compute model with a lot of curiosity but also lots of skepticism.[3] Today, the market context is different. Teams have spent years living through Kubernetes sprawl, overbuilt CI, and framework-specific hosting tradeoffs. The threshold for trying a simpler platform is much lower when the old complexity has already worn people down.
Another post in the conversation captures why the operational simplification story now lands even more strongly than the pure speed story:
what sold me on SkillBoss wasn't the speed. it was the architecture.
everything deploys to Cloudflare Workers. edge-first. no cold starts. automatic SSL. global CDN. D1 for data. R2 for storage. KV for sessions.
this isn't a toy demo tool.
it's production-grade infra that happens to be controlled by natural language. that distinction matters.
That is the right lens. Workers is increasingly being evaluated as production-grade infrastructure with fewer moving parts, not as a clever edge hack.
There is also a subtle but important psychological effect here: when deployment becomes cheap, teams change behavior. They ship smaller changes. They experiment more. They are less afraid of infrastructure-heavy rewrites. The productivity gain is not only fewer minutes spent waiting; it is a faster organizational feedback loop.
So why are teams switching? Because in many organizations, the cost of operating software has overtaken the cost of writing it. Cloudflare Workers offers a credible answer to that problem. Its value proposition is no longer "run a tiny function at the edge." It is "ship real software without dragging an entire operations stack behind every deploy."
That is a much bigger market.
How Cloudflare Workers Actually Works Under the Hood
To understand why Workers feels different, you need a concrete mental model of what it actually is.
At the simplest level, Cloudflare Workers runs your code on Cloudflare's global network, near where incoming requests arrive.[1] Instead of sending every request back to a centralized origin server in one region, Cloudflare can execute logic at or near the network edge. That logic can transform requests, generate responses, render pages, call databases, stream data, authenticate users, or orchestrate other services.
But that description still hides the most important technical distinction: Workers does not primarily rely on spinning up a traditional virtual machine or a full container per request. It relies on V8 isolates.
The isolate model in plain English
A V8 isolate is a lightweight, sandboxed execution environment inside the V8 JavaScript engine, the same engine that powers Chrome and Node.js.[9] Multiple isolates can run on the same machine while remaining strongly separated from one another. Cloudflare has long argued that this model offers a very different efficiency profile from VM- or container-based serverless systems: lower overhead, higher density, and faster startup behavior.[1][7][8]
If you are used to cloud infrastructure, here is the rough comparison:
- Virtual machine: strong isolation, heavy startup cost, lots of OS overhead
- Container: lighter than a VM, but still a full process environment with filesystem and runtime baggage
- Traditional serverless function: often built on containers or microVMs; a better abstraction, but one that can still involve notable cold starts
- V8 isolate: much smaller execution unit inside a shared runtime process, designed to start quickly and consume far less memory
That does not mean isolates are "just threads" or "not really isolated." Cloudflare documents a specific security model around Workers and has published substantial material about sandboxing and hardening to make isolate-based multi-tenancy viable at scale.[1][7]
What happens when a request hits a Worker
Cloudflare's official explanation is straightforward: requests arrive at Cloudflare's edge, and your Worker can be invoked as part of request processing on the network.[1] Instead of bouncing requests through a user-managed origin before logic executes, Cloudflare can evaluate code directly in the path of the request.
The practical flow looks like this:
- A user makes an HTTP request.
- The request reaches Cloudflareâs network.
- Cloudflare determines whether a Worker should handle the request.
- The Worker code executes in an isolate.
- The Worker may:
- return a response immediately
- fetch from another service
- read or write to platform primitives like KV, D1, or R2
- call another Worker through a binding
- stream a response back incrementally
- The response goes back out through Cloudflareâs edge network to the user.
For developers, the magic is that this feels close to writing a normal web handler. For operators, the magic is that they are not provisioning a machine fleet in every region to make it happen.
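The flow above can be sketched as a single handler. This assumes a hypothetical KV binding named CACHE (binding names are declared per project in Wrangler configuration); the in-memory stand-in at the bottom mimics just enough of the KV get/put interface to run the handler outside Cloudflare:

```javascript
// Worker sketch following the request flow: check a KV-style cache,
// generate on miss, write back, respond. CACHE is a hypothetical binding.
const worker = {
  async fetch(request, env) {
    const url = new URL(request.url);
    const cached = await env.CACHE.get(url.pathname);
    if (cached != null) {
      return new Response(cached, { headers: { "x-cache": "hit" } });
    }
    const body = `rendered for ${url.pathname}`;
    await env.CACHE.put(url.pathname, body);
    return new Response(body, { headers: { "x-cache": "miss" } });
  },
};

// In-memory stand-in for the KV binding, for local experimentation only.
const store = new Map();
const env = {
  CACHE: {
    get: async (key) => (store.has(key) ? store.get(key) : null),
    put: async (key, value) => { store.set(key, value); },
  },
};
```

The handler itself is just a normal async function over Web-standard Request and Response objects, which is why the model feels familiar to anyone who has written a web handler before.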
Why people say "no cold starts"
You will constantly hear Workers described as having "no cold starts." Taken literally, that is too simplistic. All platforms have some notion of startup, placement, or initialization. The more accurate claim is that Workers' isolate model can make startup overhead much smaller and less visible than in systems that must provision heavier execution environments.
Cloudflare explicitly describes Workers as leveraging isolates that can be created quickly and efficiently, helping reduce the latency penalties associated with cold starts.[1][9] And the platform has evolved here. One of the more interesting details in the X conversation came from Kenton Varda, the tech lead of Cloudflare Workers:
Believe it or not, until recently, Cloudflare Workers would just run your Worker on whatever random machine the HTTP request landed on. Isolates are so cheap, this worked fine. But we now do a little bit of routing within the LAN to make cold starts less frequent.
That single post captures a lot. Early on, Workers could often run on whichever machine first received the request, because isolates were cheap enough that broad placement still worked well. More recently, Cloudflare added internal LAN routing to make cold starts less frequent. In other words, the platform is not static; Cloudflare is actively tuning the scheduler and placement behavior to improve real-world startup patterns.
That matters because "cold start" is not one thing. It is the emergent result of:
- how code is distributed
- where requests land
- how execution environments are reused
- how much state needs initializing
- how aggressively the platform routes requests to warm capacity
Workers benefits from the lightweight nature of isolates, but it also benefits from platform-level scheduling improvements.
Why isolates can be faster to start
Traditional serverless startup often involves substantial bootstrapping: starting a process, loading a runtime, initializing framework code, mounting a filesystem or image, and possibly restoring networking state. Isolates shrink this overhead because they live inside an already running engine process.[8]
That gives Cloudflare a few advantages:
- Higher density: more tenants and executions can share underlying infrastructure efficiently
- Faster startup: less runtime environment to initialize per invocation
- Better global economics: smaller execution units make worldwide placement more feasible
This density is one reason edge deployment is economically plausible. Running code in many locations around the world is much easier when each execution unit is tiny.
Security: how can shared infrastructure be safe?
This is the obvious objection to isolate-based multi-tenancy. If many customersâ code runs on the same machine, why is that okay?
Cloudflare's answer is that Workers is built around sandboxing and strict process-level security boundaries enforced in and around the V8 engine, with additional hardening layers.[7] The company's security model documentation explains how Workers isolates are constrained and how access to system resources is mediated rather than exposed directly, as it would be on a normal server.[1][7]
This leads to one of the platformâs defining tradeoffs: you get less raw system access in exchange for a simpler, safer, and more globally scalable execution model. That is why Workers can feel wonderfully frictionless for web workloads and occasionally frustrating for workloads that expect full OS semantics.
Node compatibility: similar enough, but not identical
A major source of confusion is that Workers runs JavaScript and increasingly supports Node.js APIs, but it is not "you ssh into a server and run Node 24 exactly as-is." That distinction shows up repeatedly in migration debates.
Cloudflare has expanded Node compatibility substantially over time, which is one reason migrations are accelerating.[2] But the runtime is still Workers' runtime. It is Web-standards-oriented, isolate-based, and constrained in ways that differ from traditional long-lived Node processes. For many web apps, that difference is now small enough to be manageable. For some packages and runtime assumptions, it still matters a lot.
Why the edge part matters
If you strip away the buzzword, "edge" here means executing user-facing logic geographically closer to where requests enter the network. That can improve latency, especially for tasks like:
- request authentication
- redirects and routing
- API mediation
- personalization
- streaming responses
- edge rendering
- cache-aware response generation
But edge computing is not free magic. If your Worker immediately has to call a database sitting in one region or an internal API deep in a centralized VPC, then part of the latency advantage can disappear. That is why the platform story around storage and private connectivity matters so much, which we will get to later.
The right mental model is this: Workers gives you a globally distributed control plane for application logic. It is strongest when the logic itself benefits from being close to users and when your dependencies are designed to complement that placement.
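A couple of those edge-friendly tasks, sketched as a single handler (the routes and header conventions here are illustrative, not taken from any specific app):

```javascript
// Illustrative edge logic: a permanent redirect and a cheap auth gate,
// both handled before any origin server is involved.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);

    // Redirect a legacy path at the edge.
    if (url.pathname === "/old-docs") {
      return Response.redirect(new URL("/docs", url).toString(), 301);
    }

    // Reject unauthenticated API calls without a round trip to the origin.
    if (url.pathname.startsWith("/api/")) {
      const auth = request.headers.get("authorization") ?? "";
      if (!auth.startsWith("Bearer ")) {
        return new Response("Unauthorized", { status: 401 });
      }
    }

    return new Response("OK");
  },
};
```

Both branches are pure request-in, response-out logic with no regional dependency, which is exactly the kind of work that benefits from running at the edge.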
So what is Cloudflare Workers, really?
The best concise definition for practitioners is:
Cloudflare Workers is an isolate-based, globally distributed application runtime that lets you run web logic on Cloudflare's network without managing servers or containers.
That one sentence explains why it feels different:
- isolate-based explains the runtime and startup characteristics
- globally distributed explains the edge and latency story
- application runtime explains that it is more than simple request rewriting
- without managing servers or containers explains why teams see it as an operational simplifier
Once you understand that, much of the current migration wave makes more sense. Teams are not simply "moving to the edge." They are opting into a very different compute model, one that trades some environment generality for faster deployment, lower operational overhead, and a tighter path from code to global execution.
Why Migrations Are Accelerating From Vercel, Kubernetes, and Node Servers
A year or two ago, many teams looked at Cloudflare Workers with interest but kept production workloads elsewhere. The reasons were understandable: compatibility gaps, platform limits, immature framework support, awkward database stories, and the sense that "edge runtimes" were still best for tiny handlers rather than whole systems.
That has changed. Not completely, and not for every workload, but enough that migrations have gone from fringe experiments to a recognizable pattern.
You can see it directly in the conversation:
We've just migrated our entire infrastructure from Vercel to Cloudflare. In the spirit of transparency, here's a breakdown of why we did it, how it went, and what it means moving forward.
And again here:
Why I Chose Cloudflare's Edge-First Stack for My B2B Marketplace. New blog post (after months of silence): From Next.js + Vercel to Cloudflare Workers: Why I rebuilt my entire infrastructure.
Those are not toy examples. They reflect a broader shift in how teams evaluate hosting: not "which platform matches our current stack most closely?" but "which platform lets us simplify the stack while keeping acceptable compatibility?"
The three migration paths showing up most often
In practice, the migration wave clusters around a few repeatable paths.
1. From Vercel and Next.js hosting to Workers
This is probably the highest-profile migration pattern because Vercel has long been the default home for modern frontend teams, especially those deeply invested in Next.js.
Why move?
- lower infrastructure cost expectations
- desire for more integrated backend primitives
- frustration with platform boundaries or pricing
- interest in global edge execution as a default, not an add-on
- willingness to rebuild around Cloudflare-native patterns
Cloudflare has leaned into this by making full-stack development on Workers a first-class story, with frontend, backend, and data living on the same platform.[6] That is important. The migration case is much stronger when Workers is not merely âwhere your frontend functions run,â but where your app can actually live.
2. From Kubernetes-hosted internal or web apps to Workers
This is an underappreciated trend. Kubernetes is incredibly powerful, but many teams are paying a complexity tax for workloads that do not need that much machinery.
The X conversation included a striking example:
a team at @cloudflare just moved an internal nextjs app that used to run in k8s to cloudflare workers + workers vpc for the bits that need to connect to core services
this happened in a few hours! few big takeaways:
1. this is the year of migration & rewrites. it's happening!
2. if you don't know where to start with cloudflare workers because you still have things running in cloud / on prem, workers vpc is a good place to get started!
3. if you tried workers a long time ago, now is a good time to try it again. it has gotten so much richer with support for node js, higher limits, ability to connect to internal services, new products like queues, workflows, pipelines, etc
That post matters because it points to something broader than "edge-native startup app" momentum. It suggests established teams with existing cloud and internal-service dependencies are using Workers as a serious migration target, even when they are not rewriting everything at once.
The enabling piece here is Workers VPC and related connectivity features: teams can move the user-facing or latency-sensitive layer to Workers while still reaching private or centralized systems that remain elsewhere.[2] This softens the migration cliff. You no longer have to choose between "all on Workers" and "not at all."
3. From Express-style Node servers to Hono on Workers
This is the grassroots migration path: developers with conventional Node HTTP apps looking for a lightweight server framework that maps well onto the Workers model.
That is why this post resonated:
Urgent!!!
Anyone used Hono + Cloudflare workers. Need A green flag for this.
Urgently have to migrate from express to hono serverless.
Hono has become popular in the Workers ecosystem because it gives developers an Express-like or minimalist web framework feel while targeting runtimes like Workers cleanly. The appeal is obvious: keep familiar routing and middleware concepts, but drop the server ownership burden.
Why migrations are easier now than they used to be
The biggest reason migration momentum is real in 2026 is that Workers is no longer the same product many people tried a few years ago.
Cloudflare's docs and platform updates show a much broader, richer runtime and platform story than the early "small JavaScript at the edge" era.[2] The differences that matter most for migration are:
- Improved Node.js compatibility
- Higher limits and broader workload support
- Better framework support
- Integrated storage and data products
- Private connectivity options like Workers VPC
- More surrounding primitives such as Queues and Workflows
Put simply: earlier versions of Workers often asked teams to contort their apps into the platform. Today, the platform bends much more toward real application requirements.
The company's own migration narratives reinforce this. Cloudflare has published examples of organizations reducing complexity and cost by moving workloads onto its platform, including significant cloud cost reductions in some cases.[12] Those stories should always be read critically (vendor case studies are marketing), but they align with the practical logic teams are discussing publicly: fewer moving parts, less egress, and more integrated infrastructure.
The real motivators behind migration
When practitioners describe these moves, four motivators show up repeatedly.
Cost
Workers can be attractive when compared with stacks that accumulate costs across multiple layers:
- frontend hosting
- serverless functions
- object storage
- cache/CDN
- observability vendors
- cross-cloud egress
- managed databases
- Kubernetes operations
Consolidation does not always mean cheaper, but it often means more legible cost structure. And for some teams, especially those building globally distributed apps, that matters as much as the absolute bill.
Simpler operations
This is the strongest motivator. Teams are tired of infrastructure assembly work. If Workers lets them remove CI complexity, container pipelines, reverse proxies, or regional fleet management, that operational simplicity can outweigh some runtime compromises.
Better global delivery
Workers turns global rollout from a special project into the default platform behavior.[2] If your app serves users across geographies, that is not a minor feature. It changes architecture decisions upstream.
Stack consolidation
Cloudflareâs strategy is increasingly persuasive when adopted as a system: compute, storage, cache, static assets, internal service communication, and edge networking under one roof.[6] Even teams that would not pick Workers as a standalone function runtime may pick it as part of an integrated stack.
Why this is more than hobbyist enthusiasm
A good litmus test for hype is whether people are migrating toward the platform from already-working setups. That is what we are seeing. Not universally, and not without tradeoffs, but enough to call it a real trend.
The strongest evidence is not that people say Workers is cool. It is that they are moving off systems that were already viableâVercel, Kubernetes, Express serversâand deciding the rewrite or migration cost is worth paying.
That only happens when the target platform has crossed a credibility threshold.
Workers appears to have crossed it.
Why Workers Is More Than Functions: The Platform Story Teams Are Buying Into
If you still think of Cloudflare Workers as "small edge functions," you are missing the reason many teams are adopting it seriously.
What developers are buying into is not a single compute primitive. It is an integrated platform where compute, data, communication, and connectivity increasingly fit together without forcing users to stitch together five vendors and three network boundaries.
That is why this post landed so well:
I built a self-hosted Sentry clone that runs entirely on Cloudflare Workers, and I think it showcases one of the most underrated features in the Cloudflare ecosystem: Service Bindings. Let me explain why this matters. When you have multiple Cloudflare Workers (an API, a webhook handler, a cron job), they all need common things: error tracking, authentication, rate limiting, metrics. The typical solution? External HTTP calls to third-party services. That means:
- 50-200ms latency per call
- Egress fees
- Your data leaving your infrastructure
- Another vendor to manage
Service bindings let Workers call each other directly inside Cloudflare's network. No HTTP. No internet. Just internal RPC with <5ms latency. With Workers Sentinel, any Worker in my account can just point Sentry-SDK into the Service binding, and have all errors flow into one centralized dashboard, stored in Durable Objects with SQLite. No external calls. No added latency. Service bindings aren't just for error tracking. You can centralize:
- Authentication: one Worker that validates tokens for all your services
- Metrics: centralized collection without external observability costs
- Rate limiting: shared counters that actually work across Workers
- Feature flags: instant propagation, no deployment needed
Think of it as building your own internal microservices mesh, but at the edge, with zero network overhead. Workers Sentinel uses two Durable Objects:
- AuthState (singleton): users, sessions, projects
- ProjectState (per-project): issues, events, stats
Events are fingerprinted and grouped intelligently. The dashboard is a Vue.js app served from the same Worker. I could say I built this to learn Durable Objects or that I needed error tracking for side projects, but honestly I just need a way to show my wife why I'm sending $200/month to some guy named Claudio who apparently helps me write code. The whole thing is open source.
Deploy it to your Cloudflare account, point your Sentry SDKs at it, and you're done. But more importantly: take a closer look at service bindings. They're the glue that turns a collection of Workers into an actual platform. Most Cloudflare customers I talk to aren't using them, and they're missing out. To the Sentry team: I love your work. Genuinely. Sentry is battle-tested, has incredible features, and is what you should use for anything that matters. This project is a toy. A learning exercise. A weekend hack that got slightly out of hand. Please do not trust your production errors to this dummy clone. If your startup goes down at 3 AM because Workers Sentinel missed an edge case, that's on you. I warned you. Use the real thing. But if you want to learn about Durable Objects, service bindings, and how error tracking works under the hood? Clone away. Your Workers shouldn't be islands. Connect them.
The key concept there is Service Bindings.
Service Bindings: internal communication without the usual overhead
Service Bindings allow one Worker to call another through Cloudflare's platform-native mechanism rather than treating every internal interaction like an external HTTP call.[6] For practitioners, that matters for three reasons:
- Latency: internal calls avoid some of the normal overhead of public HTTP networking
- Cost: less egress and less dependence on external vendors
- Architecture: you can decompose an application into platform-native services without creating a mess of public endpoints
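In code, a service binding looks like an ordinary fetch call on an env property. The sketch below assumes a binding named AUTH pointing at a separate auth Worker (the names, internal URL, and response contract are all hypothetical); the stand-in at the bottom mimics the bound Worker so the gateway can run locally:

```javascript
// Gateway Worker that delegates token checks to another Worker over a
// hypothetical service binding named AUTH (declared in Wrangler config).
const gateway = {
  async fetch(request, env) {
    // env.AUTH.fetch() stays inside Cloudflare's network: no public HTTP hop.
    const verdict = await env.AUTH.fetch(
      new Request("https://auth.internal/verify", {
        headers: { authorization: request.headers.get("authorization") ?? "" },
      })
    );
    if (verdict.status !== 200) {
      return new Response("Unauthorized", { status: 401 });
    }
    return new Response("Hello from the gateway");
  },
};

// Local stand-in for the bound auth Worker.
const env = {
  AUTH: {
    fetch: async (req) =>
      new Response(null, {
        status: req.headers.get("authorization") === "Bearer secret" ? 200 : 401,
      }),
  },
};
```

Note that the calling code is identical to any other fetch; the binding only changes where the request travels, which is exactly why decomposing an app this way does not require new public endpoints.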
In conventional cloud setups, internal service communication often drags in a lot of infrastructure:
- service meshes
- private networking
- API gateways
- auth layers
- observability glue
- egress accounting
Workers gives teams a lighter-weight internal composition model. For edge-native applications, that can be a major simplifier.
D1, KV, and R2: the "full-stack on Workers" pitch
Cloudflare's full-stack Workers story depends on data products, not just code execution.[6]
Here is the rough division of labor:
- KV: key-value storage for globally distributed reads, useful for config, sessions, flags, and caches
- R2: object storage, positioned in part around eliminating egress fees in common patterns
- D1: SQLite-based relational database offering for applications that want SQL in the Workers ecosystem
None of these should be treated as universal replacements for every database or storage product. But together they let teams keep far more of the application inside one environment.
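As a rough sketch, wiring all three into a single Worker is mostly a matter of declaring bindings in the project's Wrangler configuration. Every name and ID below is a placeholder:

```toml
# Hypothetical wrangler.toml: one Worker with KV, R2, and D1 bindings.
name = "my-app"
main = "src/index.js"
compatibility_date = "2024-09-01"

[[kv_namespaces]]  # exposed to the Worker as env.SESSIONS
binding = "SESSIONS"
id = "<kv-namespace-id>"

[[r2_buckets]]  # exposed as env.MEDIA
binding = "MEDIA"
bucket_name = "my-app-media"

[[d1_databases]]  # exposed as env.DB
binding = "DB"
database_name = "my-app-db"
database_id = "<d1-database-id>"
```

Wrangler's actual schema has more options (and also supports a JSON-based config), so treat this as the shape of the idea rather than a copy-paste configuration.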
That is what people mean when they say Workers feels production-ready now. Not that every feature is perfect, but that you can plausibly build an application with:
- frontend delivery
- backend APIs
- auth logic
- media or file storage
- lightweight relational data
- globally distributed config/state
- internal service composition
all without immediately exiting the platform.
This post captures the interest from developers thinking exactly in those terms:
Choosing @DrizzleOrm over Prisma enables a few interesting things including:
- Host on Cloudflare Workers
- Put a DB cache at the Edge like @PolyScaleAi in front of the DB
Canât wait to attempt a migration on my hobby project
The Drizzle-versus-Prisma mention is telling. Runtime choice increasingly affects ORM choice, package choice, and architecture choice. Workers is not just a host; it shapes the software stack around it.
Workers VPC: the bridge for real enterprises
A platform feels complete not when greenfield apps love it, but when brownfield teams can adopt it incrementally.
That is where Workers VPC matters. It helps teams connect Workers-based applications to private or core services that still live in cloud VPCs or on-prem-style environments, making phased migration much more realistic.[2] Instead of demanding that every dependency become edge-native on day one, Cloudflare lets Workers operate as a front door and orchestration layer for systems that remain partly centralized.
This is strategically significant. It changes Workers from an all-or-nothing proposition into a migration layer.
For example, a team might:
- move edge rendering and request handling to Workers
- keep a legacy Postgres cluster in a private environment
- route internal API calls through Workers VPC
- progressively replace centralized dependencies with D1, KV, R2, or other platform services later
That lowers migration risk dramatically.
Why the platform story matters more than raw runtime specs
Developers do not adopt compute in isolation. They adopt systems that reduce the number of boundaries they have to manage.
A fast runtime with weak storage and awkward networking is not enough. A globally distributed handler with no good internal composition model is not enough. A nice edge story that collapses the moment you need private connectivity is not enough.
Workers is increasingly compelling because the surrounding platform fills those gaps.
And that, more than benchmark arguments alone, is why the conversation has shifted. The market is no longer asking, "Can Cloudflare run code?" It is asking, "Can Cloudflare replace enough of my stack to make my life simpler?"
For a growing number of teams, the answer is yes.
Performance Reality Check: Where Workers Shines, Where It Has Been Criticized, and What Changed
No topic in the Workers conversation generates more heat than performance. And to understand the platform honestly, you have to hold two ideas in your head at once:
- Workers can deliver excellent real-world performance for many edge-oriented workloads.
- Workers has also faced legitimate criticism around CPU performance, consistency, and runtime behavior in some scenarios.
Those statements are not contradictory.
The criticism was real
The sharpest public critique in the conversation came from Vercel CEO Guillermo Rauch:
Vercel Fluid vs Cloudflare Workers.
"From my findings, Vercel is 1.2x to 5x faster than CloudFlare for server rendering."
We gave a very, very earnest try to Workers when we explored the edge runtime / world. There's no "beef", we had to migrate off for technical reasons.
To be fair to them, they brought new ideas to the market. The CPU-based pricing for instance was good and Vercel Fluid has it as well.
The main issues we ran into:
1. Bad CPU performance, low capacity, very irregular and spiky latency. The benchmarks show you this.
2. Single-vendor runtime. You can't run "Node.js 24". You run "whatever they give you, which is trying to look like Node.js but it's not"
3. Really bad connectivity to the clouds. We measured the roundtrip time between AWS and CF Workers as being in the low 10s to 100s of milliseconds
The result of us migrating off was shipping Fluid. You pay for CPU, it handles concurrency like a server (cost-efficient), you control the size / memory of the functions, you get full, open runtimes like @nodejs and Python, you get 1ms latency to cloud services…
Most people today are using Fluid and they don't even notice, because it just works® with the entire ecosystem.
Here are the benchmarks @theo ran:
You do not have to agree with every implication there to see why the post resonated. It names the three most common concerns advanced teams raise when evaluating Workers against Node-based or container-based alternatives:
- CPU performance and capacity
- runtime compatibility
- connectivity to centralized cloud services
Those are not superficial objections. They go directly to whether Workers can replace an existing production environment or only complement it.
Historically, some teams did experience Workers as irregular under CPU pressure or less predictable for heavy server rendering. Others found the runtime close to Node, but not close enough for their dependency graph or operational assumptions. And for apps still deeply tied to AWS-hosted services or regional databases, edge execution could expose network path penalties rather than eliminate them.
But Cloudflare's fixes also appear real
What changed, and why the current discussion feels different, is that Cloudflare publicly investigated benchmark gaps and published a detailed explanation of what it found and fixed.[1] The company did not just wave away complaints; it identified concrete causes, including request scheduling behavior, outdated V8 garbage collector settings, excess buffer copying, stream configuration problems, and even an upstream V8 optimization opportunity.[1]
That technical specificity matters because it suggests the gap was not reducible to "isolates are fundamentally slower." Some of it came from platform implementation details, queueing behavior, benchmark configuration mismatches, and runtime tuning decisions that could actually be corrected.
The practitioner summary making the rounds captured that pretty well:
saw lots of people mentioning that their workers suddenly got faster after @theo's video, looks like this blog explains it. how cloudflare closed a 3.5x performance gap, tl;dr:
- fixed request scheduling during cpu bursts (biggest impact - wasn't even cpu speed, just bad queueing)
- updated v8 garbage collector from 2017 settings (+25% boost - 8 year old config still in prod is kinda crazy)
- removed unnecessary buffer copies in opennext (50 x 2kb buffers per request - classic death by a thousand cuts lmao)
- switched to byte streams with proper highwatermark (4096 - interesting to know that defaults matter way more what i used to think)
- patched v8 json.parse with reviver (+33%, upstreamed to chrome 143 - fixing v8 itself is pretty insane ngl)
- fixed missing node_env=production in react ssr benchmark (dev mode in prod benchmarks... oops)
- enabled force-dynamic in next.js config for proper streaming (config mismatches pretty much suck to debug)
- fixed node.js slow trig functions (3x faster, benefits everyone - they literally fixed their competitor's platform lmfao)
now performs on par with @vercel on pretty much all benchmarks except next.js (gap significantly closed, work ongoing!!!!!)
View on X →
Cloudflare itself also amplified the benchmark investigation and fixes:
Cloudflare investigated CPU performance benchmark results for Workers, uncovering and fixing issues, making Cloudflare Workers faster for all customers.
https://blog.cloudflare.com/unpacking-cloudflare-workers-cpu-performance-benchmarks/
If you strip away social media dunking, the takeaway is nuanced but important:
- Some past performance criticisms were valid.
- A meaningful share of the gap came from fixable platform issues rather than unavoidable architectural limits.
- Cloudflare seems to have made real progress closing those gaps.
That does not mean all concerns are gone. It means old assumptions deserve re-testing.
Edge latency and CPU throughput are different questions
A common mistake in platform evaluation is blending two separate performance dimensions into one.
1. Network proximity and user-perceived latency
Workers is very strong here. If logic executes near where requests enter the network, users can get faster first-byte times and lower request overhead for edge-suitable operations.[2] This matters for auth, redirects, personalization, rendering at the edge, and streaming.
2. Raw compute performance and sustained heavy workloads
This is where the picture gets more conditional. Workers can do meaningful computation, but isolate-based edge runtimes are not automatically the best answer for CPU-heavy jobs, long-lived processes, or workloads that need extensive memory and system control. The architecture is optimized for fast, dense, globally distributed executionânot for every class of server task.
That distinction is the key to sane evaluation. A platform can be the fastest way to serve globally distributed user requests and still be the wrong place to do your most compute-intensive backend processing.
Why queueing and scheduling matter more than people think
One of the most interesting lessons from Cloudflare's benchmark post is that perceived CPU slowness was partly a scheduling problem.[1] In distributed systems, users often experience "performance" as a combination of:
- waiting time before execution
- fairness under concurrency
- garbage collection pauses
- buffer handling
- framework integration overhead
In other words, a platform can look slow because requests are waiting badly, not because the processor itself is weak.
That is important for practitioners because it changes what to measure. If you are comparing Workers with a regional Node server or a container platform, you should not only ask:
- How fast is the raw code path?
You should also ask:
- What happens under bursty concurrency?
- How stable is p95 and p99 latency?
- How much overhead is coming from framework adapters?
- Are responses streaming correctly?
- Is remote data access dominating the budget anyway?
These are operational questions, not just benchmark questions.
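One of those operational questions, tail latency, is easy to quantify once you have per-request timings. A minimal percentile helper (nearest-rank method, an illustrative sketch) might look like:

```typescript
// Compute a latency percentile from per-request timings (milliseconds)
// using the nearest-rank method -- good enough for eyeballing tails.
export function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest rank: the smallest value with at least p% of samples at or below it.
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(rank, sorted.length) - 1];
}
```

Comparing p50 against p95/p99 under burst load is what surfaces queueing problems that averages hide.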
Streaming is one of Workers' clearest performance wins
Where Workers tends to shine most clearly is in workloads where responsiveness matters more than monolithic completion time, especially streaming and incremental delivery.
If your app can start sending useful bytes early, edge placement and proper streaming support can produce a much better user experience even when total server work is nontrivial. This is especially relevant for:
- AI responses
- server-side rendering with chunked output
- event streams
- progressive API responses
- webhook acknowledgment patterns
That is why framework and platform support for byte streams, buffering behavior, and watermarks became part of the recent performance conversation.[1] It is not implementation trivia. It directly affects perceived speed.
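As an illustration of why those details matter, here is a minimal byte stream wrapped in a `Response` with an explicit `highWaterMark`, so useful bytes go out as they are produced instead of being buffered until the whole body exists. The stream APIs are standard Web Streams; the chunk contents and the 4096 value are illustrative.

```typescript
// Sketch of early-byte streaming: enqueue each chunk as soon as it is
// ready on a byte-oriented ReadableStream with an explicit highWaterMark,
// then hand the stream to a Response so delivery starts immediately.
export function streamChunks(chunks: string[]): Response {
  const encoder = new TextEncoder();
  const body = new ReadableStream(
    {
      type: "bytes",
      start(controller) {
        for (const chunk of chunks) {
          controller.enqueue(encoder.encode(chunk)); // flush each piece early
        }
        controller.close();
      },
    },
    { highWaterMark: 4096 } // how many bytes may queue before backpressure
  );
  return new Response(body, {
    headers: { "content-type": "text/plain; charset=utf-8" },
  });
}
```

In a real handler the chunks would arrive incrementally (from an AI model or a renderer) rather than from an in-memory array, but the streaming mechanics are the same.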
The cloud-connectivity objection is still real
Rauch's third point, connectivity to the clouds, remains one of the more serious decision factors. If your Worker sits at the edge but must call an AWS service in a region on every request, the network path can dominate your latency budget. In some architectures, this largely cancels out the edge advantage.
This is why the best Workers architectures usually do one of three things:
- keep enough logic and data on-platform to avoid centralized round trips
- use private connectivity features to reduce integration friction
- reserve Workers for request handling, auth, caching, or orchestration while leaving centralized heavy lifting elsewhere
If you ignore this, you can end up with an edge-shaped architecture that still behaves like a cross-cloud hairpin.
What experts should benchmark now
If your team is evaluating Workers seriously in 2026, benchmark current reality, not forum memories from 2022 or 2023.
Specifically test:
- cold and warm request latency
- p95/p99 under burst traffic
- streaming start time
- SSR behavior for your actual framework
- CPU-heavy routes
- memory-sensitive routes
- connectivity to your real databases and internal services
- cost under realistic concurrency
Do not rely on toy hello-world tests, and do not rely on vendor-neutrality rhetoric from any side. Workers, Vercel Fluid, Node servers, Fly.io, and container platforms each optimize for different things.
The honest verdict on performance
Here is the clearest way to frame it:
- Workers is strongest when edge locality, fast startup, and streaming matter.
- Workers is less convincing when raw CPU throughput, perfect Node parity, or deep centralized cloud integration dominate.
- Cloudflare has materially improved platform performance and deserves credit for publicly fixing specific issues.
- Past criticism should not be dismissed, but neither should it be frozen into permanent truth.
That is the real state of play. The performance debate is no longer "Workers is amazing" versus "Workers is slow." The serious question is whether Workers' performance profile matches your workload profile.
For a growing set of applications, it does.
Best-Fit Workloads: Where Cloudflare Workers Delivers the Most Value
The easiest way to understand Workers is to stop asking whether it can do everything and start asking where it creates outsized leverage.
The platform's sweet spot is not "all backend computing." It is workloads that benefit from three properties simultaneously:
- global distribution
- fast request handling and startup
- reduced operational overhead
When those line up, Workers is unusually compelling.
Streaming responses and incremental delivery
One of the strongest fits is streaming.
Workers supports streaming responses directly, which matters for modern applications that do not want to hold the entire response in memory or wait for all work to finish before sending anything to the client.[2] This is especially important for AI interfaces, server rendering, and real-time-feeling applications.
The X conversation captured the operational significance well:
The catch: infrastructure needs to support chunked responses.
Most Node servers handle this fine.
But some proxies, CDNs, and serverless platforms buffer everything anyway.
AWS Lambda needs response streaming mode enabled.
Cloudflare Workers and Vercel Edge support it out of the box.
That "out of the box" support is not a small detail. Streaming is one of those capabilities that sounds standard until you encounter a platform or proxy that buffers everything and destroys the user experience. Workers' edge location plus streaming support makes it a natural fit for applications where time-to-first-byte matters more than just total completion time.
Edge rendering and latency-sensitive web apps
Workers is also a strong choice for edge rendering: serving or rendering content closer to users rather than centralizing all web response generation in one region.
That is why this blunt recommendation resonates:
They should move to cloudflare workers for edge rendering
View on X →
Not every app needs edge rendering. But when users are globally distributed and page generation, personalization, auth, or route logic sits in the request path, moving that work closer to the user can improve responsiveness meaningfully.
APIs, middleware, auth, rate limiting, and webhooks
Workers is arguably at its most intuitive when used for request-path logic:
- API gateways
- BFFs (backend-for-frontend layers)
- auth validation
- token handling
- bot checks
- rate limiting
- webhook ingestion
- request transformation and routing
These patterns fit the model well because they are:
- request-driven
- often stateless or lightly stateful
- sensitive to latency
- operationally annoying to run on full server stacks
This is where the platform can replace a lot of "small but annoying" infrastructure with a simpler deployment model.
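As a sketch of one such request-path pattern, here is a fixed-window rate limiter. The class and its thresholds are illustrative, not a Cloudflare API; on Workers, in-memory state like this is per-isolate, so a production version would typically back the counters with Durable Objects or KV, with the same logic.

```typescript
// Illustrative fixed-window rate limiter for request-path logic.
// Note: in-memory Maps are per-isolate on Workers; production setups
// usually move this state into Durable Objects or KV.
export class FixedWindowLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request identified by `key` (e.g. an IP or
  // API token) is still within this window's budget.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.counts.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // First request in a fresh window: reset the counter.
      this.counts.set(key, { windowStart: now, count: 1 });
      return true;
    }
    if (entry.count < this.limit) {
      entry.count += 1;
      return true;
    }
    return false; // over budget for this window
  }
}
```

A Worker would call `allow()` at the top of its fetch handler and return a 429 response when it comes back false.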
Lightweight integrations and automation
Another theme emerging in the conversation is that Workers lowers the barrier to building custom integrations quickly, even for users who are not traditional backend engineers.
A year ago this non-technical user might of churned unless we built the integration they were after. Now they are slinging together Cloudflare Workers, coding, and bringing things live. Much to think about on this.
View on X →
That post points to something larger than no-code enthusiasm. Workers sits at an interesting intersection: simple enough to deploy quickly, powerful enough to connect services, and globally available by default. That makes it attractive for:
- partner integrations
- glue code between SaaS systems
- event-driven automation
- internal APIs
- custom webhook processors
- one-off customer-specific workflows
In older infrastructure models, these jobs often end up overbuilt because the platform choice itself drags in CI, hosting, observability, and deployment complexity. Workers changes the economics of small software.
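A typical piece of such glue code might look like the sketch below: accept a webhook POST, reshape the payload, and acknowledge quickly. The payload fields (`event`, `repo`, `author`) are hypothetical, not any real provider's schema.

```typescript
// Hedged glue-code sketch: accept a webhook POST, reshape the payload,
// and acknowledge fast. Field names are hypothetical; forwarding the
// result to another system would follow the quick acknowledgment
// (on Workers, typically via ctx.waitUntil).
interface IncomingWebhook {
  event: string;
  repo: string;
  author: string;
}

export function toNotification(payload: IncomingWebhook): { text: string } {
  return { text: `[${payload.repo}] ${payload.event} by ${payload.author}` };
}

export async function handleWebhook(request: Request): Promise<Response> {
  if (request.method !== "POST") {
    return new Response("method not allowed", { status: 405 });
  }
  const payload = (await request.json()) as IncomingWebhook;
  return new Response(JSON.stringify(toNotification(payload)), {
    headers: { "content-type": "application/json" },
  });
}
```

Because the handler is just a function of `Request` to `Response`, the same code deploys with `wrangler deploy` or runs locally in tests without any container or CI setup.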
Entire products, not just request rewrites
The conversation also makes clear that people are building whole products on Workers, not just CDN handlers or experimental edge scripts.
That includes:
- SaaS apps
- internal dashboards
- monitoring tools
- customer-facing APIs
- full-stack web applications
Cloudflare's own full-stack positioning reflects this shift.[6] When paired with D1, KV, R2, Durable Objects, and Service Bindings, Workers can support architectures that would previously have required a more traditional backend stack.
A practical shortlist of best-fit use cases
If you want the simple checklist, Workers is often a very good fit for:
- Globally distributed web apps
- Streaming AI or chat interfaces
- Edge-rendered content
- Auth and session middleware
- Webhook ingestion and transformation
- Rate limiting and abuse prevention
- API composition and BFF layers
- Internal tools that need easy deployment
- SaaS integrations and automation
- Products that benefit from collapsing frontend, backend, and storage into one platform
The throughline is simple: Workers delivers the most value when network position and operational simplicity matter as much as raw compute.
Trade-Offs, Limits, and When Workers Is the Wrong Tool
The strongest case for Workers is also the source of its main limitations.
Because Workers is an isolate-based, platform-constrained runtime, it cannot be everything a full Node environment or container platform is. And Cloudflare's own roadmap now makes that explicit.
Compatibility is good, just not universal
Workers has gotten much better at supporting Node-style applications and packages, but it is still not identical to a normal server or unrestricted Node runtime.[2][9] If your application depends on:
- native binaries
- unusual process behavior
- broad filesystem assumptions
- low-level OS access
- packages tightly coupled to Node internals
…you may hit friction.
This does not mean Workers is immature. It means the platform has a specific execution model and security boundary. The more your app expects traditional server semantics, the less elegant the fit becomes.
Some workloads simply want containers
Longer-running jobs, high-memory processes, browser automation at scale, specialized runtimes, and compute-heavy tasks can outgrow the Worker model. Cloudflare's introduction of Containers is revealing here: not as a contradiction, but as an admission that isolates have boundaries.
Big day for @Cloudflare as we launch our newest compute primitive, Containers! A bit of history: In 2020, we acquired S2 Remote Browser tech and faced the challenge of migrating it from AWS to our edge. To run a Chromium-based browser securely, we split our team: half focused on the Remote Browser app, half built a robust, independent container platform to support it. We knew some workloads needed to be close to users but didn't fit our Workers isolate model. This platform became a game-changer, empowering dozens of internal teams to build features like Workers CI/CD, Browser Rendering, Key Transparency, Workers AI, and more. But we kept asking: Is this the right primitive for our users? Workers remains the go-to for globally distributed, effortlessly scalable compute at a great price. Initially, many use cases we heard were for single-node webservers that didn't need region earth. Then we got excited as users started asking for latency-sensitive, real-time applications and the ability to run agents close to the users they serve. Cloudflare Containers are here to deliver for those high-performance, user-proximal workloads. Excited to see what you build with it!
View on X →
That post is unusually candid. It says, in effect: Workers remains the default for globally distributed scalable compute, but some user-proximal workloads do not fit the isolate model. That is exactly right.
Containers exist because some applications need:
- more runtime control
- longer process lifetimes
- stronger environment parity with standard server software
- support for workloads that are awkward in isolate constraints
When not to use Workers
Workers is usually the wrong primary runtime if your core workload is:
- CPU-heavy and sustained, not just bursty
- memory-intensive
- dependent on full Node or other unrestricted runtimes
- tied deeply to centralized cloud services on every request
- better modeled as a long-lived server process
- dependent on custom binaries or system packages
It can still play a role in front of these systemsâfor auth, caching, request routing, or edge mediationâbut it may not be the right place to run the workload itself.
The practical takeaway
Do not treat Workers as a religion. Treat it as a very strong platform with a distinct shape.
Use it where its shape matches the problem:
- edge-first request handling
- globally distributed APIs
- streaming
- integrated platform development
- fast deployment loops
Do not force it onto workloads that are really asking for container semantics.
Cloudflare seems to understand this better now than many enthusiasts do, which is a healthy sign for the platform.
Where the Platform Is Heading and How to Decide If Your Team Should Switch
The roadmap signals are pretty clear: Workers is becoming Cloudflare's center of gravity.
That is visible both in product development and in how practitioners talk about the platform. The ecosystem is consolidating around Workers as the default application model, not one option among many.
Although to be fair, Cloudflare made it clear they're moving everything over to Workers. Pages is EOL.
View on X →
Whether "Pages is EOL" is interpreted narrowly or broadly, the direction is hard to miss: Cloudflare wants developers building on Workers primitives.
What that means strategically
It means past objections may have shorter shelf lives than they used to. The platform is evolving fast, and Cloudflare is clearly investing in:
- runtime improvements
- integrated full-stack workflows
- private connectivity
- better framework support
- alternative compute options like Containers for non-isolate workloads
That combination makes Workers more adoptable, not less. It no longer asks teams to bet on a narrow edge-function niche. It asks them to consider Cloudflare as an application platform.
Who should switch now?
A simple decision matrix helps.
Strong candidate to switch now
- startups building greenfield web apps
- teams shipping globally distributed APIs
- products that benefit from streaming or edge rendering
- internal tools currently overpaying the Kubernetes complexity tax
- teams that want to collapse frontend, backend, and storage into one platform
Pilot first
- existing Node backends with moderate dependency complexity
- Next.js teams with custom SSR patterns
- companies with centralized databases but interest in edge request handling
- organizations wanting incremental migration through Workers VPC
Probably stay hybrid or use containers
- compute-heavy backends
- workloads dependent on native binaries
- apps needing broad OS/runtime control
- organizations deeply tied to regional cloud services with strict latency budgets to those services
How to evaluate safely
Do not start with your hardest workload. Start with a slice that reveals the platformâs strengths and your integration risks:
- a webhook service
- an auth or API gateway layer
- a streaming endpoint
- an internal app with modest backend complexity
- a globally distributed read-heavy API
Measure:
- deployment speed
- operational burden
- p95 latency
- dependency compatibility
- developer experience
- cost under real traffic
The teams getting the most from Workers are not necessarily the ones rewriting everything immediately. They are the ones choosing workloads where Workers changes the economics of shipping.
And that is the best way to think about the platform in 2026. It is not a universal replacement for all compute, but one of the clearest answers to a modern engineering problem: how to ship globally distributed software without inheriting infrastructure drag as your real product.
Sources
[1] How Workers works - Cloudflare Developers
[2] Overview · Cloudflare Workers docs
[4] The Ultimate Guide to Cloudflare Workers | by Caleb Rocca
[5] cloudflare/workers-sdk: Home to Wrangler, the CLI for Cloudflare Workers - GitHub
[6] Your frontend, backend, and database - now in one Cloudflare Worker - Cloudflare Blog
[7] Security model - Workers - Cloudflare Docs
[8] Safe in the sandbox: security hardening for Cloudflare Workers - Cloudflare Blog
[9] Fine-Grained Sandboxing with V8 Isolates - InfoQ
[10] How Cloudflare Workers Leverage V8 Isolates for Efficient Serverless Computing
[11] Cloud Computing Beyond Containers: How Cloudflare's Isolates Are Changing the Game
[12] 80% lower cloud costs: How Baselime moved from AWS to Cloudflare - Cloudflare Blog
[14] My Cloudflare Workers Migration: The Good, the Bad, and the Confusing
[15] The Migration of Legacy Applications to Workers - Cloudflare Blog
Further Reading
- [The Complete Developer's Guide to Cloudflare Workers in 2025: Features, Patterns, Limits, and Real-World Use Cases](/buyers-guide/the-complete-developers-guide-to-cloudflare-workers-in-2025-features-patterns-limits-and-real-world-) - An in-depth guide to Cloudflare Workers for developers in 2025: features, patterns, limits, and real-world use cases.
- [What Is OpenClaw? A Complete Guide for 2026](/buyers-guide/what-is-openclaw-a-complete-guide-for-2026) - OpenClaw setup with Docker made safer for beginners: secure installation, secrets handling, network isolation, and daily-use guardrails.
- [PlanetScale vs Webflow: Which Is Best for SEO and Content Strategy in 2026?](/buyers-guide/planetscale-vs-webflow-which-is-best-for-seo-and-content-strategy-in-2026) - PlanetScale vs Webflow for SEO and content strategy: performance, CMS workflows, AI search readiness, pricing, and best-fit use cases.
- [Adobe Express vs Ahrefs: Which Is Best for Customer Support Automation in 2026?](/buyers-guide/adobe-express-vs-ahrefs-which-is-best-for-customer-support-automation-in-2026) - Adobe Express vs Ahrefs for customer support automation: fit, integrations, pricing, and limits.
- [Cohere vs Anthropic vs Together AI: Which Is Best for SEO and Content Strategy in 2026?](/buyers-guide/cohere-vs-anthropic-vs-together-ai-which-is-best-for-seo-and-content-strategy-in-2026) - Cohere vs Anthropic vs Together AI for SEO and content strategy: workflows, pricing, scale, and team fit.