
What Is Cloudflare Workers? A Complete Guide for 2026

Cloudflare Workers explained: how it runs at the edge, why teams are migrating, and where it fits best in modern stacks.

👤 Ian Sherk 📅 March 13, 2026 ⏱️ 42 min read

Why Teams Are Switching: Infrastructure Friction Is the New Bottleneck

For a long time, developer infrastructure was sold primarily on technical superiority: lower latency, better scaling, more control, more flexibility. Those things still matter. But if you read what practitioners are actually saying now, the emotional center of the conversation has shifted.

The complaint is no longer, “my code is hard to write.” It is, “everything around the code is slowing me down.”

That distinction matters because it explains a lot of Cloudflare Workers’ current momentum. Workers is not winning attention only because it runs code close to users. It is winning because it promises to remove a whole category of operational drag: container setup, CI pipeline wrangling, custom deployment glue, region management, TLS configuration, fleet scaling, and the endless paper cuts that make simple changes feel expensive.

Jannik Jung @JannikJung Tue, 10 Mar 2026 11:01:07 GMT

In the age of AI and coding agents, your tech stack barely matters anymore. What matters is that it gets out of your way. I build production pilots for clients, sometimes 2-3 running in parallel. The bottleneck is never writing code anymore. It's everything around it. Waiting for builds. Debugging CI. Fighting deployment configs at midnight. I moved everything to Cloudflare Workers and it genuinely changed how I work. wrangler deploy and it's live. Globally. In seconds. No Docker, no GitHub Actions, no infra to think about. When you're shipping updates 10+ times a day across multiple projects, that friction compounds. Removing it is the real productivity hack, not the AI.

View on X →

This is a familiar feeling for teams shipping modern apps in 2026. AI-assisted coding has accelerated the production of code itself. Small teams can generate prototypes, integrations, and internal tools far faster than they could even two years ago. The new bottleneck is delivery: getting that code packaged, deployed, reachable, secure, and globally available without inventing more infrastructure than the product deserves.

Cloudflare Workers is designed around that pain. The platform lets you write JavaScript or TypeScript and deploy it to Cloudflare’s global network using Wrangler, Cloudflare’s CLI and development toolkit.[5] In practice, that means a developer can move from local code to globally distributed execution with a much shorter path than with traditional cloud stacks, where Docker images, CI runners, registries, ingress layers, and environment orchestration often sit between “it works” and “users can hit it.”

Cloudflare’s own framing of Workers is broader than “edge functions.” The docs position Workers as a platform for building applications on Cloudflare’s network, not merely a place to attach a small function to a CDN.[2] That language is important because it reflects the platform’s real appeal: operational simplification. It is not just an optimization layer for an existing architecture; for many teams, it is becoming the architecture.


The key phrase there is “wrangler deploy and it’s live. Globally. In seconds.” That sounds like marketing copy until you compare it with what many teams are still doing elsewhere: building and pushing container images, babysitting CI pipelines, maintaining deployment scripts and environment configs, wiring up TLS and DNS, and provisioning capacity region by region.

None of those tasks are unusual. All of them consume time. And almost none of them are the product.

Workers compresses that loop. Wrangler gives developers a standardized interface for local development, configuration, secrets, and deployment.[5] Cloudflare handles the global network, TLS termination, request routing, and scaling fabric behind the scenes.[2] For teams that do not want their differentiation to live in infrastructure, that is the whole point.
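Wrangler's configuration file is where that standardized interface lives. A minimal sketch, with field names following Wrangler's conventions but illustrative values:

```toml
# Minimal Wrangler configuration sketch. Field names follow Wrangler's
# conventions; the name, entry point, and variable values are illustrative.
name = "my-worker"                 # the Worker's name on Cloudflare
main = "src/index.js"              # entry point bundled on deploy
compatibility_date = "2026-01-01"  # pins runtime behavior to a known date

[vars]
# Plain-text configuration exposed on the Worker's env object.
# Secrets stay out of the file and are set with `wrangler secret put`.
API_BASE = "https://api.example.com"
```

One file covers what a traditional stack spreads across Dockerfiles, CI variables, and deployment manifests.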

This is especially compelling for certain categories of builders: solo developers and small agencies running several client projects in parallel, teams shipping many updates a day, and anyone whose AI-accelerated prototyping now outpaces their delivery pipeline.

Cloudflare Workers also benefits from timing. When the product launched in 2018, it was a novel edge compute model with a lot of curiosity but also lots of skepticism.[3] Today, the market context is different. Teams have spent years living through Kubernetes sprawl, overbuilt CI, and framework-specific hosting tradeoffs. The threshold for trying a simpler platform is much lower when the old complexity has already worn people down.

Another post in the conversation captures why the operational simplification story now lands even more strongly than the pure speed story:

Jaynit Makwana @JaynitMakwana 2026-03-12T13:40:26Z

what sold me on SkillBoss wasn't the speed. it was the architecture.

everything deploys to Cloudflare Workers. edge-first. no cold starts. automatic SSL. global CDN. D1 for data. R2 for storage. KV for sessions.

this isn't a toy demo tool.

it's production-grade infra that happens to be controlled by natural language. that distinction matters.

View on X →

That is the right lens. Workers is increasingly being evaluated as production-grade infrastructure with fewer moving parts, not as a clever edge hack.

There is also a subtle but important psychological effect here: when deployment becomes cheap, teams change behavior. They ship smaller changes. They experiment more. They are less afraid of infrastructure-heavy rewrites. The productivity gain is not only fewer minutes spent waiting; it is a faster organizational feedback loop.

So why are teams switching? Because in many organizations, the cost of operating software has overtaken the cost of writing it. Cloudflare Workers offers a credible answer to that problem. Its value proposition is no longer “run a tiny function at the edge.” It is “ship real software without dragging an entire operations stack behind every deploy.”

That is a much bigger market.

How Cloudflare Workers Actually Works Under the Hood

To understand why Workers feels different, you need a concrete mental model of what it actually is.

At the simplest level, Cloudflare Workers runs your code on Cloudflare’s global network, near where incoming requests arrive.[1] Instead of sending every request back to a centralized origin server in one region, Cloudflare can execute logic at or near the network edge. That logic can transform requests, generate responses, render pages, call databases, stream data, authenticate users, or orchestrate other services.

But that description still hides the most important technical distinction: Workers does not primarily rely on spinning up a traditional virtual machine or a full container per request. It relies on V8 isolates.

The isolate model in plain English

A V8 isolate is a lightweight, sandboxed execution environment inside the V8 JavaScript engine—the same engine that powers Chrome and Node.js.[9] Multiple isolates can run on the same machine while remaining strongly separated from one another. Cloudflare has long argued that this model offers a very different efficiency profile from VM- or container-based serverless systems: lower overhead, higher density, and faster startup behavior.[1][7][8]

If you are used to cloud infrastructure, here is the rough comparison: a virtual machine emulates an entire computer, a container packages an operating-system userland around each workload, and an isolate is just a sandboxed context inside one already-running V8 process. Each step down sheds startup weight and per-tenant memory overhead.

That does not mean isolates are “just threads” or “not really isolated.” Cloudflare documents a specific security model around Workers and has published substantial material about sandboxing and hardening to make isolate-based multi-tenancy viable at scale.[1][7]

What happens when a request hits a Worker

Cloudflare’s official explanation is straightforward: requests arrive at Cloudflare’s edge, and your Worker can be invoked as part of request processing on the network.[1] Instead of bouncing requests through a user-managed origin before logic executes, Cloudflare can evaluate code directly in the path of the request.

The practical flow looks like this:

  1. A user makes an HTTP request.
  2. The request reaches Cloudflare’s network.
  3. Cloudflare determines whether a Worker should handle the request.
  4. The Worker code executes in an isolate.
  5. The Worker may return a response directly, fetch from an origin or another service, read or write platform storage, or call another Worker.
  6. The response goes back out through Cloudflare’s edge network to the user.

For developers, the magic is that this feels close to writing a normal web handler. For operators, the magic is that they are not provisioning a machine fleet in every region to make it happen.
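That flow can be made concrete in a few lines. A minimal sketch of a Worker-style handler follows; the route and payload are illustrative, and in a deployed Worker this object would be the module's default export:

```javascript
// Minimal Worker-style fetch handler. In a deployed Worker this object
// is the module's default export, and each invocation runs in a V8
// isolate at the edge.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      // Generate the response directly at the edge, no origin round trip.
      return new Response(JSON.stringify({ message: "hello from the edge" }), {
        headers: { "content-type": "application/json" },
      });
    }
    // Anything else falls through to a plain 404.
    return new Response("Not found", { status: 404 });
  },
};
```

Because the handler speaks Web-standard Request and Response, the same shape runs in any runtime that implements those APIs, which is part of why local development with Wrangler feels close to production.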

Why people say “no cold starts”

You will constantly hear Workers described as having “no cold starts.” Taken literally, that is too simplistic. All platforms have some notion of startup, placement, or initialization. The more accurate claim is that Workers’ isolate model can make startup overhead much smaller and less visible than in systems that must provision heavier execution environments.

Cloudflare explicitly describes Workers as leveraging isolates that can be created quickly and efficiently, helping reduce the latency penalties associated with cold starts.[1][9] And the platform has evolved here. One of the more interesting details in the X conversation came from Cloudflare co-founder Kenton Varda:

Kenton Varda @KentonVarda 2025-09-26T14:59:15Z

Believe it or not, until recently, Cloudflare Workers would just run your Worker on whatever random machine the HTTP request landed on. Isolates are so cheap, this worked fine. But we now do a little bit of routing within the LAN to make cold starts less frequent.

View on X →

That single post captures a lot. Early on, Workers could often run on whichever machine first received the request, because isolates were cheap enough that broad placement still worked well. More recently, Cloudflare added internal LAN routing to make cold starts less frequent. In other words, the platform is not static; Cloudflare is actively tuning the scheduler and placement behavior to improve real-world startup patterns.

That matters because “cold start” is not one thing. It is the emergent result of several factors: how heavy the execution environment is, how much code must be loaded and initialized, where the platform chooses to place that environment, and how often traffic lands somewhere the code is not already warm.

Workers benefits from the lightweight nature of isolates, but it also benefits from platform-level scheduling improvements.

Why isolates can be faster to start

Traditional serverless startup often involves substantial bootstrapping: starting a process, loading a runtime, initializing framework code, mounting a filesystem or image, and possibly restoring networking state. Isolates shrink this overhead because they live inside an already running engine process.[8]

That gives Cloudflare a few advantages: startup measured in milliseconds rather than seconds, far more tenants per machine, and a per-isolate footprint small enough to keep code warm in many locations at once.

This density is one reason edge deployment is economically plausible. Running code in many locations around the world is much easier when each execution unit is tiny.

Security: how can shared infrastructure be safe?

This is the obvious objection to isolate-based multi-tenancy. If many customers’ code runs on the same machine, why is that okay?

Cloudflare’s answer is that Workers is built around sandboxing and strict process-level security boundaries enforced in and around the V8 engine, with additional hardening layers.[7] The company’s security model documentation explains how Workers isolates are constrained and how access to system resources is mediated rather than exposed directly like on a normal server.[1][7]

This leads to one of the platform’s defining tradeoffs: you get less raw system access in exchange for a simpler, safer, and more globally scalable execution model. That is why Workers can feel wonderfully frictionless for web workloads and occasionally frustrating for workloads that expect full OS semantics.

Node compatibility: similar enough, but not identical

A major source of confusion is that Workers runs JavaScript and increasingly supports Node.js APIs, but it is not “you ssh into a server and run Node 24 exactly as-is.” That distinction shows up repeatedly in migration debates.

Cloudflare has expanded Node compatibility substantially over time, which is one reason migrations are accelerating.[2] But the runtime is still Workers’ runtime. It is Web-standards-oriented, isolate-based, and constrained in ways that differ from traditional long-lived Node processes. For many web apps, that difference is now small enough to be manageable. For some packages and runtime assumptions, it still matters a lot.
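Node compatibility on Workers is opt-in via a compatibility flag, so the same small utility can run unchanged in Node and in a Worker. A sketch, assuming the documented `nodejs_compat` flag is enabled in the Wrangler config:

```javascript
// With Workers' Node compatibility enabled (compatibility_flags =
// ["nodejs_compat"] in the Wrangler config), node: built-ins such as
// node:crypto resolve inside the isolate as well as in Node itself.
import { createHash } from "node:crypto";

function fingerprint(payload) {
  // Stable SHA-256 hex digest, e.g. for cache keys or deduplication.
  return createHash("sha256").update(payload).digest("hex");
}
```

Code like this migrates cleanly; packages that assume a filesystem, raw sockets, or long-lived process state are where the remaining friction lives.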

Why the edge part matters

If you strip away the buzzword, “edge” here means executing user-facing logic geographically closer to where requests enter the network. That can improve latency, especially for tasks like authenticating users, redirecting and rewriting requests, serving cached or static content, personalizing responses, and rendering pages.

But edge computing is not free magic. If your Worker immediately has to call a database sitting in one region or an internal API deep in a centralized VPC, then part of the latency advantage can disappear. That is why the platform story around storage and private connectivity matters so much—which we will get to later.

The right mental model is this: Workers gives you a globally distributed control plane for application logic. It is strongest when the logic itself benefits from being close to users and when your dependencies are designed to complement that placement.

So what is Cloudflare Workers, really?

The best concise definition for practitioners is:

Cloudflare Workers is an isolate-based, globally distributed application runtime that lets you run web logic on Cloudflare’s network without managing servers or containers.

That one sentence explains why it feels different: isolate-based means fast startup and high density; globally distributed means worldwide rollout is the default rather than a project; application runtime means more than tiny attached functions; and no servers or containers means most of the operational drag described earlier simply disappears.

Once you understand that, much of the current migration wave makes more sense. Teams are not simply “moving to the edge.” They are opting into a very different compute model—one that trades some environment generality for faster deployment, lower operational overhead, and a tighter path from code to global execution.

Why Migrations Are Accelerating From Vercel, Kubernetes, and Node Servers

A year or two ago, many teams looked at Cloudflare Workers with interest but kept production workloads elsewhere. The reasons were understandable: compatibility gaps, platform limits, immature framework support, awkward database stories, and the sense that “edge runtimes” were still best for tiny handlers rather than whole systems.

That has changed. Not completely, and not for every workload, but enough that migrations have gone from fringe experiments to a recognizable pattern.

You can see it directly in the conversation:

Osintly @Osintly Thu, 01 May 2025 15:27:50 GMT

We’ve just migrated our entire infrastructure from Vercel to Cloudflare ⛅ In the spirit of transparency, here’s a breakdown of why we did it, how it went, and what it means moving forward. 👇

View on X →

And again here:

xdemocle @xdemocle Fri, 26 Sep 2025 11:46:46 GMT

☁️ 🚨 Why I Chose Cloudflare's Edge-First Stack for My B2B Marketplace New blog's post: (after months of silence 😆 ) From Next.js + Vercel to Cloudflare Workers: Why I rebuilt my entire infrastructure.

View on X →

Those are not toy examples. They reflect a broader shift in how teams evaluate hosting: not “which platform matches our current stack most closely?” but “which platform lets us simplify the stack while keeping acceptable compatibility?”

The three migration paths showing up most often

In practice, the migration wave clusters around a few repeatable paths.

1. From Vercel and Next.js hosting to Workers

This is probably the highest-profile migration pattern because Vercel has long been the default home for modern frontend teams, especially those deeply invested in Next.js.

Why move? The reasons practitioners cite most are cost pressure as usage scales, a desire to consolidate hosting, data, and delivery onto one platform, and frustration with framework-specific hosting tradeoffs.

Cloudflare has leaned into this by making full-stack development on Workers a first-class story, with frontend, backend, and data living on the same platform.[6] That is important. The migration case is much stronger when Workers is not merely “where your frontend functions run,” but where your app can actually live.

2. From Kubernetes-hosted internal or web apps to Workers

This is an underappreciated trend. Kubernetes is incredibly powerful, but many teams are paying a complexity tax for workloads that do not need that much machinery.

The X conversation included a striking example:

rita kozlov 🐀 @ritakozlov Tue, 10 Feb 2026 19:29:12 GMT

a team at @cloudflare just moved an internal nextjs app that used to run in k8s to cloudflare workers + workers vpc for the bits that need to connect to core services

this happened in a few hours! few big takeaways:

1. this is the year of migration & rewrites. it's happening!

2. if you don't know where to start with cloudflare workers because you still have things running in cloud / on prem, workers vpc is a good place to get started!

3. if you tried workers a long time ago, now is a good time to try it again. it has gotten so much richer with support for node js, higher limits, ability to connect to internal services, new products like queues, workflows, pipelines, etc

View on X →

That post matters because it points to something broader than “edge-native startup app” momentum. It suggests established teams with existing cloud and internal-service dependencies are using Workers as a serious migration target, even when they are not rewriting everything at once.

The enabling piece here is Workers VPC and related connectivity features: teams can move the user-facing or latency-sensitive layer to Workers while still reaching private or centralized systems that remain elsewhere.[2] This softens the migration cliff. You no longer have to choose between “all on Workers” and “not at all.”

3. From Express-style Node servers to Hono on Workers

This is the grassroots migration path: developers with conventional Node HTTP apps looking for a lightweight server framework that maps well onto the Workers model.

That is why this post resonated:

abdullah @abdullah_twt23 Fri, 10 Oct 2025 12:12:11 GMT

Urgent!!!
Anyone used Hono + Cloudflare workers. Need A green flag for this.

Urgently have to migrate from express to hono serverless.

View on X →

Hono has become popular in the Workers ecosystem because it gives developers an Express-like or minimalist web framework feel while targeting runtimes like Workers cleanly. The appeal is obvious: keep familiar routing and middleware concepts, but drop the server ownership burden.
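The conceptual translation from Express is small: instead of a long-lived server mutating a `res` object, you write a stateless handler that takes a Web-standard Request and returns a Response. Hono wraps that shape with Express-like routing; here is a framework-free sketch of the underlying shape, with an illustrative route and payload:

```javascript
// Express style (long-lived Node server):
//   app.get("/users/:id", (req, res) => res.json({ id: req.params.id }));
//
// Workers style: one stateless handler. Web-standard Request in,
// Response out. Frameworks like Hono layer routing on top of this.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    const match = url.pathname.match(/^\/users\/([^\/]+)$/);
    if (request.method === "GET" && match) {
      return new Response(JSON.stringify({ id: match[1] }), {
        headers: { "content-type": "application/json" },
      });
    }
    return new Response("Not found", { status: 404 });
  },
};
```

Because the handler owns no socket and holds no process state, the platform can run it anywhere, which is exactly what makes the model portable across Cloudflare's network.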

Why migrations are easier now than they used to be

The biggest reason migration momentum is real in 2026 is that Workers is no longer the same product many people tried a few years ago.

Cloudflare’s docs and platform updates show a much broader, richer runtime and platform story than the early “small JavaScript at the edge” era.[2] The differences that matter most for migration are substantially expanded Node.js compatibility, higher platform limits, private connectivity to internal services via Workers VPC, and adjacent products like Queues, Workflows, and Pipelines.

Put simply: earlier versions of Workers often asked teams to contort their apps into the platform. Today, the platform bends much more toward real application requirements.

The company’s own migration narratives reinforce this. Cloudflare has published examples of organizations reducing complexity and cost by moving workloads onto its platform, including significant cloud cost reductions in some cases.[12] Those stories should always be read critically—vendor case studies are marketing—but they align with the practical logic teams are discussing publicly: fewer moving parts, less egress, and more integrated infrastructure.

The real motivators behind migration

When practitioners describe these moves, four motivators show up repeatedly.

Cost

Workers can be attractive when compared with stacks that accumulate costs across multiple layers: compute hosting, CI minutes, bandwidth and egress, CDN and TLS services, and the third-party vendors filling the gaps between them.

Consolidation does not always mean cheaper, but it often means more legible cost structure. And for some teams, especially those building globally distributed apps, that matters as much as the absolute bill.

Simpler operations

This is the strongest motivator. Teams are tired of infrastructure assembly work. If Workers lets them remove CI complexity, container pipelines, reverse proxies, or regional fleet management, that operational simplicity can outweigh some runtime compromises.

Better global delivery

Workers turns global rollout from a special project into the default platform behavior.[2] If your app serves users across geographies, that is not a minor feature. It changes architecture decisions upstream.

Stack consolidation

Cloudflare’s strategy is increasingly persuasive when adopted as a system: compute, storage, cache, static assets, internal service communication, and edge networking under one roof.[6] Even teams that would not pick Workers as a standalone function runtime may pick it as part of an integrated stack.

Why this is more than hobbyist enthusiasm

A good litmus test for hype is whether people are migrating toward the platform from already-working setups. That is what we are seeing. Not universally, and not without tradeoffs, but enough to call it a real trend.

The strongest evidence is not that people say Workers is cool. It is that they are moving off systems that were already viable—Vercel, Kubernetes, Express servers—and deciding the rewrite or migration cost is worth paying.

That only happens when the target platform has crossed a credibility threshold.

Workers appears to have crossed it.

Why Workers Is More Than Functions: The Platform Story Teams Are Buying Into

If you still think of Cloudflare Workers as “small edge functions,” you are missing the reason many teams are adopting it seriously.

What developers are buying into is not a single compute primitive. It is an integrated platform where compute, data, communication, and connectivity increasingly fit together without forcing users to stitch together five vendors and three network boundaries.

That is why this post landed so well:

Gabriel Massadas @G4brym Sun, 18 Jan 2026 22:59:12 GMT

I built a self-hosted Sentry clone that runs entirely on Cloudflare Workers, and I think it showcases one of the most underrated features in the Cloudflare ecosystem: Service Bindings. Let me explain why this matters. When you have multiple Cloudflare Workers (an API, a webhook handler, a cron job), they all need common things: error tracking, authentication, rate limiting, metrics. The typical solution? External HTTP calls to third-party services. That means: - 50-200ms latency per call - Egress fees - Your data leaving your infrastructure - Another vendor to manage Service bindings let Workers call each other directly inside Cloudflare's network. No HTTP. No internet. Just internal RPC with <5ms latency. With Workers Sentinel, any Worker in my account can just point Sentry-SDK into the Service binding, and have all errors flow into one centralized dashboard, stored in Durable Objects with SQLite. No external calls. No added latency. Service bindings aren't just for error tracking. You can centralize: 🔐 Authentication — One Worker that validates tokens for all your services 📊 Metrics — Centralized collection without external observability costs 🚦 Rate Limiting — Shared counters that actually work across Workers 🚩 Feature Flags — Instant propagation, no deployment needed Think of it as building your own internal microservices mesh, but at the edge, with zero network overhead. Workers Sentinel uses two Durable Objects: - AuthState (singleton) — users, sessions, projects - ProjectState (per-project) — issues, events, stats Events are fingerprinted and grouped intelligently. The dashboard is a Vue.js app served from the same Worker. I could say i built this to learn Durable Objects or that I needed error tracking for side projects, but honestly I just need a way to show my wife why I'm sending $200/month to some guy named Claudio who apparently helps me write code. The whole thing is open source. 
Deploy it to your Cloudflare account, point your Sentry SDKs at it, and you're done. But more importantly: take a closer look at service bindings. They're the glue that turns a collection of Workers into an actual platform. Most Cloudflare customers I talk to aren't using them, and they're missing out. To the Sentry team: I love your work. Genuinely. Sentry is battle-tested, has incredible features, and is what you should use for anything that matters. This project is a toy. A learning exercise. A weekend hack that got slightly out of hand. Please do not trust your production errors to this dummy clone. If your startup goes down at 3 AM because Workers Sentinel missed an edge case, that's on you. I warned you. Use the real thing. But if you want to learn about Durable Objects, service bindings, and how error tracking works under the hood? Clone away. Your Workers shouldn't be islands. Connect them.

View on X →

The key concept there is Service Bindings.

Service Bindings: internal communication without the usual overhead

Service Bindings allow one Worker to call another through Cloudflare’s platform-native mechanism rather than treating every internal interaction like an external HTTP call.[6] For practitioners, that matters for three reasons:

  1. Latency: internal calls avoid some of the normal overhead of public HTTP networking
  2. Cost: less egress and less dependence on external vendors
  3. Architecture: you can decompose an application into platform-native services without creating a mess of public endpoints
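In code, a Service Binding looks like an ordinary fetch call aimed at a binding on `env` instead of a public URL. A sketch follows; the `AUTH` binding name and internal URL are illustrative, and the binding itself would be declared in the Wrangler config:

```javascript
// A Worker delegating token validation to a central auth Worker over a
// Service Binding. env.AUTH (illustrative name) exposes a fetch() that
// stays inside Cloudflare's network: no public HTTP hop, no egress.
const api = {
  async fetch(request, env) {
    const verdict = await env.AUTH.fetch(
      new Request("https://auth.internal/verify", {
        headers: {
          authorization: request.headers.get("authorization") ?? "",
        },
      })
    );
    if (verdict.status !== 200) {
      return new Response("Unauthorized", { status: 401 });
    }
    return new Response("ok");
  },
};
```

The calling code is indistinguishable from an external call, which is what makes centralizing auth, rate limiting, or metrics behind a binding so cheap architecturally.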

In conventional cloud setups, internal service communication often drags in a lot of infrastructure: load balancers, service discovery, internal DNS, API gateways, mutual TLS, and the monitoring needed to keep all of it honest.

Workers gives teams a lighter-weight internal composition model. For edge-native applications, that can be a major simplifier.

D1, KV, and R2: the “full-stack on Workers” pitch

Cloudflare’s full-stack Workers story depends on data products, not just code execution.[6]

Here is the rough division of labor: D1 is a SQL database for relational application data; KV is a globally distributed key-value store suited to read-heavy data like sessions and configuration; R2 is object storage for files and assets.

None of these should be treated as universal replacements for every database or storage product. But together they let teams keep far more of the application inside one environment.
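As a small illustration of the KV slot in that division, here is a sketch of a session check; the `SESSIONS` binding name and cookie format are illustrative, and `get` mirrors KV's string-based read API:

```javascript
// Session lookup against a KV binding. The binding (env.SESSIONS, an
// illustrative name) would be declared in the Wrangler config; get()
// returns the stored string, or null when the key is absent.
const app = {
  async fetch(request, env) {
    const cookie = request.headers.get("cookie") ?? "";
    const sid = cookie.match(/sid=([^;]+)/)?.[1];
    const user = sid ? await env.SESSIONS.get(sid) : null;
    if (!user) {
      return new Response("Sign in first", { status: 401 });
    }
    return new Response(`hello, ${user}`);
  },
};
```

The point is not the five lines of logic; it is that session state lives on the same platform as the compute, with no external session store to provision or pay egress to.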

That is what people mean when they say Workers feels production-ready now. Not that every feature is perfect, but that you can plausibly build an application with Workers for compute, D1 for relational data, KV for sessions and configuration, R2 for files, and Service Bindings for internal calls, all without immediately exiting the platform.

This post captures the interest from developers thinking exactly in those terms:

Giuseppe @giuseppegurgone Fri, 30 Jun 2023 19:27:44 GMT

Choosing @DrizzleOrm over Prisma enables a few interesting things including:

- Host on Cloudflare Workers
- Put a DB cache at the Edge like @PolyScaleAi in front of the DB

Can’t wait to attempt a migration on my hobby project

View on X →

The Drizzle-versus-Prisma mention is telling. Runtime choice increasingly affects ORM choice, package choice, and architecture choice. Workers is not just a host; it shapes the software stack around it.

Workers VPC: the bridge for real enterprises

A platform feels complete not when greenfield apps love it, but when brownfield teams can adopt it incrementally.

That is where Workers VPC matters. It helps teams connect Workers-based applications to private or core services that still live in cloud VPCs or on-prem-style environments, making phased migration much more realistic.[2] Instead of demanding that every dependency become edge-native on day one, Cloudflare lets Workers operate as a front door and orchestration layer for systems that remain partly centralized.

This is strategically significant. It changes Workers from an all-or-nothing proposition into a migration layer.

For example, a team might move the user-facing web layer to Workers first, reach its existing database and internal APIs through Workers VPC, and migrate those backing services later, if ever.

That lowers migration risk dramatically.

Why the platform story matters more than raw runtime specs

Developers do not adopt compute in isolation. They adopt systems that reduce the number of boundaries they have to manage.

A fast runtime with weak storage and awkward networking is not enough. A globally distributed handler with no good internal composition model is not enough. A nice edge story that collapses the moment you need private connectivity is not enough.

Workers is increasingly compelling because the surrounding platform fills those gaps.

And that, more than benchmark arguments alone, is why the conversation has shifted. The market is no longer asking, “Can Cloudflare run code?” It is asking, “Can Cloudflare replace enough of my stack to make my life simpler?”

For a growing number of teams, the answer is yes.

Performance Reality Check: Where Workers Shines, Where It Has Been Criticized, and What Changed

No topic in the Workers conversation generates more heat than performance. And to understand the platform honestly, you have to hold two ideas in your head at once:

  1. Workers can deliver excellent real-world performance for many edge-oriented workloads.
  2. Workers has also faced legitimate criticism around CPU performance, consistency, and runtime behavior in some scenarios.

Those statements are not contradictory.

The criticism was real

The sharpest public critique in the conversation came from Vercel CEO Guillermo Rauch:

Guillermo Rauch @rauchg Sat, 04 Oct 2025 16:13:45 GMT

Vercel Fluid vs Cloudflare Workers.

💬"From my findings, Vercel is 1.2x to 5x faster than CloudFlare for server rendering."

We gave a very, very earnest try to Workers when we explored the edge runtime / world. There's no "beef", we had to migrate off for technical reasons.

To be fair to them, they brought new ideas to the market. The CPU-based pricing for instance was good and Vercel Fluid has it as well.

The main issues we ran into:
1️⃣ Bad CPU performance, low capacity, very irregular and spiky latency. The benchmarks show you this.
2️⃣ Single-vendor runtime. You can't run "Node.js 24". You run "whatever they give you, which is trying to look like Node.js but it's not"
3️⃣ Really bad connectivity to the clouds. We measured the roundtrip time between AWS and CF Workers as being in the low 10s to 100s of milliseconds

The result of us migrating off was shipping Fluid. You pay for CPU, it handles concurrency like a server (cost-efficient), you control the size / memory of the functions, you get full, open runtimes like @nodejs and Python, you get 1ms latency to cloud services…

Most people today are using Fluid and they don't even notice, because it just works with the entire ecosystem.

Here are the benchmarks @theo ran:

View on X →

You do not have to agree with every implication there to see why the post resonated. It names the three most common concerns advanced teams raise when evaluating Workers against Node-based or container-based alternatives:

  1. Raw CPU performance and latency consistency under load.
  2. A single-vendor runtime that resembles Node.js but is not stock Node.js.
  3. Network round-trip costs between Workers and services hosted in other clouds.

Those are not superficial objections. They go directly to whether Workers can replace an existing production environment or only complement it.

Historically, some teams did experience Workers as irregular under CPU pressure or less predictable for heavy server rendering. Others found the runtime close to Node, but not close enough for their dependency graph or operational assumptions. And for apps still deeply tied to AWS-hosted services or regional databases, edge execution could expose network path penalties rather than eliminate them.

But Cloudflare’s fixes also appear real

What changed—and why the current discussion feels different—is that Cloudflare publicly investigated benchmark gaps and published a detailed explanation of what it found and fixed.[1] The company did not just wave away complaints; it identified concrete causes, including request scheduling behavior, outdated V8 garbage collector settings, excess buffer copying, stream configuration problems, and even an upstream V8 optimization opportunity.[1]

That technical specificity matters because it suggests the gap was not reducible to “isolates are fundamentally slower.” Some of it came from platform implementation details, queueing behavior, benchmark configuration mismatches, and runtime tuning decisions that could actually be corrected.

The practitioner summary making the rounds captured that pretty well:

vaish @wishee0 Tue, 14 Oct 2025 20:37:47 GMT

saw lots of people mentioning that their workers suddenly got faster after @theo's video, looks like this blog explains it. how cloudflare closed a 3.5x performance gap, tl;dr: - fixed request scheduling during cpu bursts (biggest impact - wasn't even cpu speed, just bad queueing) - updated v8 garbage collector from 2017 settings (+25% boost - 8 year old config still in prod is kinda crazy) - removed unnecessary buffer copies in opennext (50 x 2kb buffers per request - classic death by a thousand cuts lmao) - switched to byte streams with proper highwatermark (4096 - interesting to know that defaults matter way more what i used to think) - patched v8 json.parse with reviver (+33%, upstreamed to chrome 143 - fixing v8 itself is pretty insane ngl) - fixed missing node_env=production in react ssr benchmark (dev mode in prod benchmarks... oops 👀👀) - enabled force-dynamic in next.js config for proper streaming (config mismatches pretty much suck to debug) - fixed node.js slow trig functions (3x faster, benefits everyone - they literally fixed their competitor's platform lmfao) now performs on par with @vercel on pretty much all benchmarks except next.js (gap significantly closed, work ongoing!!!!!)

View on X →

Cloudflare itself also amplified the benchmark investigation and fixes:

Cloudflare @Cloudflare 2025-10-14T20:07:06Z

Cloudflare investigated CPU performance benchmark results for Workers, uncovering and fixing issues, making Cloudflare Workers faster for all customers.

https://blog.cloudflare.com/unpacking-cloudflare-workers-cpu-performance-benchmarks/

View on X →

If you strip away the social media dunking, the takeaway is nuanced but important:

  1. part of the measured gap was real, but it came from fixable implementation details rather than the isolate model itself
  2. Cloudflare closed much of that gap, and the fixes shipped to all customers
  3. benchmark results recorded before those changes no longer describe the platform

That does not mean all concerns are gone. It means old assumptions deserve re-testing.

Edge latency and CPU throughput are different questions

A common mistake in platform evaluation is blending two separate performance dimensions into one.

1. Network proximity and user-perceived latency

Workers is very strong here. If logic executes near where requests enter the network, users can get faster first-byte times and lower request overhead for edge-suitable operations.[2] This matters for auth, redirects, personalization, rendering at the edge, and streaming.

2. Raw compute performance and sustained heavy workloads

This is where the picture gets more conditional. Workers can do meaningful computation, but isolate-based edge runtimes are not automatically the best answer for CPU-heavy jobs, long-lived processes, or workloads that need extensive memory and system control. The architecture is optimized for fast, dense, globally distributed execution—not for every class of server task.

That distinction is the key to sane evaluation. A platform can be the fastest way to serve globally distributed user requests and still be the wrong place to do your most compute-intensive backend processing.

Why queueing and scheduling matter more than people think

One of the most interesting lessons from Cloudflare’s benchmark post is that perceived CPU slowness was partly a scheduling problem.[1] In distributed systems, users often experience “performance” as a combination of:

  1. raw compute speed
  2. how requests queue and get scheduled under concurrent load
  3. network path and time-to-first-byte

In other words, a platform can look slow because requests are waiting badly, not because the processor itself is weak.

That is important for practitioners because it changes what to measure. If you are comparing Workers with a regional Node server or a container platform, you should not only ask:

  1. how fast is a single request on an otherwise idle system?

You should also ask:

  1. how does latency behave under bursts of concurrent requests?
  2. how bad is the tail (p95, p99), not just the median?
  3. how is work scheduled when CPU is contended?

These are operational questions, not just benchmark questions.
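Those operational questions can be made concrete with a small measurement harness. This is a minimal sketch, not a rigorous load tester: the request function is injected (in practice it would be something like `() => fetch(yourEndpoint)` aimed at each candidate platform), and the percentile math is the simple sorted-sample kind.

```typescript
// Sketch: measure tail latency under concurrency, not just single-request
// speed on an idle system. `run` performs one request and is supplied by
// the caller, so the harness itself has no platform assumptions.
async function latencyPercentiles(
  run: () => Promise<unknown>,
  concurrency: number,
  total: number
): Promise<{ p50: number; p99: number; samples: number }> {
  const samples: number[] = [];
  let issued = 0;
  async function worker(): Promise<void> {
    // Each worker claims request slots until the budget is spent.
    while (issued < total) {
      issued++;
      const t0 = performance.now();
      await run();
      samples.push(performance.now() - t0);
    }
  }
  await Promise.all(Array.from({ length: concurrency }, () => worker()));
  samples.sort((a, b) => a - b);
  const pick = (q: number) =>
    samples[Math.min(samples.length - 1, Math.floor(q * samples.length))];
  return { p50: pick(0.5), p99: pick(0.99), samples: samples.length };
}
```

Comparing the p50/p99 spread at different concurrency levels is what exposes queueing behavior; a platform with fine median latency can still have a tail that balloons under bursts.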

Streaming is one of Workers’ clearest performance wins

Where Workers tends to shine most clearly is in workloads where responsiveness matters more than monolithic completion time—especially streaming and incremental delivery.

If your app can start sending useful bytes early, edge placement and proper streaming support can produce a much better user experience even when total server work is nontrivial. This is especially relevant for:

  1. AI chat and token-by-token interfaces
  2. server-rendered pages that stream HTML as it is produced
  3. large responses that would otherwise be buffered in full before sending

That is why framework and platform support for byte streams, buffering behavior, and watermarks became part of the recent performance conversation.[1] It is not implementation trivia. It directly affects perceived speed.
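Two of the stream-level issues called out in the benchmark investigation can be shown in miniature: coalescing many tiny buffers into a single enqueue (the "50 x 2kb buffers" problem), and using a byte-counting queuing strategy with an explicit high-water mark. This is an illustrative sketch using standard Web Streams APIs; the 4096 figure echoes the value mentioned above, and the data is made up.

```typescript
// Sketch: merge many small buffers into one allocation and one enqueue,
// instead of paying per-chunk copy and bookkeeping costs ("death by a
// thousand cuts"), and size the queue by bytes via an explicit highWaterMark.
function makeCoalescedStream(parts: Uint8Array[]): ReadableStream<Uint8Array> {
  const total = parts.reduce((n, p) => n + p.length, 0);
  const merged = new Uint8Array(total);
  let offset = 0;
  for (const p of parts) {
    merged.set(p, offset);
    offset += p.length;
  }
  return new ReadableStream<Uint8Array>(
    {
      start(controller) {
        controller.enqueue(merged); // one enqueue, not one per part
        controller.close();
      },
    },
    // Backpressure measured in bytes, not chunk count.
    new ByteLengthQueuingStrategy({ highWaterMark: 4096 })
  );
}
```

The point is not this exact code but the category of fix: defaults around buffering and backpressure shape throughput in ways that look like "slow CPU" from the outside.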

The cloud-connectivity objection is still real

Rauch’s third point—connectivity to the clouds—remains one of the more serious decision factors. If your Worker sits at the edge but must call an AWS service in a region on every request, the network path can dominate your latency budget. In some architectures, this largely cancels out the edge advantage.

This is why the best Workers architectures usually do one of three things:

  1. keep enough logic and data on-platform to avoid centralized round trips
  2. use private connectivity features to reduce integration friction
  3. reserve Workers for request handling, auth, caching, or orchestration while leaving centralized heavy lifting elsewhere

If you ignore this, you can end up with an edge-shaped architecture that still behaves like a cross-cloud hairpin.
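The first and third patterns above can be sketched as an edge-first lookup that only pays the cross-cloud round trip on a miss. Everything here is hypothetical scaffolding: `edgeGet` stands in for an on-platform lookup (a KV or D1 binding), `originGet` for a call to a centralized service in a cloud region.

```typescript
// Sketch: answer from edge-local data when possible; hairpin to a
// centralized service only when the edge has no answer. Both accessors
// are injected placeholders, not real Cloudflare APIs.
async function edgeFirst<T>(
  key: string,
  edgeGet: (k: string) => Promise<T | null>,
  originGet: (k: string) => Promise<T>
): Promise<{ value: T; source: "edge" | "origin" }> {
  const local = await edgeGet(key);
  if (local !== null) return { value: local, source: "edge" };
  // Cross-cloud round trip happens only on a miss.
  return { value: await originGet(key), source: "origin" };
}
```

If most requests resolve on the "edge" branch, the architecture keeps its latency advantage; if most fall through to "origin", you have built the hairpin the paragraph above warns about.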

What experts should benchmark now

If your team is evaluating Workers seriously in 2026, benchmark current reality, not forum memories from 2022 or 2023.

Specifically test:

  1. time-to-first-byte from the geographies where your users actually are
  2. CPU-bound request latency under burst concurrency, including the tail
  3. streaming behavior end to end, through any proxies you really run
  4. round-trip latency from Workers to the cloud services your app depends on
  5. deploy-to-live time in your actual workflow, not a demo repo

Do not rely on toy hello-world tests, and do not rely on vendor-neutrality rhetoric from any side. Workers, Vercel Fluid, Node servers, Fly.io, and container platforms each optimize for different things.

The honest verdict on performance

Here is the clearest way to frame it:

  1. for network-proximate, request-shaped, and streaming workloads, Workers is genuinely fast
  2. for sustained CPU-heavy compute, the answer is conditional, and containers or regional servers may still win
  3. many widely shared performance criticisms predate fixes that have since shipped

That is the real state of play. The performance debate is no longer “Workers is amazing” versus “Workers is slow.” The serious question is whether Workers’ performance profile matches your workload profile.

For a growing set of applications, it does.

Best-Fit Workloads: Where Cloudflare Workers Delivers the Most Value

The easiest way to understand Workers is to stop asking whether it can do everything and start asking where it creates outsized leverage.

The platform’s sweet spot is not “all backend computing.” It is workloads that benefit from three properties simultaneously:

  1. execution close to users, everywhere, by default
  2. short, request-shaped units of work rather than long-lived processes
  3. minimal operational overhead around deployment and scaling

When those line up, Workers is unusually compelling.

Streaming responses and incremental delivery

One of the strongest fits is streaming.

Workers supports streaming responses directly, which matters for modern applications that do not want to hold the entire response in memory or wait for all work to finish before sending anything to the client.[2] This is especially important for AI interfaces, server rendering, and real-time-feeling applications.

The X conversation captured the operational significance well:

José Manuel Díaz @jmarellanes 2026-03-12T21:57:05Z

The catch: infrastructure needs to support chunked responses.

Most Node servers handle this fine.
But some proxies, CDNs, and serverless platforms buffer everything anyway.

AWS Lambda needs response streaming mode enabled.
Cloudflare Workers and Vercel Edge support it out of the box.

View on X →

That “out of the box” support is not a small detail. Streaming is one of those capabilities that sounds standard until you encounter a platform or proxy that buffers everything and destroys the user experience. Workers’ edge location plus streaming support makes it a natural fit for applications where time-to-first-byte matters more than just total completion time.
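The shape of such a handler is simple enough to sketch. This is a minimal illustration of a streaming, Worker-style fetch handler using standard Web APIs; the chunk list stands in for real incremental work such as model tokens or rendered HTML fragments.

```typescript
// Sketch: bytes go out as they become available, so time-to-first-byte
// does not wait for total completion.
function streamingResponse(chunks: string[]): Response {
  const encoder = new TextEncoder();
  let i = 0;
  const body = new ReadableStream<Uint8Array>({
    pull(controller) {
      if (i >= chunks.length) {
        controller.close();
      } else {
        controller.enqueue(encoder.encode(chunks[i++]));
      }
    },
  });
  return new Response(body, {
    headers: { "content-type": "text/plain; charset=utf-8" },
  });
}

// Worker-shaped entry point (module syntax), kept as a plain object here
// so it can also be invoked directly in tests.
const worker = {
  async fetch(_req: Request): Promise<Response> {
    return streamingResponse(["first bytes fast, ", "the rest follows"]);
  },
};
```

Because the body is a `ReadableStream` rather than a fully materialized string, nothing in the handler forces buffering; whether the client sees early bytes then depends on the platform and proxies in between, which is exactly the catch the quoted post describes.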

Edge rendering and latency-sensitive web apps

Workers is also a strong choice for edge rendering—serving or rendering content closer to users rather than centralizing all web response generation in one region.

That is why this blunt recommendation resonates:

Divanshu Chauhan (divkix) @Divkix 2026-03-11T20:43:32Z

They should move to cloudflare workers for edge rendering

View on X →

Not every app needs edge rendering. But when users are globally distributed and page generation, personalization, auth, or route logic sits in the request path, moving that work closer to the user can improve responsiveness meaningfully.

APIs, middleware, auth, rate limiting, and webhooks

Workers is arguably at its most intuitive when used for request-path logic:

  1. public and internal APIs
  2. middleware and routing
  3. authentication and session checks
  4. rate limiting
  5. webhook receivers

These patterns fit the model well because they are:

  1. short-lived and request-shaped
  2. latency-sensitive
  3. naturally stateless, or easily backed by on-platform storage

This is where the platform can replace a lot of “small but annoying” infrastructure with a simpler deployment model.
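One of those request-path patterns, rate limiting, fits in a few lines. This is a deliberately simplified fixed-window sketch: in a real Worker the counter would live in KV or a Durable Object rather than in a per-isolate `Map`, and the limit and window are arbitrary.

```typescript
// Sketch: fixed-window rate limiter keyed by client identifier.
// The in-memory Map is for illustration only; isolate memory is neither
// shared across locations nor durable.
const WINDOW_MS = 60_000; // 1-minute window (illustrative)
const LIMIT = 100;        // max requests per window (illustrative)
const hits = new Map<string, { count: number; windowStart: number }>();

function allowRequest(clientId: string, now: number): boolean {
  const entry = hits.get(clientId);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    // First request in a fresh window.
    hits.set(clientId, { count: 1, windowStart: now });
    return true;
  }
  entry.count++;
  return entry.count <= LIMIT;
}
```

A handler would call `allowRequest` before doing real work and return a 429 on `false`; the interesting part on Workers is only where the counter state lives, not the logic itself.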

Lightweight integrations and automation

Another theme emerging in the conversation is that Workers lowers the barrier to building custom integrations quickly—even for users who are not traditional backend engineers.

˗ˏˋ Jesse Hanley ˎˊ˗ @jessethanley Mon, 09 Mar 2026 04:25:34 GMT

A year ago this non-technical user might of churned unless we built the integration they were after. Now they are slinging together Cloudflare Workers, coding, and bringing things live. Much to think about on this.

View on X →

That post points to something larger than no-code enthusiasm. Workers sits at an interesting intersection: simple enough to deploy quickly, powerful enough to connect services, and globally available by default. That makes it attractive for:

  1. customer-specific integrations
  2. internal tools and automation glue
  3. webhook bridges between SaaS products

In older infrastructure models, these jobs often end up overbuilt because the platform choice itself drags in CI, hosting, observability, and deployment complexity. Workers changes the economics of small software.

Entire products, not just request rewrites

The conversation also makes clear that people are building whole products on Workers, not just CDN handlers or experimental edge scripts.

That includes:

  1. SaaS products and dashboards
  2. full-stack web applications
  3. APIs whose data layer lives on-platform

Cloudflare’s own full-stack positioning reflects this shift.[6] When paired with D1, KV, R2, Durable Objects, and Service Bindings, Workers can support architectures that would previously have required a more traditional backend stack.
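The composition is easiest to see in a handler that touches a binding. This sketch mirrors the module-Worker shape (an `Env` of wrangler-configured bindings passed to `fetch`); the `MY_KV` binding name is hypothetical, and the `KVLike` interface is a narrowed stand-in for the real KV namespace API so the logic can be exercised without the platform.

```typescript
// Sketch: a Worker-shaped handler composing request logic with an
// on-platform KV-style binding for read-through caching.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}
interface Env {
  MY_KV: KVLike; // hypothetical binding name from wrangler config
}

const app = {
  async fetch(req: Request, env: Env): Promise<Response> {
    const key = new URL(req.url).pathname.slice(1) || "home";
    const cached = await env.MY_KV.get(key);
    if (cached !== null) {
      return new Response(cached, { headers: { "x-cache": "hit" } });
    }
    // Stand-in for real work: a D1 query, a render, a service binding call.
    const fresh = `rendered:${key}`;
    await env.MY_KV.put(key, fresh);
    return new Response(fresh, { headers: { "x-cache": "miss" } });
  },
};
```

The design point is that storage, rendering, and routing all live behind one deploy unit, which is what "frontend, backend, and database in one Worker" amounts to in practice.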

A practical shortlist of best-fit use cases

If you want the simple checklist, Workers is often a very good fit for:

  1. streaming and incremental-delivery endpoints
  2. edge rendering and latency-sensitive web apps
  3. APIs, middleware, auth, rate limiting, and webhooks
  4. lightweight integrations and automation
  5. globally distributed, read-heavy applications

The throughline is simple: Workers delivers the most value when network position and operational simplicity matter as much as raw compute.

Trade-Offs, Limits, and When Workers Is the Wrong Tool

The strongest case for Workers is also the source of its main limitations.

Because Workers is an isolate-based, platform-constrained runtime, it cannot be everything a full Node environment or container platform is. And Cloudflare’s own roadmap now makes that explicit.

Compatibility is good—just not universal

Workers has gotten much better at supporting Node-style applications and packages, but it is still not identical to a normal server or unrestricted Node runtime.[2][9] If your application depends on:

  1. native Node addons or deep Node internals
  2. unrestricted filesystem or process access
  3. long-lived background processes
  4. a specific Node version for its dependency graph

—you may hit friction.

This does not mean Workers is immature. It means the platform has a specific execution model and security boundary. The more your app expects traditional server semantics, the less elegant the fit becomes.

Some workloads simply want containers

Longer-running jobs, high-memory processes, browser automation at scale, specialized runtimes, and compute-heavy tasks can outgrow the Worker model. Cloudflare’s introduction of Containers is revealing here—not as a contradiction, but as an admission that isolates have boundaries.

Dane Knecht 🦭 @dok2001 Tue, 24 Jun 2025 18:21:14 GMT

Big day for @Cloudflare as we launch our newest compute primitive, Containers! A bit of history: In 2020, we acquired S2 Remote Browser tech and faced the challenge of migrating it from AWS to our edge. To run a Chromium-based browser securely, we split our team: half focused on the Remote Browser app, half built a robust, independent container platform to support it. We knew some workloads needed to be close to users but didn’t fit our Workers isolate model. This platform became a game-changer, empowering dozens of internal teams to build features like Workers CI/CD, Browser Rendering, Key Transparency, Workers AI, and more. But we kept asking: Is this the right primitive for our users? Workers remains the go-to for globally distributed, effortlessly scalable compute at a great price. Initially, many use cases we heard were for single-node webservers that didn’t need region earth. Then we got excited as users started asking for latency-sensitive, real-time applications and the ability to run agents close to the users they serve. Cloudflare Containers are here to deliver for those high-performance, user-proximal workloads. Excited to see what you build with it!

View on X →

That post is unusually candid. It says, in effect: Workers remains the default for globally distributed scalable compute, but some user-proximal workloads do not fit the isolate model. That is exactly right.

Containers exist because some applications need:

  1. more memory and CPU than an isolate provides
  2. full, unmodified runtimes and system-level access
  3. specialized software, such as a Chromium-based browser
  4. longer-running processes

When not to use Workers

Workers is usually the wrong primary runtime if your core workload is:

  1. sustained, CPU-heavy batch processing
  2. high-memory or long-running jobs
  3. tightly coupled to native dependencies or a specific runtime version
  4. chatty with data that lives in a single cloud region

It can still play a role in front of these systems—for auth, caching, request routing, or edge mediation—but it may not be the right place to run the workload itself.

The practical takeaway

Do not treat Workers as a religion. Treat it as a very strong platform with a distinct shape.

Use it where its shape matches the problem:

  1. request-path logic, streaming, and edge rendering
  2. globally distributed APIs and integrations
  3. products whose data can live on-platform

Do not force it onto workloads that are really asking for container semantics.

Cloudflare seems to understand this better now than many enthusiasts do, which is a healthy sign for the platform.

Where the Platform Is Heading and How to Decide If Your Team Should Switch

The roadmap signals are pretty clear: Workers is becoming Cloudflare’s center of gravity.

That is visible both in product development and in how practitioners talk about the platform. The ecosystem is consolidating around Workers as the default application model, not one option among many.

Ronan Berder @hunvreus Wed, 11 Mar 2026 09:43:32 GMT

Although to be fair, Cloudflare made it clear they're moving everything over to Workers. Pages is EOL.

View on X →

Whether “Pages is EOL” is interpreted narrowly or broadly, the direction is hard to miss: Cloudflare wants developers building on Workers primitives.

What that means strategically

It means past objections may have shorter shelf lives than they used to. The platform is evolving fast, and Cloudflare is clearly investing in:

  1. runtime performance and Node compatibility
  2. on-platform storage and state: D1, KV, R2, Durable Objects
  3. Containers for the workloads the isolate model does not fit

That combination makes Workers more adoptable, not less. It no longer asks teams to bet on a narrow edge-function niche. It asks them to consider Cloudflare as an application platform.

Who should switch now?

A simple decision matrix helps.

Strong candidate to switch now

  1. globally distributed users and latency-sensitive request paths
  2. workloads dominated by APIs, middleware, auth, webhooks, and streaming
  3. teams shipping many times a day that want near-zero deployment friction

Pilot first

  1. apps with large or unusual Node dependency graphs
  2. workloads that call centralized cloud services on most requests
  3. teams that still need to validate observability and debugging workflows

Probably stay hybrid or use containers

  1. sustained CPU-heavy or high-memory compute
  2. long-running processes and specialized runtimes
  3. systems with strong data gravity in a single cloud region

How to evaluate safely

Do not start with your hardest workload. Start with a slice that reveals the platform’s strengths and your integration risks:

  1. a webhook service
  2. an auth or API gateway layer
  3. a streaming endpoint
  4. an internal app with modest backend complexity
  5. a globally distributed read-heavy API

Measure:

  1. latency percentiles from real user geographies, not just averages
  2. round trips to the external services you actually call
  3. dependency and runtime compatibility issues you hit along the way
  4. how much deployment and operations friction actually disappears

The teams getting the most from Workers are not necessarily the ones rewriting everything immediately. They are the ones choosing workloads where Workers changes the economics of shipping.

And that is the best way to think about the platform in 2026: not as a universal replacement for all compute, but as one of the clearest answers to a modern engineering problem—

how to ship globally distributed software without inheriting infrastructure drag as your real product.

Sources

[1] How Workers works — Cloudflare Developers

[2] Overview ¡ Cloudflare Workers docs

[3] Cloudflare is giving developers programmable access to the network edge with new service — TechCrunch

[4] The Ultimate Guide to Cloudflare Workers | by Caleb Rocca

[5] cloudflare/workers-sdk: Home to Wrangler, the CLI for Cloudflare Workers — GitHub

[6] Your frontend, backend, and database — now in one Cloudflare Worker — Cloudflare Blog

[7] Security model - Workers - Cloudflare Docs

[8] Safe in the sandbox: security hardening for Cloudflare Workers — Cloudflare Blog

[9] Fine-Grained Sandboxing with V8 Isolates - InfoQ

[10] How Cloudflare Workers Leverage V8 Isolates for Efficient Serverless Computing

[11] Cloud Computing Beyond Containers: How Cloudflare's Isolates Are Changing the Game

[12] 80% lower cloud costs: How Baselime moved from AWS to Cloudflare — Cloudflare Blog

[13] Cloudflare Workers scale too well and broke our infrastructure, so we switched to Cloudflare — Cloudflare Blog

[14] My Cloudflare Workers Migration: The Good, the Bad, and the Confusing

[15] The Migration of Legacy Applications to Workers — Cloudflare Blog

Further Reading