Zero-Egress Architecture with Cloudflare R2#

Every major cloud provider charges you to download your own data. AWS S3 charges $0.09/GB. Google Cloud Storage charges $0.12/GB. Azure Blob charges $0.087/GB. These egress fees are the most unpredictable line item on cloud bills – they scale with success. The more users download your data, the more you pay.

Cloudflare R2 charges $0 for egress. Zero. Unlimited. Every download is free, whether it is 1 GB or 100 TB. R2 exposes an S3-compatible API, so existing tools and SDKs work without changes. This single pricing difference changes how you architect storage, serving, and cross-cloud data flow.

The Egress Cost Problem#

Egress fees create perverse incentives. They penalize you for letting users access data. They make multi-cloud architectures expensive because moving data between clouds costs money in both directions. They make cost estimation difficult because download volume depends on user behavior you cannot predict.

Egress Pricing Comparison#

| Provider | Storage ($/GB/mo) | Egress ($/GB) | Free egress | Free storage |
|---|---|---|---|---|
| Cloudflare R2 | $0.015 | $0.00 | Unlimited | 10 GB |
| AWS S3 | $0.023 | $0.09 | 100 GB/mo (free tier, 12 months) | 5 GB (12 months) |
| Google Cloud Storage | $0.020 | $0.12 | 1 GB/mo | 5 GB |
| Azure Blob | $0.018 | $0.087 | 5 GB/mo | 5 GB (12 months) |
| Backblaze B2 | $0.006 | $0.01 | 3x storage/mo | 10 GB |

What Egress Actually Costs at Scale#

| Scenario | S3 Cost | GCS Cost | Azure Cost | R2 Cost |
|---|---|---|---|---|
| 100 GB served/month | $9.00 | $12.00 | $8.70 | $0 |
| 1 TB served/month | $90.00 | $120.00 | $87.00 | $0 |
| 10 TB served/month | $900.00 | $1,200.00 | $870.00 | $0 |
| 100 TB served/month | $8,500.00 | $10,000.00 | $8,100.00 | $0 |

The 100 TB figures come in below a flat per-GB rate because each provider applies tiered egress discounts at that volume.

At 10 TB/month of downloads, S3 egress alone costs $900/month, often more than an early-stage startup spends on the rest of its infrastructure combined. R2 eliminates this line item entirely.
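
A back-of-envelope model makes the comparison concrete. The sketch below uses the flat list prices from the tables above; the helper function and scenario are illustrative, and real bills add request fees and tiered egress discounts at volume.

// Rough monthly cost model: storage + egress at flat list prices.
// Illustrative only; real bills add request fees and tiered discounts.
function monthlyCostUSD(
  storageGB: number,
  egressGB: number,
  storageRate: number, // $/GB/month
  egressRate: number,  // $/GB
): number {
  return storageGB * storageRate + egressGB * egressRate;
}

// Scenario: 1 TB stored, 10 TB served per month
const s3Cost = monthlyCostUSD(1_000, 10_000, 0.023, 0.09); // ≈ $923
const r2Cost = monthlyCostUSD(1_000, 10_000, 0.015, 0);    // ≈ $15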

R2 Architecture Patterns#

Pattern 1: Direct Asset Serving#

Serve files directly from R2 through a Worker or via a public bucket with a custom domain. Every download is free.

// Worker that serves files from R2 with caching headers
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const key = url.pathname.slice(1); // remove leading /

    const object = await env.BUCKET.get(key);
    if (!object) {
      return new Response("Not found", { status: 404 });
    }

    const headers = new Headers();
    headers.set("Content-Type", object.httpMetadata?.contentType || "application/octet-stream");
    headers.set("Cache-Control", "public, max-age=86400");
    headers.set("ETag", object.httpEtag);

    return new Response(object.body, { headers });
  },
};
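
One optional refinement, sketched below: R2's get() accepts an onlyIf option that can take the request's headers directly, so the Worker can answer If-None-Match revalidations with a 304 instead of re-sending the body.

// Sketch: conditional GET support. When the client's If-None-Match matches,
// R2 returns object metadata without a body, and we reply 304 Not Modified.
const object = await env.BUCKET.get(key, { onlyIf: request.headers });
if (object === null) {
  return new Response("Not found", { status: 404 });
}
// A failed precondition returns metadata only (no body property)
if (!("body" in object)) {
  return new Response(null, { status: 304, headers: { ETag: object.httpEtag } });
}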

For simpler setups, enable public access on the R2 bucket and attach a custom domain through the Cloudflare dashboard. No Worker needed.

Pattern 2: Cross-Cloud Staging Layer#

R2’s zero egress makes it the ideal staging layer for multi-cloud architectures. Upload data from any cloud to R2 (ingress is free everywhere), then download from R2 as many times as needed for free.

┌──────────────┐       ┌──────────────┐       ┌──────────────┐
│  AWS Service │       │   R2 Bucket  │       │  End Users   │
│  generates   │──────▶│   (staging)  │──────▶│  download    │
│  report.pdf  │ $0.09 │              │  $0   │  report.pdf  │
│              │ /GB   │              │       │              │
└──────────────┘       └──────────────┘       └──────────────┘
                        ▲
┌──────────────┐       │
│ GCP Service  │───────┘
│  generates   │ $0.12/GB (one-time upload)
│  data.csv    │
└──────────────┘

You pay the source cloud's egress once, on the upload to R2; every subsequent download from R2 is free, so the savings compound with each repeat download.
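
A minimal Node.js sketch of the one-time hop, using the AWS SDK for both sides. The bucket names, object key, and environment variables here are assumptions.

// One-time copy: pay the source cloud's egress once, then serve from R2 free.
import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";

const aws = new S3Client({ region: "us-east-1" });
const r2 = new S3Client({
  region: "auto",
  endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY!,
    secretAccessKey: process.env.R2_SECRET_KEY!,
  },
});

// Hypothetical source bucket and key
const source = await aws.send(new GetObjectCommand({ Bucket: "aws-reports", Key: "report.pdf" }));
await r2.send(new PutObjectCommand({
  Bucket: "staging",
  Key: "report.pdf",
  Body: source.Body,                   // stream straight through, no temp file
  ContentType: source.ContentType,
  ContentLength: source.ContentLength, // required when Body is a stream
}));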

Pattern 3: Template and Artifact Caching#

For content that many users download (templates, packages, build artifacts, documentation PDFs), R2 eliminates per-download costs entirely.

| Scenario | S3 (per download) | R2 (per download) | Cost for 100 downloads |
|---|---|---|---|
| 10 MB template | $0.0009 | $0 | S3: $0.09, R2: $0 |
| 100 MB artifact | $0.009 | $0 | S3: $0.90, R2: $0 |
| 1 GB dataset | $0.09 | $0 | S3: $9.00, R2: $0 |
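
For this read-heavy pattern it can also pay to layer the Workers cache in front of R2, so hot artifacts are served from the edge without touching the bucket at all. A sketch follows; the ARTIFACTS binding and one-hour TTL are assumptions.

// Serve popular artifacts from the edge cache, falling back to R2 on miss.
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const cache = caches.default;
    const cached = await cache.match(request);
    if (cached) return cached;

    const key = new URL(request.url).pathname.slice(1);
    const object = await env.ARTIFACTS.get(key);
    if (!object) return new Response("Not found", { status: 404 });

    const headers = new Headers();
    object.writeHttpMetadata(headers);                // copy Content-Type etc. from R2
    headers.set("Cache-Control", "public, max-age=3600");

    const response = new Response(object.body, { headers });
    ctx.waitUntil(cache.put(request, response.clone())); // populate edge cache async
    return response;
  },
};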

Pattern 4: Presigned URLs for Direct Upload#

Let users upload files directly to R2 without routing through your Worker. Generate a presigned URL server-side, return it to the client, and the client uploads directly.

// Generate a presigned upload URL (requires an S3-compatible client)
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

export async function createUploadUrl(
  env: Env,
  userId: string,
  filename: string,
  contentType: string,
): Promise<Response> {
  const s3 = new S3Client({
    region: "auto",
    endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
    credentials: {
      accessKeyId: env.R2_ACCESS_KEY,
      secretAccessKey: env.R2_SECRET_KEY,
    },
  });

  const url = await getSignedUrl(
    s3,
    new PutObjectCommand({
      Bucket: "my-bucket",
      Key: `uploads/${userId}/${filename}`,
      ContentType: contentType,
    }),
    { expiresIn: 3600 }, // URL expires after one hour
  );

  return Response.json({ uploadUrl: url });
}
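
On the client, the returned URL is used as-is. A sketch, assuming a hypothetical /api/upload-url endpoint that returns the JSON above and a File object from an input element:

// Client-side: request a presigned URL, then PUT the file straight to R2.
const { uploadUrl } = await (await fetch("/api/upload-url")).json();
await fetch(uploadUrl, {
  method: "PUT",
  headers: { "Content-Type": file.type }, // must match the signed ContentType
  body: file,
});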

Pattern 5: Public Bucket with Custom Domain#

For fully public content (documentation, open datasets, public assets), configure R2 as a public bucket with a custom domain:

  1. Create an R2 bucket in the Cloudflare dashboard
  2. Enable public access under Settings > Public Access
  3. Add a custom domain (e.g., assets.example.com)
  4. Cloudflare handles SSL, caching, and DDoS protection automatically

Users access files at https://assets.example.com/path/to/file.pdf – served from Cloudflare’s CDN, zero egress.

R2 Configuration with Wrangler#

// wrangler.jsonc
{
  "r2_buckets": [
    {
      "binding": "ARTIFACTS",
      "bucket_name": "my-artifacts",
      "preview_bucket_name": "my-artifacts-preview"
    }
  ]
}

// TypeScript binding
export interface Env {
  ARTIFACTS: R2Bucket;
}

// Upload
await env.ARTIFACTS.put("key", data, {
  httpMetadata: { contentType: "application/json" },
  customMetadata: { uploadedBy: "worker-sync" },
});

// Download
const obj = await env.ARTIFACTS.get("key");

// List with prefix
const list = await env.ARTIFACTS.list({ prefix: "reports/", limit: 100 });
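
// list() is paginated (up to 1,000 keys per call); a sketch of walking
// every page using the cursor returned with each truncated result:
let cursor: string | undefined;
do {
  const page = await env.ARTIFACTS.list({ prefix: "reports/", cursor });
  // ...process page.objects...
  cursor = page.truncated ? page.cursor : undefined;
} while (cursor);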

// Delete
await env.ARTIFACTS.delete("key");

// Head (metadata only, no body download)
const head = await env.ARTIFACTS.head("key");

R2 vs S3: API Compatibility#

R2 implements the S3 API. Most S3 tools and SDKs work without changes. Switch the endpoint URL and credentials, keep everything else.

| Feature | S3 | R2 | Compatible? |
|---|---|---|---|
| GET/PUT/DELETE objects | Yes | Yes | Yes |
| Multipart upload | Yes | Yes | Yes |
| Presigned URLs | Yes | Yes | Yes |
| Bucket lifecycle policies | Yes | Yes | Yes |
| Object versioning | Yes | Yes (beta) | Mostly |
| S3 Select (query in place) | Yes | No | No |
| S3 event notifications | Yes | Event notifications (different API) | Partial |
| Cross-region replication | Yes | Not needed (global by default) | N/A |
| Glacier/archive tiers | Yes | No (single tier) | No |
| IAM policies | Yes | R2 API tokens | Different |
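
Multipart compatibility in practice, as a sketch: the AWS SDK's managed upload helper from @aws-sdk/lib-storage runs against R2 unchanged once the client points at the R2 endpoint. The s3 client (configured as in Pattern 4) and fileStream below are assumptions.

// Managed multipart upload against R2 via the standard AWS SDK helper.
import { Upload } from "@aws-sdk/lib-storage";

const upload = new Upload({
  client: s3, // an S3Client configured with the R2 endpoint (see Pattern 4)
  params: { Bucket: "my-bucket", Key: "big-file.bin", Body: fileStream },
  partSize: 10 * 1024 * 1024, // 10 MiB parts
});
upload.on("httpUploadProgress", (progress) => console.log(progress.loaded));
await upload.done();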

Migrating from S3 to R2#

Using rclone#

rclone is the standard tool for migrating between S3-compatible storage providers. It supports incremental sync, parallel transfers, and bandwidth limiting.

# Configure rclone for S3 source
rclone config create s3source s3 \
  provider=AWS \
  access_key_id=AKIA... \
  secret_access_key=... \
  region=us-east-1

# Configure rclone for R2 destination
rclone config create r2dest s3 \
  provider=Cloudflare \
  access_key_id=... \
  secret_access_key=... \
  endpoint=https://<ACCOUNT_ID>.r2.cloudflarestorage.com

# Sync entire bucket (incremental -- only copies new/changed files)
rclone sync s3source:my-bucket r2dest:my-bucket --progress

# Copy specific prefix
rclone copy s3source:my-bucket/reports/ r2dest:my-bucket/reports/ --progress

# Dry run first
rclone sync s3source:my-bucket r2dest:my-bucket --dry-run

Migration Checklist#

  1. Create R2 bucket via Cloudflare dashboard or Wrangler
  2. Generate R2 API token with read/write permissions
  3. Run rclone sync with --dry-run first to verify
  4. Update application code to use R2 endpoint (change endpoint URL, keep S3 SDK)
  5. Update DNS if using a custom domain for asset serving
  6. Verify downloads work from R2 before decommissioning S3 (a verification sketch follows this list)
  7. Monitor R2 metrics in Cloudflare dashboard for the first week
  8. Decommission S3 after confirming R2 serves all traffic correctly
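
A sketch of step 6, comparing source and destination inventories before cutover. The two clients are configured like the rclone remotes above (one for AWS, one pointed at the R2 endpoint); the bucket name is an assumption.

// Verify the migration: compare object keys and sizes in S3 vs R2.
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

async function inventory(client: S3Client, bucket: string): Promise<Map<string, number>> {
  const sizes = new Map<string, number>();
  let token: string | undefined;
  do {
    const page = await client.send(
      new ListObjectsV2Command({ Bucket: bucket, ContinuationToken: token }),
    );
    for (const obj of page.Contents ?? []) sizes.set(obj.Key!, obj.Size ?? 0);
    token = page.NextContinuationToken;
  } while (token);
  return sizes;
}

const src = await inventory(s3Source, "my-bucket"); // client for the S3 source
const dst = await inventory(r2Dest, "my-bucket");   // client for the R2 endpoint
for (const [key, size] of src) {
  if (dst.get(key) !== size) console.warn(`mismatch or missing: ${key}`);
}
console.log(`checked ${src.size} objects`);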

What Changes in Your Code#

// Before: S3
const s3 = new S3Client({
  region: "us-east-1",
  // uses default AWS credentials
});

// After: R2 (only endpoint and credentials change)
const s3 = new S3Client({
  region: "auto",
  endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
  credentials: {
    accessKeyId: env.R2_ACCESS_KEY,
    secretAccessKey: env.R2_SECRET_KEY,
  },
});

// All S3 commands (GetObject, PutObject, ListObjects) work unchanged

When NOT to Use R2#

R2 is not the right choice for every storage workload:

  • Low-latency random reads of small objects: Workers KV is faster for key-value lookups of small values (KV caps values at 25 MB). R2 is optimized for larger objects and sequential access.
  • Database workloads: R2 is object storage, not a database. Use D1, managed Postgres, or DynamoDB for structured queries.
  • Region-pinned compliance: R2 stores data globally by default. If you need data to stay in a specific region for GDPR or other compliance, verify R2’s data localization options or use a region-specific provider.
  • Archive/cold storage: R2 has a single storage tier at $0.015/GB/mo. S3 Glacier ($0.004/GB/mo) or Azure Archive ($0.002/GB/mo) are cheaper for data you rarely access. R2 Infrequent Access ($0.01/GB/mo) exists but is still more expensive than archive tiers.
  • Complex event-driven workflows: S3 event notifications trigger Lambda functions. R2 event notifications exist but the ecosystem of triggers and integrations is smaller.

Cloud Vendor Storage Comparison#

| Capability | Cloudflare R2 | AWS S3 | GCP Cloud Storage | Azure Blob |
|---|---|---|---|---|
| Egress cost | $0 | $0.09/GB | $0.12/GB | $0.087/GB |
| Storage cost | $0.015/GB/mo | $0.023/GB/mo | $0.020/GB/mo | $0.018/GB/mo |
| Free storage | 10 GB (permanent) | 5 GB (12 months) | 5 GB | 5 GB (12 months) |
| Archive tier | Infrequent Access ($0.01/GB) | Glacier ($0.004/GB) | Coldline ($0.004/GB) | Archive ($0.002/GB) |
| Regions | Global (auto) | 30+ regions (choose) | 35+ regions (choose) | 60+ regions (choose) |
| API | S3-compatible | Native S3 | GCS API (+ S3 interop) | Blob API |
| CDN integration | Built-in (Cloudflare CDN) | CloudFront (separate) | Cloud CDN (separate) | Azure CDN (separate) |
| Max object size | 5 TB | 5 TB | 5 TB | 190 TB (block blob) |
| Versioning | Beta | GA | GA | GA |
| Lifecycle rules | Yes | Yes | Yes | Yes |
| Encryption | At rest (AES-256) | At rest + KMS | At rest + CMEK | At rest + CMK |

The key differentiators: R2 wins on egress cost and built-in CDN integration. S3 wins on ecosystem maturity, archive tiers, and event-driven integrations. GCS and Azure fall in between on most dimensions.

For workloads where data is written once and read many times – static assets, documentation, packages, reports, public datasets – R2’s zero egress makes it the clear cost winner. For workloads requiring archive tiers, complex lifecycle policies, or deep cloud-native integrations, S3 remains the more capable option.