Free Stack Ledger

Track free compute, storage, and networking quotas so we stay within each provider's free-tier limits.

Quota Matrix

| Provider | Primitive | Free Tier | Usage Reset | Console | API Quota Check | Status | Notes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GitHub Actions | Compute | 2k–3k min/mo | monthly | github.com | REST billing | track | Use cache + short jobs |
| GitHub Pages | Static hosting | Free | n/a | repo settings | docs billing | stable | Use for public surfaces |
| Cloudflare Workers | Edge compute | 100k req/day | daily | dashboard | API usage | track | Use as gateway/auth |
| Cloudflare Pages | Static hosting | Free | n/a | dashboard | API usage | stable | Preview builds |
| Cloudflare R2 | Object storage | 10GB + free egress | monthly | dashboard | API usage | track | Artifacts + datasets |
| Supabase | Postgres | 500MB + free auth | monthly | console | API billing | track | Use for SQL surfaces |
| Fly.io | VMs | 3 shared VMs | monthly | dashboard | API usage | track | Stateful services |
| Hugging Face | Model storage | Free repos | n/a | hub | API billing | stable | Model artifacts |
| Vercel | Edge + hosting | Hobby tier | monthly | dashboard | API usage | track | Frontend previews |
| Netlify | Static hosting | Starter tier | monthly | dashboard | API usage | track | Static + previews |
| Render | Compute | Free web services | monthly | dashboard | API usage | track | Background tasks |
| Railway | Compute | Free credits | monthly | dashboard | API usage | track | Compute fallback |
| Neon | Postgres | Free DB | monthly | console | API usage | track | SQL fallback |
| Turso | SQLite edge | Free DB | monthly | console | API usage | track | Edge SQL |
| Upstash | Redis/Queue | Free requests | monthly | console | API usage | track | Queue + cache |
| Firebase | Auth/DB | Spark tier | monthly | console | API usage | track | Auth fallback |
| Deno Deploy | Edge compute | Free tier | monthly | dashboard | docs usage | track | Edge fallback |
| Cloudflare D1 | SQL edge | Free tier | monthly | dashboard | docs usage | track | Edge SQL layer |
| Cloudflare KV | KV store | Free tier | monthly | dashboard | docs usage | track | Edge config |
| Oracle Cloud | VMs | Always free | monthly | console | API usage | track | VM reserve |
| GCP Free Tier | Compute/Storage | Monthly credits | monthly | console | APIs billing | track | Secondary cloud |
| Azure Free | Compute/Storage | Monthly credits | monthly | portal | APIs billing | track | Secondary cloud |
| Cloudflare Queues | Queues | Free tier | monthly | dashboard | docs usage | track | Async buffer |
| Cloudflare Durable Objects | Stateful edge | Free tier | monthly | dashboard | docs usage | track | Consistency layer |
| Supabase Edge Functions | Edge compute | Free tier | monthly | console | docs billing | track | Edge fallback |
| Fly Machines | Compute | Free tier | monthly | dashboard | docs usage | track | Compute pool |
| Railway Cron | Scheduled jobs | Free credits | monthly | dashboard | docs usage | track | Job schedule |
| Wasmer Edge | Edge compute | Free tier | monthly | console | docs usage | track | Edge fallback |
| Koyeb | Compute | Free services | monthly | dashboard | docs usage | track | Compute fallback |
| Zeabur | Compute | Free credits | monthly | dashboard | docs usage | track | Compute fallback |
| Northflank | Compute | Free tier | monthly | dashboard | docs usage | track | Compute fallback |
| Uptime Kuma | Health checks | Self-hosted | n/a | repo | docs setup | note | Self-hosted monitor |
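The "API Quota Check" column for GitHub Actions can be scripted against the REST billing endpoint. A minimal sketch, assuming the billing response shape (`total_minutes_used`, `included_minutes`) and using an inlined sample payload instead of a live call:

```python
def actions_quota(billing: dict) -> dict:
    """Summarize GitHub Actions minutes from a billing-API-shaped payload."""
    used = billing["total_minutes_used"]
    included = billing["included_minutes"]
    pct = round(100 * used / included, 1) if included else 0.0
    return {"used": used, "included": included, "pct": pct}

# Sample payload mirroring GET /users/{user}/settings/billing/actions
sample = {"total_minutes_used": 2100, "included_minutes": 3000}
print(actions_quota(sample))  # {'used': 2100, 'included': 3000, 'pct': 70.0}
```

The same fraction feeds the 70%/90% warn/critical thresholds used elsewhere in this ledger.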

Live Map (Compose → Deploy → Observe)

Interactive view of pipeline stages. Toggle a stage to simulate failover and budget pressure.

Compose

Registry: healthy
Artifacts: healthy
Secrets: healthy

Deploy

Static: healthy
Edge: healthy
DNS: healthy

Observe

Health: healthy
Budget: healthy
Alerts: healthy

Compose

Registry + artifacts + secrets.

Deploy

Static + edge + DNS pools.

Observe

Health + budget + trace.

Last deploy: unknown · Last check: unknown
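The toggle behaviour can be modelled simply: a stage is healthy only while every one of its components is. A minimal sketch using the component names from the map (the function names are illustrative):

```python
STAGES = {
    "compose": {"registry": True, "artifacts": True, "secrets": True},
    "deploy":  {"static": True, "edge": True, "dns": True},
    "observe": {"health": True, "budget": True, "alerts": True},
}

def toggle(stage: str, component: str) -> None:
    """Flip one component's health, simulating a failure injection."""
    STAGES[stage][component] = not STAGES[stage][component]

def stage_status(stage: str) -> str:
    """A stage is healthy only if all its components are."""
    return "healthy" if all(STAGES[stage].values()) else "degraded"

toggle("deploy", "dns")
print(stage_status("deploy"))   # degraded
print(stage_status("compose"))  # healthy
```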

Registry View (Provider-Agnostic)

Rendered from a simple registry manifest so the UI stays provider-agnostic.


Serverless GPU Matrix (Free + Cheap + Opportunistic)

Filter by VRAM target, budget, and stability class. This view separates fully-free from trial-credit and spot-market pathways.

| Provider | Typical Path | GPU / VRAM | Mode | Stability | Entry Cost | Provision | Monitor |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Google Colab | Notebook burst | T4/L4/A100 variants | free | stable-ish | $0 | open notebook | session timer + GPU panel |
| Kaggle Notebooks | Dataset + notebook | T4/P100 class | free | stable-ish | $0 | new notebook | usage quota + notebook logs |
| Saturn Cloud | Monthly GPU hours | T4/A10 options | credits | stable | credit-gated | launch workspace | workspace meter + jobs |
| Vast.ai | Marketplace spot | A100/H100 up to 80GB | spot | spot | ~$0.50+/h | create instance | instance uptime + preemptions |
| RunPod | Serverless + pods | A100 80GB/H100 options | | stable | ~$0.90+/h | launch pod | pod stats + endpoint health |
| TensorDock | Marketplace VM | Consumer + datacenter mix | spot | spot | ~$0.20+/h | find host | host availability + ping |
| Thunder Compute | On-demand GPU cloud | A100 80GB | | stable | ~$0.78/h | request node | GPU util + storage meter |
| Lambda Labs | AI cloud instances | A100/H100 options | | stable | ~$1.20+/h | create instance | instance + storage + network |
| Google Cloud (credits) | Trial/startup credits | A100/H100 in regions | credits | stable | credit-backed | provision VM | Cloud Monitoring + billing alerts |
| Azure (credits) | Trial/startup credit path | A100 80GB options | credits | stable | credit-backed | create VM | Azure Monitor + cost alerts |
| Oracle Cloud (trial) | Trial tenancy path | GPU shapes by region | credits | stable | credit-backed | launch instance | OCI metrics + budget alarms |
| JarvisLabs | Burst rental | High-memory GPU options | | stable | ~$0.39+/h | start machine | runtime + utilization |

Reality check: “fully free + consistently available + 64GB+ VRAM” is rare. Practical path is free notebooks → marketplace spot → credit-backed high-memory nodes.
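The budget arithmetic behind that path is straightforward. A sketch estimating run cost at the approximate marketplace rates listed in the matrix (treat the rates as illustrative, not quoted prices):

```python
def run_cost(hourly_rate: float, gpu_hours: float) -> float:
    """Estimated cost of a training run at a given hourly rate."""
    return round(hourly_rate * gpu_hours, 2)

# A 20 GPU-hour run at the approximate rates from the matrix above
for name, rate in [("vast.ai spot", 0.50), ("RunPod", 0.90), ("Lambda Labs", 1.20)]:
    print(name, run_cost(rate, 20))
```

This is why the order matters: burn the free notebook hours first, then spot, and hold credit-backed nodes for runs that actually need stable high-memory GPUs.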

Worked Tutorial Notebook: ML on 64GB+ Serverless GPU

Use this as a reproducible lab sequence. It is structured as request → provision → validate → train → monitor → checkpoint → teardown.

Notebook Cells

```shell
# 0) env bootstrap
python -m venv .venv
source .venv/bin/activate
pip install -U pip torch torchvision torchaudio pynvml wandb datasets transformers accelerate

# 1) verify GPU + VRAM
python - <<'PY'
import torch
print("cuda:", torch.cuda.is_available())
if torch.cuda.is_available():
    p = torch.cuda.get_device_properties(0)
    print("name:", p.name)
    print("vram_gb:", round(p.total_memory / (1024**3), 2))
PY

# 2) train skeleton
accelerate launch train.py --batch-size 2 --gradient-accumulation-steps 16 --mixed-precision bf16

# 3) checkpoint + artifact
tar -czf run-artifacts.tgz checkpoints logs metrics.json
```
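Before committing to a long run, gate on the VRAM target from cell 1. A sketch of that check as a reusable guard (the 64 GB threshold matches this section's target; `total_memory_bytes` is injected so the logic runs without a GPU):

```python
def meets_vram_target(total_memory_bytes: int, target_gb: float = 64.0) -> bool:
    """True if the device reports at least target_gb of VRAM."""
    return total_memory_bytes / (1024 ** 3) >= target_gb

# An 80 GB A100 passes; a 16 GB T4 does not.
print(meets_vram_target(80 * 1024 ** 3))  # True
print(meets_vram_target(16 * 1024 ** 3))  # False
```

On a real node you would pass `torch.cuda.get_device_properties(0).total_memory` and abort the run if the guard fails, before any data download starts.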

Provision + Monitoring Checklist

Pipeline Loop Simulator

Simulated stage cycle (seconds-scale) to mirror real run cadence.

Acquire Node: 0%
Bootstrap Env: 0%
Load Data: 0%
Train + Eval: 0%
Checkpoint + Upload: 0%
State: idle
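The stage cycle above can be sketched as a loop that advances each stage to 100% before moving to the next (stage names from the simulator; timing is collapsed to discrete ticks for illustration):

```python
STAGES = ["acquire node", "bootstrap env", "load data",
          "train + eval", "checkpoint + upload"]

def run_pipeline(step: int = 25):
    """Yield (stage, percent) ticks, mirroring the simulated cadence."""
    for stage in STAGES:
        for pct in range(step, 101, step):
            yield stage, pct

ticks = list(run_pipeline())
print(ticks[0], ticks[-1])  # ('acquire node', 25) ('checkpoint + upload', 100)
```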

Diversify

Live Demo

Adjust usage to see threshold alerts. This is a local demo that models quota drift.

Provider: GitHub Actions
Quota: 3000 min/mo
Usage: 0
Status: track
Thresholds: 70% warn · 90% critical
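The 70%/90% thresholds can be sketched as a classifier (the status labels mirror the demo; the function name is illustrative):

```python
def quota_status(used: float, quota: float,
                 warn: float = 0.70, critical: float = 0.90) -> str:
    """Classify usage against warn/critical fractions of quota."""
    frac = used / quota
    if frac >= critical:
        return "critical"
    if frac >= warn:
        return "warn"
    return "track"

# GitHub Actions demo values: 3000 min/mo quota
print(quota_status(0, 3000))     # track
print(quota_status(2200, 3000))  # warn
print(quota_status(2800, 3000))  # critical
```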

Failover Architecture (DNS + L7)

Providers are abstracted behind capability contracts. DNS handles coarse failover; L7 routing handles per-route steering and retries.

DNS pool (coarse)
clients → primary DNS answer → edge A → static/API
    ↳ if unhealthy or over-budget → DNS promotes edge B/C

L7 pool (fine)
edge router → /static/** → cf-pages → gh-pages → netlify
    → /api/** → cf-workers → netlify-edge → vercel-edge
    → /assets/** → r2 → gh-releases → b2

Policies: health-first routing, single retry on 5xx/timeout, budget-aware shift at 70%/90% quota.
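That policy reduces to: prefer the first healthy, under-budget provider in the ordered pool, and fall back to any healthy one once everything is past the soft budget line. A minimal sketch (provider names from the L7 pool above; `health` and `budget_frac` are assumed inputs from the adapters):

```python
def pick_provider(pool, health, budget_frac, shift_at=0.70):
    """Health-first, budget-aware selection over an ordered provider pool."""
    # Pass 1: healthy and under the soft budget threshold.
    for name in pool:
        if health.get(name) and budget_frac.get(name, 0.0) < shift_at:
            return name
    # Pass 2: any healthy provider, even over the soft budget line.
    for name in pool:
        if health.get(name):
            return name
    return None

pool = ["cf-workers", "netlify-edge", "vercel-edge"]
health = {"cf-workers": True, "netlify-edge": True, "vercel-edge": True}
budget = {"cf-workers": 0.75, "netlify-edge": 0.10}
print(pick_provider(pool, health, budget))  # netlify-edge
```

The single retry on 5xx/timeout then just re-runs the selection with the failed provider marked unhealthy.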

Provider Pools (3+ per Layer)

| Layer | Primary | Secondary | Tertiary | Quaternary |
| --- | --- | --- | --- | --- |
| DNS | Cloudflare | HE DNS | Porkbun | Dynu |
| Edge / L7 | Cloudflare Workers | Netlify Edge | Vercel Edge | Fastly Edge |
| Static | Cloudflare Pages | GitHub Pages | Netlify | Vercel |
| Object | Cloudflare R2 | GitHub Releases | Backblaze B2 | Supabase Storage |
| CI / Build | GitHub Actions | GitLab CI | CircleCI | SourceHut |
| Health | UptimeRobot | Better Stack | Grafana Cloud | Sentry |

Provider-Abstraction Contract

service-registry.yaml
services:
  plates:
    route: /plates/**
    capability: static
    dns_pool: [cf-dns, he-dns, porkbun-dns, dynu-dns]
    l7_pool: [cf-pages, gh-pages, netlify, vercel]
  api:
    route: /api/**
    capability: edge
    dns_pool: [cf-dns, he-dns, porkbun-dns, dynu-dns]
    l7_pool: [cf-workers, netlify-edge, vercel-edge, fastly-edge]
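Route resolution over the manifest can be sketched as a glob-style match from `route` to `l7_pool`. A sketch with the manifest inlined as a dict to keep it dependency-free (real code would load the YAML; `**` is treated as a plain `*` glob here, which real routers handle differently):

```python
from fnmatch import fnmatch

REGISTRY = {
    "plates": {"route": "/plates/**",
               "l7_pool": ["cf-pages", "gh-pages", "netlify", "vercel"]},
    "api":    {"route": "/api/**",
               "l7_pool": ["cf-workers", "netlify-edge", "vercel-edge", "fastly-edge"]},
}

def resolve(path: str) -> list:
    """Return the ordered L7 pool for the first service whose route matches."""
    for svc in REGISTRY.values():
        if fnmatch(path, svc["route"].replace("**", "*")):
            return svc["l7_pool"]
    return []

print(resolve("/api/v1/health")[0])  # cf-workers
```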

Adapters implement: provision, status, route, budget. Failover is driven by health checks and quota thresholds.
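The four-method contract can be sketched as an abstract adapter (method names from the sentence above; the return shapes are assumptions):

```python
from abc import ABC, abstractmethod

class ProviderAdapter(ABC):
    """Capability contract every provider adapter implements."""

    @abstractmethod
    def provision(self, spec: dict) -> str: ...  # returns a deployment id

    @abstractmethod
    def status(self) -> str: ...                 # "healthy" / "degraded"

    @abstractmethod
    def route(self, path: str) -> str: ...       # upstream URL for a path

    @abstractmethod
    def budget(self) -> float: ...               # fraction of quota used

class DummyAdapter(ProviderAdapter):
    """Trivial stand-in showing the contract; not a real provider."""
    def provision(self, spec): return "dummy-1"
    def status(self): return "healthy"
    def route(self, path): return f"https://example.invalid{path}"
    def budget(self): return 0.05

a = DummyAdapter()
print(a.status(), a.budget())  # healthy 0.05
```

The failover driver only ever talks to this interface, so swapping providers is a registry change, not a code change.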

Policies

Next Actions