
What Managing Secrets Actually Looks Like With Four Deployment Platforms

guides · env-vars · workflow · vercel · convex · railway · supabase

Here's a stack that isn't unusual: Vercel for the Next.js frontend, Convex for the backend, Railway for a background worker, and Supabase for auth and storage. Four platforms, each with its own dashboard, its own concept of environments, and its own API for managing secrets.

You have maybe fifteen environment variables. Some are shared across all four platforms. Some only matter for one or two. And you need test values for development and live values for production.

This is the post where we stop hand-waving about "just use a secrets manager" and walk through what it actually looks like to set this up, maintain it, and not lose your mind.

The part nobody warns you about: environments aren't universal

Every platform has its own environment model, and none of them agree:

  • Vercel has three environments: development, preview, and production. Preview is tied to branch deploys.
  • Convex scopes secrets per deployment. You have a dev deployment and a prod deployment, each with its own set.
  • Railway has production and staging as environment labels, applied per service.
  • Supabase has one environment per project. You create separate projects for dev and prod.

When someone says "push secrets to staging," that maps to different things on each platform. And if you're managing this manually, you need a mental lookup table for every operation.

The environment mapping concept in dotenvy exists specifically to collapse this. You define two local environments -- test and live -- and map them to whatever each platform calls its equivalent:

targets:
  vercel:
    type: vercel
    project: my-app
    mapping:
      development: test
      preview: test
      production: live
  convex-dev:
    type: convex
    deployment: my-app-dev
    mapping:
      default: test
  convex-prod:
    type: convex
    deployment: my-app-prod
    mapping:
      default: live
  railway:
    type: railway
    project: abc123
    mapping:
      staging: test
      production: live
  supabase:
    type: supabase
    project_ref: xyzdev
    mapping:
      default: test

Now dotenvy sync test pushes your .env.test values to Vercel development, Vercel preview, Convex dev, Railway staging, and Supabase dev -- all at once. One command, one mental model, five remote environments updated. dotenvy sync live does the same for production.
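Conceptually, the fan-out is a reverse lookup over the mapping tables. Here's a small sketch of that resolution step (hypothetical code, not dotenvy's implementation -- only the target/mapping shapes from the config above are taken as given):

```python
# Mirror of the targets config above: each target maps its remote
# environment names to a local environment (test or live).
TARGETS = {
    "vercel":      {"mapping": {"development": "test", "preview": "test", "production": "live"}},
    "convex-dev":  {"mapping": {"default": "test"}},
    "convex-prod": {"mapping": {"default": "live"}},
    "railway":     {"mapping": {"staging": "test", "production": "live"}},
    "supabase":    {"mapping": {"default": "test"}},
}

def resolve(local_env: str) -> list[tuple[str, str]]:
    """Return every (target, remote_env) pair mapped to local_env."""
    return [
        (name, remote)
        for name, cfg in TARGETS.items()
        for remote, local in cfg["mapping"].items()
        if local == local_env
    ]

print(resolve("test"))   # five remote environments receive .env.test values
print(resolve("live"))   # three remote environments receive .env.live values
```

One local name, resolved to however many remote environments the platforms happen to use.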

Bootstrapping from an existing project

If you already have a .env file from an existing project, you don't need to set this up from scratch. The init command can scan your existing file and figure out which platforms you're using:

dotenvy init --from .env.local

This does three things: it reads the key names in your file and detects providers based on naming patterns (NEXT_PUBLIC_CONVEX_URL implies Convex, NEXT_PUBLIC_SUPABASE_URL implies Supabase, VERCEL_ prefixed vars imply Vercel), it copies all the values into a new .env.test file, and it pre-selects the detected providers in the interactive setup flow.

You still walk through the guided setup to fill in project IDs and deployment names, but the detection step saves you from listing out every platform manually. For a project with a dozen keys spread across four providers, it shaves off the most tedious part of onboarding.
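The detection itself is simple key-name matching. A minimal sketch of the logic, using only the three rules named above (the real detector presumably covers more providers and patterns):

```python
def detect_providers(keys: list[str]) -> set[str]:
    """Guess providers from env var names, per the rules described above."""
    providers = set()
    if "NEXT_PUBLIC_CONVEX_URL" in keys:
        providers.add("convex")          # NEXT_PUBLIC_CONVEX_URL implies Convex
    if "NEXT_PUBLIC_SUPABASE_URL" in keys:
        providers.add("supabase")        # NEXT_PUBLIC_SUPABASE_URL implies Supabase
    if any(k.startswith("VERCEL_") for k in keys):
        providers.add("vercel")          # VERCEL_-prefixed vars imply Vercel
    return providers

keys = ["NEXT_PUBLIC_CONVEX_URL", "VERCEL_ENV", "STRIPE_SECRET_KEY"]
print(sorted(detect_providers(keys)))    # → ['convex', 'vercel']
```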

Not every secret belongs everywhere

This is a subtlety that bites teams once their stack gets complex enough. Your STRIPE_SECRET_KEY needs to be on Vercel and Railway (where payment processing happens), but it has no business being in Convex or Supabase. Your CONVEX_DEPLOY_KEY should only go to Convex. Your NEXT_PUBLIC_POSTHOG_KEY is a frontend-only variable that only Vercel needs.

Pushing every secret to every platform is wasteful and widens your attack surface. If a provider is compromised, the blast radius is limited to the secrets that provider actually held -- so each one should hold only what it needs.

dotenvy supports include and exclude patterns per target using glob syntax:

targets:
  vercel:
    type: vercel
    project: my-app
    include:
      - "STRIPE_*"
      - "NEXT_PUBLIC_*"
      - "DATABASE_URL"
    mapping:
      development: test
      preview: test
      production: live
  convex-dev:
    type: convex
    deployment: my-app-dev
    exclude:
      - "STRIPE_*"
      - "NEXT_PUBLIC_*"
    mapping:
      default: test
  railway:
    type: railway
    project: abc123
    include:
      - "STRIPE_*"
      - "DATABASE_URL"
      - "REDIS_URL"
    mapping:
      staging: test
      production: live

When you run dotenvy sync test, each target only receives the secrets that match its filter. The sync engine applies the patterns before making any API calls, so you get a clean diff and fewer unnecessary writes.
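The filtering semantics sketched here are an assumption (include as a whitelist when present, exclude subtracted afterward), but standard glob matching makes the mechanics easy to see:

```python
from fnmatch import fnmatch

def filter_secrets(names, include=None, exclude=None):
    """Assumed semantics: include whitelists first, then exclude subtracts."""
    if include is not None:
        names = [n for n in names if any(fnmatch(n, p) for p in include)]
    if exclude is not None:
        names = [n for n in names if not any(fnmatch(n, p) for p in exclude)]
    return names

secrets = ["STRIPE_SECRET_KEY", "NEXT_PUBLIC_POSTHOG_KEY", "DATABASE_URL", "RESEND_API_KEY"]

# vercel target from the config above
print(filter_secrets(secrets, include=["STRIPE_*", "NEXT_PUBLIC_*", "DATABASE_URL"]))
# → ['STRIPE_SECRET_KEY', 'NEXT_PUBLIC_POSTHOG_KEY', 'DATABASE_URL']

# convex-dev target from the config above
print(filter_secrets(secrets, exclude=["STRIPE_*", "NEXT_PUBLIC_*"]))
# → ['DATABASE_URL', 'RESEND_API_KEY']
```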

The dry-run as a sanity check

Before pushing anything, you can preview exactly what would change on every platform:

dotenvy sync test --dry-run

The output shows you a per-target breakdown with symbols for each secret:

Checking authentication...
  ✓ vercel
  ✓ convex-dev
  ✓ railway
  ✓ supabase

Source: .env.test
Environment: test

vercel → my-app/development
  + NEXT_PUBLIC_POSTHOG_KEY (new)
  ~ STRIPE_SECRET_KEY (changed)
  = 3 unchanged

convex-dev → my-app-dev/default
  + RESEND_API_KEY (new)
  = 4 unchanged

railway → abc123/staging
  = 3 unchanged

supabase → xyzdev/default
  ? SUPABASE_SERVICE_KEY (unknown)
  ? DATABASE_URL (unknown)

A few things to notice here. The + means a secret exists locally but not on the remote -- it will be added. The ~ means the local and remote values differ -- it will be updated. The = means they match -- it will be skipped. No unnecessary writes.

And then there's ? -- unknown. This is how dotenvy handles platforms that can't read secrets back through their API. Supabase and Fly.io are write-only: you can push values to them, but their APIs don't let you retrieve them. Instead of pretending to know the state, the diff shows unknown. The secret will be written on sync regardless, because there's no way to confirm whether it matches.

This is a small detail, but it matters. A tool that showed everything as "synced" when it can't actually verify would give you false confidence. Surfacing the unknown state makes the limitation visible so you can account for it.
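The classification behind those four symbols can be sketched in a few lines. This is a hypothetical helper, not dotenvy's code -- the key assumption is that a write-only provider's remote state is represented as unreadable (`None` here) rather than empty:

```python
def diff_status(name, local, remote):
    """Classify one secret: remote is a dict of name→value, or None
    when the provider's API can't read secrets back."""
    if remote is None:
        return "?"   # write-only provider: state unknown, always written
    if name not in remote:
        return "+"   # exists locally but not remotely: will be added
    if remote[name] != local[name]:
        return "~"   # values differ: will be updated
    return "="       # in sync: skipped, no unnecessary write

local = {"STRIPE_SECRET_KEY": "sk_test_new", "DATABASE_URL": "postgres://db"}
remote = {"STRIPE_SECRET_KEY": "sk_test_old"}

print({k: diff_status(k, local, remote) for k in local})
# → {'STRIPE_SECRET_KEY': '~', 'DATABASE_URL': '+'}
print({k: diff_status(k, local, None) for k in local})
# → {'STRIPE_SECRET_KEY': '?', 'DATABASE_URL': '?'}
```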

The one-command shortcut

For the common case of adding a new secret and immediately deploying it everywhere, there's set:

dotenvy set RESEND_API_KEY=re_test_abc123

This does four things in one shot: adds RESEND_API_KEY to your schema in dotenvy.yaml (if it's not already there), writes the value to .env.test, syncs it to all targets mapped to the test environment, and (if you have an API key configured) logs the event to the audit trail.

For production:

dotenvy set RESEND_API_KEY=re_live_xyz789 --env live

Same flow, but writes to .env.live and pushes to production targets.

The alternative workflow -- editing the YAML, editing the .env file, then running sync -- works too. But set is what you reach for when you're in the middle of integrating a new service and want to get the key deployed without switching contexts.

Pulling secrets the other direction

Sometimes you need to go the other way. A teammate added a new secret directly in the Vercel dashboard, or you're setting up a new machine and need to bootstrap your local files from what's already deployed.

dotenvy pull vercel --env production -o .env.live

This fetches all tracked secrets from Vercel's production environment and writes them to your .env.live file. If there are secrets on the remote that aren't in your schema, dotenvy warns you about them and offers to add them. This auto-discovery means your config stays current even when secrets are added outside the normal workflow.

For write-only providers like Supabase, pull isn't available (there's nothing to read). But for Vercel, Convex, Railway, Render, and Netlify, it works both directions.

What the audit trail actually captures

If you configure an API key in dotenvy.yaml, every sync, set, and pull operation gets logged to the dotenvy dashboard:

api_key: dvy_proj_abc123

The CLI sends a lightweight event after each operation: which action was performed, which environment, which target, which secret names were involved, and who ran it (from the $USER environment variable). No secret values are transmitted -- only names and metadata.

This is fire-and-forget. The HTTP call runs with a short timeout and never blocks the CLI. If the dashboard is down or unreachable, the sync still completes normally. The audit trail is additive: it gives you visibility into who changed what and when, without creating a dependency that could break your workflow.
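A fire-and-forget event sender like this is straightforward to sketch. The endpoint URL and payload shape below are assumptions for illustration; what's grounded in the post is the important part -- names and metadata only, a short timeout, and errors swallowed so the CLI never blocks:

```python
import json
import os
import urllib.request

def log_event(action, environment, target, secret_names, api_key,
              endpoint="https://dotenvy.dev/api/events"):  # assumed URL
    """Best-effort audit event: returns False on any failure, never raises."""
    payload = json.dumps({
        "action": action,
        "environment": environment,
        "target": target,
        "secrets": secret_names,                  # names only, never values
        "actor": os.environ.get("USER", "unknown"),
    }).encode()
    req = urllib.request.Request(
        endpoint, data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    try:
        urllib.request.urlopen(req, timeout=2)    # short timeout, no retries
        return True
    except Exception:
        return False   # dashboard down or unreachable: sync proceeds anyway
```

The caller ignores the return value in the happy path; it only matters if you want to surface "audit event not delivered" as a warning.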

A realistic daily workflow

Here's what the day-to-day actually looks like once everything is configured:

Adding a new integration:

# Get the API key from the provider's dashboard
dotenvy set LOOPS_API_KEY=loops_test_xxx
dotenvy set LOOPS_API_KEY=loops_live_xxx --env live

Rotating a compromised key:

# Generate new key in Stripe dashboard
dotenvy set STRIPE_SECRET_KEY=sk_test_new --env test
dotenvy set STRIPE_SECRET_KEY=sk_live_new --env live
# Revoke old key in Stripe dashboard

Checking for drift before a deploy:

dotenvy sync live --dry-run

Onboarding a new developer:

# New dev clones repo, installs dotenvy, then:
dotenvy pull vercel --env development -o .env.test

Seeing what's configured:

dotenvy status

This shows all secrets in the schema, every target with its auth status and environment mapping, and any include/exclude filters. If auth is broken for a provider, it tells you which environment variable to set.

The trust model tradeoff

dotenvy has no server that stores your secrets. Your .env.test and .env.live files are on your machine, and sync pushes values directly to each platform's API over HTTPS. The dotenvy.yaml config is safe to commit -- it contains secret names, project identifiers, and environment mappings, but never values.

This means there's no central breach target. An attacker would need to compromise your machine or one of the individual platforms, both of which are already in your threat model. The tradeoff is that there's no cloud backup of your secrets: if you lose your .env.live file and haven't pulled recently, you need to reconstruct it from the individual dashboards.

For most teams, that tradeoff is worth it. The platforms themselves are the authoritative store. Your local files are a working copy.

Where this breaks down

No tool solves every problem, and it's worth being honest about the edges:

  • Secrets that differ per platform by design (e.g., a WEBHOOK_URL that's different on Vercel vs Railway) don't fit the "one value synced everywhere" model. You'd handle these as separate secrets or manage them outside the sync workflow.
  • Write-only providers mean you can't verify state without logging into the dashboard. The unknown status in diffs is honest, but it means you're trusting that the sync succeeded.
  • Team coordination still requires communication. If two developers both set different values for the same key, the last sync wins. The audit trail shows what happened, but it doesn't prevent conflicts.
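For the first case, one workaround is to split the per-platform value into distinct keys and scope each with an include filter. A hypothetical sketch -- the VERCEL_WEBHOOK_URL and RAILWAY_WEBHOOK_URL names are invented for illustration:

```yaml
targets:
  vercel:
    type: vercel
    project: my-app
    include:
      - "NEXT_PUBLIC_*"
      - "VERCEL_WEBHOOK_URL"    # platform-specific key, Vercel only
  railway:
    type: railway
    project: abc123
    include:
      - "RAILWAY_WEBHOOK_URL"   # platform-specific key, Railway only
```

Each platform then reads its own key, and both values still live in the same .env files and sync through the same workflow -- at the cost of the application code knowing which key to look up.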

These are real limitations. For most projects -- especially early and mid-stage teams managing secrets across a handful of platforms -- the workflow described here handles the common cases well.

curl -fsSL https://dotenvy.dev/install.sh | sh