Migrating from Fly.io to AWS gives you access to the full AWS ecosystem, better regional availability, and predictable pricing. LocalOps provides a similar developer experience with git-push deployments and automatic scaling.
White-glove migration: Our engineers will migrate your Fly.io app to AWS. Schedule a migration call and we’ll handle everything for you.

What you get after migration

  • Same developer experience: Push to deploy, just like Fly.io
  • Predictable costs: No surprise bills—AWS pricing you control
  • Full AWS access: Use any AWS service (RDS, ElastiCache, S3, SQS, etc.)
  • Production-ready: Auto-scaling, auto-healing, monitoring, and CI/CD out of the box
  • Built-in observability: Open-source stack with Prometheus, Loki, and Grafana—no extra cost
  • No vendor lock-in: Your code runs on standard Kubernetes in your own AWS account

Migration overview

Set up LocalOps environment

Connect your AWS account and create a new environment for your app.

Deploy your application

Connect your GitHub repo and deploy your app to LocalOps.

Migrate Postgres database

Export your Fly Postgres data and import it into Amazon RDS.

Update DNS and go live

Point your domain to the new environment and verify everything works.
Need help? Schedule a migration call and our engineers will assist you through the entire process.

Step 1: Set up LocalOps environment

Before migrating, you need a LocalOps environment running on AWS.

Create LocalOps account

Sign up for LocalOps if you haven’t already.

Connect AWS account

Follow the AWS connection guide to connect your AWS account.

Create environment

Create a new environment (e.g., production) in your preferred AWS region. See Create new environment.

Create service

Create a new service and connect your GitHub repository. See Create new service.
Once your environment is ready, note down the VPC ID and Private Subnet IDs from the environment overview page. You’ll need these to create your RDS database.

Step 2: Prepare your application

Dockerfile-based apps

If you have a Dockerfile, LocalOps will use it automatically. No changes needed.

fly.toml configuration

Your fly.toml configuration maps to LocalOps concepts:
fly.toml          LocalOps Equivalent
[http_service]    Web service
[processes]       Workers
[[services]]      Internal service
[env]             Environment variables in service settings
[[vm]] size       Resource configuration in service settings

Health checks

Convert Fly.io health checks to LocalOps format in ops.json:
{
  "healthchecks": [
    {
      "type": "http",
      "path": "/health",
      "port": 8080,
      "interval": 30,
      "timeout": 5
    }
  ]
}
See ops.json documentation for all health check options.

Step 3: Migrate Fly Postgres to Amazon RDS

3.1 Create a backup of your Fly Postgres database

Use flyctl to connect to your Postgres cluster and create a backup:
# Connect to your Fly Postgres app
flyctl proxy 5432 -a your-postgres-app-name

# In another terminal, create a backup
pg_dump "postgres://postgres:your-password@localhost:5432/your-database" \
  --format=custom \
  --no-owner \
  --no-acl \
  --file=fly_backup.dump
Alternatively, use flyctl ssh console to access the database directly:
# SSH into the Postgres machine
flyctl ssh console -a your-postgres-app-name

# Create backup inside the machine
pg_dump -U postgres -d your-database -F c -f /tmp/backup.dump

# Copy backup to local machine (from another terminal)
flyctl sftp get /tmp/backup.dump -a your-postgres-app-name
For large databases, the backup and restore process may take significant time. Plan for a maintenance window if you need zero data loss during migration.

3.2 Create RDS database in your LocalOps environment VPC

Option A: Automatic RDS creation via ops.json

Add an ops.json file to the root of your repository:
{
  "dependencies": {
    "rds": {
      "instances": [
        {
          "id": "main-db",
          "prefix": "myapp",
          "engine": "postgres",
          "version": "16.4",
          "storage_gb": 20,
          "instance_type": "db.t4g.small",
          "publicly_accessible": false,
          "exports": {
            "DATABASE_HOST": "$address",
            "DATABASE_NAME": "$dbName",
            "DATABASE_USER": "$username",
            "DATABASE_PASSWORD_ARN": "$passwordArn"
          }
        }
      ]
    }
  }
}
Deploy your service to provision the RDS instance automatically. See RDS documentation for all configuration options.

Option B: Manual RDS creation

  1. Login to AWS Console in the same region as your LocalOps environment
  2. Create a DB Subnet Group:
    • Navigate to RDS > Subnet groups > Create DB subnet group
    • Select the VPC ID from your LocalOps environment
    • Add the private subnet IDs from your LocalOps environment
  3. Create RDS Instance:
    • Navigate to RDS > Create database
    • Choose PostgreSQL and select the same major version as your Fly database
    • Select the DB subnet group you created
    • Set Publicly accessible to No
    • Create a security group allowing inbound port 5432 from your environment's VPC CIDR (for example, 10.0.0.0/16)

3.3 Restore backup to Amazon RDS

Because the RDS instance is not publicly accessible, run the restore from a machine inside the same VPC, such as a temporary EC2 instance:
# From an EC2 instance in the same VPC
sudo dnf install postgresql16 -y  # Amazon Linux 2023

# Upload backup
scp fly_backup.dump ec2-user@your-ec2-ip:~/

# Restore
pg_restore --verbose --no-owner --no-acl \
  -h your-rds-endpoint.rds.amazonaws.com \
  -U your-db-username \
  -d your-database-name \
  fly_backup.dump

3.4 Configure database credentials

If you manually created the RDS instance, add credentials as secrets:
  1. Navigate to your service in the LocalOps console
  2. Go to Settings > Secrets
  3. Add the following secrets:
DATABASE_HOST=your-rds-endpoint.rds.amazonaws.com
DATABASE_NAME=your-database-name
DATABASE_USER=your-db-username
DATABASE_PASSWORD=your-db-password
If you used ops.json to create RDS, the password is stored in AWS Secrets Manager. Use the AWS SDK to retrieve it using the DATABASE_PASSWORD_ARN environment variable.
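A hedged sketch of that retrieval in Node, using AWS SDK for JavaScript v3 (`@aws-sdk/client-secrets-manager`). The exact shape of the secret's value is an assumption: RDS-managed secrets store JSON like `{"username": ..., "password": ...}`, but check yours and adjust `parsePassword` to match:

```javascript
// Sketch: fetch the DB password at startup using the ARN exported by ops.json.
// Requires: npm install @aws-sdk/client-secrets-manager

// The secret may be a plain string or a JSON object such as
// {"username": "...", "password": "..."}. Adjust to what you find
// in Secrets Manager for your instance.
function parsePassword(secretString) {
  try {
    const parsed = JSON.parse(secretString);
    return parsed.password ?? secretString;
  } catch {
    return secretString; // plain-text secret
  }
}

async function getDatabasePassword() {
  const { SecretsManagerClient, GetSecretValueCommand } =
    require('@aws-sdk/client-secrets-manager');
  const client = new SecretsManagerClient({});
  const out = await client.send(
    new GetSecretValueCommand({ SecretId: process.env.DATABASE_PASSWORD_ARN })
  );
  return parsePassword(out.SecretString);
}
```

Call `getDatabasePassword()` once during startup and cache the result rather than fetching on every connection.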

Step 4: Migrate Fly Redis (Upstash) to ElastiCache

If you’re using Upstash Redis on Fly.io, migrate to Amazon ElastiCache:
{
  "dependencies": {
    "elasticache": {
      "clusters": [
        {
          "id": "cache",
          "prefix": "myapp",
          "engine": "redis",
          "node_type": "cache.t4g.micro",
          "exports": {
            "REDIS_HOST": "$endpoint",
            "REDIS_PORT": "$port"
          }
        }
      ]
    }
  }
}
Update your connection code:
// Before (Upstash on Fly.io)
import { Redis } from '@upstash/redis';
const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL,
  token: process.env.UPSTASH_REDIS_REST_TOKEN,
});

// After (ElastiCache)
import Redis from 'ioredis';
const redis = new Redis({
  host: process.env.REDIS_HOST,
  port: Number(process.env.REDIS_PORT), // env vars are strings; coerce to a number
});
See ElastiCache documentation for more details.

Step 5: Migrate environment variables and secrets

Export secrets from Fly.io and add them to LocalOps:
# List all secrets (names only)
flyctl secrets list -a your-app-name
For each secret:
  1. Get the value from your local environment or source
  2. In LocalOps console, navigate to your service > Settings > Secrets
  3. Add each variable
See Secrets documentation for more details.
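Because `flyctl secrets list` shows only digests, values must come from your own records, and it is easy to miss one. A small startup check can fail fast if a secret was not migrated (a sketch; the variable names follow this guide, so adjust the list to your app):

```javascript
// Fail fast at boot if a migrated secret was not configured in LocalOps.
const REQUIRED_VARS = [
  'DATABASE_HOST',
  'DATABASE_NAME',
  'DATABASE_USER',
  // plus 'DATABASE_PASSWORD' or 'DATABASE_PASSWORD_ARN', depending on Step 3
];

function assertEnv(env = process.env) {
  const missing = REQUIRED_VARS.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
}

// Call early in your entrypoint:
// assertEnv();
```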

Step 6: Update your application

Update your application to use the new environment variables:
// Before (Fly.io DATABASE_URL)
const connectionString = process.env.DATABASE_URL;

// After (LocalOps): read the password from the DATABASE_PASSWORD secret,
// or fetch it via DATABASE_PASSWORD_ARN if RDS was created through ops.json
const connectionString = `postgresql://${process.env.DATABASE_USER}:${password}@${process.env.DATABASE_HOST}/${process.env.DATABASE_NAME}`;
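If the password can contain characters like `@`, `/`, or `#`, interpolating it directly will corrupt the URL. A small helper (a sketch; names follow the variables used above) that URL-encodes the credentials:

```javascript
// Build a Postgres connection URL, encoding credentials so special
// characters in the username or password don't break URL parsing.
function buildDatabaseUrl({ user, password, host, name, port = 5432 }) {
  return (
    'postgresql://' +
    `${encodeURIComponent(user)}:${encodeURIComponent(password)}` +
    `@${host}:${port}/${name}`
  );
}

// Example:
// buildDatabaseUrl({ user: 'app', password: 'p@ss/word', host: 'db.internal', name: 'myapp' })
// → 'postgresql://app:p%40ss%2Fword@db.internal:5432/myapp'
```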

Step 7: Deploy and verify

  1. Push your changes to trigger a deployment
  2. Check logs in the LocalOps console to verify the application starts correctly
  3. Test your application endpoints
  4. Update your DNS to point to the new LocalOps environment
See Custom domain setup for DNS configuration.

Built-in observability

Every LocalOps environment comes with a fully integrated open-source observability stack—no extra add-ons required.

Prometheus + Grafana for metrics

Prometheus automatically collects CPU, memory, disk, and network metrics from every node running your application. View and analyze metrics through pre-built Grafana dashboards, accessible from the Monitoring tab in your environment. You can filter and group metrics by:
  • Node
  • Pod
  • Deployment
  • Service
  • Namespace

Loki + Grafana for logs

Loki automatically collects all logs from STDOUT and STDERR across your services. No log drain configuration needed—just print to console and your logs are captured. Access logs through the same Grafana dashboard, with powerful filtering by Kubernetes namespace, deployment, or custom labels.
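Since Loki ingests whatever your services print, emitting one JSON object per line makes logs much easier to filter in Grafana. A minimal sketch (the field names are illustrative, not a required schema):

```javascript
// One JSON object per line on STDOUT; Loki captures these as-is and
// Grafana can then filter on the parsed fields.
function logEvent(level, message, fields = {}) {
  const line = JSON.stringify({
    ts: new Date().toISOString(),
    level,
    message,
    ...fields,
  });
  console.log(line);
  return line;
}

// logEvent('info', 'order created', { orderId: 'abc123' });
```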

Custom dashboards

Each environment gets its own Grafana instance with pre-built dashboards for infrastructure monitoring. You can create custom dashboards to visualize application-specific metrics and logs.
Learn more about logs, metrics, and alerts.

Migrating Fly.io features

Fly.io Feature          LocalOps Equivalent
Machines (web)          Web service
Machines (worker)       Workers
Fly Postgres            Amazon RDS
Upstash Redis           Amazon ElastiCache
Tigris Object Storage   Amazon S3
Fly Metrics             Built-in metrics
Fly Logs                Built-in logging

Get help with your migration

White-glove migration: Don’t want to do this yourself? Our engineers will migrate your entire Fly.io setup to AWS—including database migration, environment variables, and custom domains. Schedule a migration call now.
Have questions? Email us at [email protected].