Overview

The ops.json file lets you declaratively define configuration for your services. In your service's ops.json, you can define:
  • Cloud resources that your service requires. When you deploy your service, LocalOps automatically provisions these resources for you.
  • Cron services
  • Sidecar containers (soon)

How it Works

  1. Add an ops.json file to your GitHub repository
  2. Declare configuration for your service in ops.json
  3. Commit and push to your repo. LocalOps will trigger a deployment while honoring ops.json configuration.

Order of operations run from ops.json

Here is the order in which resources/declarations defined in ops.json are provisioned during a deployment:
  1. Dependencies defined in dependencies are provisioned first. See here
  2. Dependencies defined in previews.dependencies are provisioned in the order defined. See here
  3. Jobs defined in init array are executed in order. These jobs must start and finish one after the other successfully. See here
  4. Main service is run.
  5. Cron jobs defined in cron are started. See here
  6. Workers in previews.workers are started. See here
Steps #2 and #6 run only in pull request preview services.

Init Jobs

Init jobs apply to and run only for web services. Say you want to run a DB migration before the service starts. Just add the job inside the init key like below:
{
    "init": [
        {
            "id": "migrate",
            "prefix": "migrate",
            "cmd": "python3 manage.py migrate",
            "env": {
                "var": "value"
            }
        }
    ]
}
Configuration for each init job:
  1. id - Unique identifier to identify this resource within the environment
  2. prefix - String that will be used as prefix in the name of the resource
  3. image - Image name with tag/version, e.g., nginx:1.29. This is optional. If no image is provided, we use the same image that was used to run your original service, built from the latest commit in your branch/PR.
  4. cmd - Start command. Runs on the image you provide in this config, or on the image LocalOps built from the latest commit in your branch/PR.
  5. env - Pass any environment variables necessary to run this image. In addition, when you don't provide an image in this config, the image LocalOps built from the latest commit in the PR is used, and all the secrets you added for this service in the LocalOps console UI are injected into the job as well.
Changing id or prefix will swap the resource with a new one in the next deployment.
Note that init is an array. So you can add any number of these jobs. They will be run in the order you have declared here.
{
    "init": [
        {
            "id": "migrate",
            "prefix": "migrate",
            "cmd": "python3 manage.py migrate",
            "env": {
                "var": "value"
            }
        },
        {
            "id": "seeddb",
            "prefix": "seeddb",
            "cmd": "python3 manage.py seed",
            "env": {
                "var": "value"
            }
        },
        {
            "id": "adhoc",
            "prefix": "adhoc",
            "image": "alpine/curl:8.14.1",
            "cmd": "curl -fsSL https://www.google.com/",
            "env": {
                "var": "value"
            }
        }
    ]
}

Environment variables

Environment variables passed here are not the only ones passed into the job. The following sets are injected into the job automatically, in this order:
  1. Any variables you have added in the LocalOps console UI.
  2. Variables exported by cloud dependencies (dependencies > exports, declared as described below).
  3. Variables exported by preview dependencies (previews.dependencies > exports).
  4. Variables in the env key of the init job config above.
Variables defined in a later set override the ones defined in an earlier set; for example, set #4 overrides set #3.
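The precedence above can be sketched as a plain object merge in which later sources win. This is an illustrative sketch, not the LocalOps implementation; all variable names are hypothetical:

```javascript
// Illustrative sketch of the injection order described above: later
// sources override earlier ones when keys collide.
const consoleVars = { LOG_LEVEL: "info", API_KEY: "from-console" }; // set #1
const dependencyExports = { ATTACHMENTS_BUCKET_NAME: "attachments-123" }; // set #2
const previewExports = { DB_HOST: "db.internal" }; // set #3
const initJobEnv = { LOG_LEVEL: "debug" }; // set #4: the job's own "env" block

// Spread order mirrors sets #1..#4; initJobEnv (set #4) wins on conflict.
const jobEnv = {
  ...consoleVars,
  ...dependencyExports,
  ...previewExports,
  ...initJobEnv,
};

console.log(jobEnv.LOG_LEVEL); // "debug" — set #4 overrides set #1
```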

Order and errors

Jobs are run in the order defined in the array. If any of the init jobs fail to complete successfully, deployment fails and the main web service won’t start.
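The run-to-completion behavior can be sketched as a sequential loop. This is a conceptual sketch only (not the actual LocalOps scheduler); the job shape is hypothetical:

```javascript
// Conceptual sketch: init jobs run one after another, and the first
// failure aborts the deployment before the main service starts.
async function runInitJobs(jobs) {
  const completed = [];
  for (const job of jobs) {
    const ok = await job.run(); // hypothetical: resolves true on success
    if (!ok) {
      throw new Error(`init job "${job.id}" failed; deployment aborted`);
    }
    completed.push(job.id);
  }
  return completed; // all jobs succeeded; the main service can start
}
```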

Automatic provisioning of cloud resources

Your service may need an S3 bucket to store files, an RDS instance to host your Postgres/MySQL database, or a Redis database for caching, background jobs, and so on. You can declare these dependencies once in ops.json. LocalOps will provision these resources and ensure they are active and enabled for your service to use.

How to provision a cloud resource

In your ops.json, add a key called dependencies and describe each dependency in the following format.
{
  "dependencies": {
    "<cloud-service-type>": {
      "<cloud-resource-type>": [
        {
          "id": "<give-unique-id>",
          "prefix": "<give-prefix-for-naming>",
          //"...any other resource specific config",
          "exports": {
            "ENV-VAR-KEY": "$resource-property",
            "ENV-VAR-KEY2": "$resource-property2"
          }
        }
      ]
    }
  }
}
You can define any number of cloud resources (say, buckets) for a given cloud resource type (S3). Here is an example:
{
  "dependencies": {
    "s3": {
      "buckets": [
        {
          "id": "attachments",
          "prefix": "attachments",
          "exports": {
            "ATTACHMENTS_BUCKET_ARN": "$arn",
            "ATTACHMENTS_BUCKET_NAME": "$name"
          },
          "skip_preview": false,
          "preview_only": false
        }
      ]
    }
  }
}

ID (id)

Specify an alphanumeric string that is unique amongst all the other resources of the same resource type (say, S3 buckets). LocalOps uses this to uniquely identify the resource, provision it, and manage it on your behalf in your cloud. If you change the ID (id), the resource will be deleted and replaced with a new one.

Prefix (prefix)

You can provide a prefix string to use in the names of all provisioned resources. In the above case, the bucket created will have a name like attachmentsbucket-23490823940. If you change the prefix string, the resource will be deleted and replaced with a new one.

Environment Variables (exports)

LocalOps automatically injects environment variables for each provisioned resource into your service containers. You define the environment variable keys and specify which properties of the cloud resource fill them. In the above example, LocalOps will create an environment variable named ATTACHMENTS_BUCKET_ARN and pass the ARN of the new S3 bucket to it, so that in your code you can use it like:
console.log("bucket arn:", process.env.ATTACHMENTS_BUCKET_ARN)

Skip preview (skip_preview)

If this is true, the resource is not provisioned for preview services / ephemeral environments.

Preview only (preview_only)

If this is true, the resource is provisioned only for preview services / ephemeral environments.

Overall Lifecycle

Resources are read from ops.json when the service is being deployed.
  1. If a resource dependency with a given id is seen for the first time, it is provisioned.
  2. If a resource dependency with an existing id is removed from ops.json, it is deleted automatically during the next service deployment.
  3. If any of a resource dependency's properties are modified, the resource is updated automatically during the next service deployment.

Accessing cloud resources

Your code can access these resources using the environment variables defined in ops.json. For example, you can add files/objects to the S3 bucket you defined in ops.json (see above):
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: process.env.AWS_REGION });

const upload = async () => {
  await s3.send(
    new PutObjectCommand({
      Bucket: process.env.ATTACHMENTS_BUCKET_NAME, // NOTE! This is the same env var you defined in ops.json
      Key: 'hello.txt',
      Body: 'Hello there!',
      ContentType: 'text/plain',
    })
  );
  console.log('✅ Uploaded!');
};

upload().catch(console.error);

Automatic role based access

In AWS-based environments you spin up, every container (your code) runs with a pre-configured IAM role that provides implicit, authenticated, key-less access to cloud resources (say, S3). You can see the name of the pre-configured IAM role in the environment's dashboard. Any IAM policy can be added to this role to give your code appropriate access to cloud resources. For the resources defined in ops.json, LocalOps automatically generates the necessary IAM permissions and adds them as policies on the pre-configured IAM role. Your code can simply access the resources defined in ops.json by name, and all calls are authenticated by default. For example, for the S3 bucket defined in ops.json, LocalOps automatically adds the following IAM permissions to the pre-configured role:
  • Read and write objects
  • List bucket contents
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:DeleteObject",
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::your-bucket-name",
    "arn:aws:s3:::your-bucket-name/*"
  ]
}

Supported cloud resources

As of now, you can define the following dependencies in ops.json. We plan to grow this list to cover other services in AWS. Please get in touch with us at help@localops.co if you need anything specific.
  1. Amazon S3 buckets - for static object storage (files, pictures, etc.,). Learn more
  2. Amazon RDS Instances - for hosting Postgres, MySQL databases. Learn more
  3. Amazon ElastiCache clusters - for caching, queues, pub/sub. Learn more
  4. Amazon SNS topics - for pub/sub capabilities. Learn more
  5. Amazon SQS queues - for queueing (standard/FIFO) capabilities. Learn more

Configured for production use, by default

All cloud resources are configured with production-grade security:
  • Encryption at rest: Enabled by default for all data storage services (S3, RDS, ElastiCache)
  • Network isolation: Resources are deployed within your environment’s VPC wherever applicable.
  • Private only: Resources are deployed for private access only, from within your environment’s VPC wherever applicable.
  • Access control: Least-privilege access is configured by default for applicable resources, using fine-grained IAM policies and IAM role.
  • High availability: Multi-AZ setup is supported in configuration and can be turned on when needed
  • Monitoring: CloudWatch monitoring is turned on by default for applicable cloud resources. These metrics and logs can be seen inside our built-in monitoring dashboard.
  • Backups turned on: Backups are turned ON for all applicable data storage services (RDS, ElastiCache) with 30-day retention.

Tuned For Ephemeral Preview environments

LocalOps can spin up ephemeral preview environments for your service, for each GitHub pull request to the configured target branch of the service. Configuration for the supported cloud resources is tuned differently for these preview environments, to ensure that
  • preview environments come up fast
  • preview environments don't incur a lot of costs (say, in the case of RDS)
To that end, the following configurations are disabled in pull request preview environments:
  1. Backups
  2. CloudWatch monitoring
  3. Multi-AZ setup
  4. Encryption

Ephemeral databases and services for pull request previews

LocalOps can spin up a preview environment / preview service / ephemeral copy of your service when a pull request is raised against the configured target branch of a service. For each of these preview services, you can spin up an isolated set of databases, caches, queues and workers that are created when the PR is opened and deleted when it is closed/merged. The following resources can be created this way:
  1. Databases - Postgres / MySQL
  2. Cache services - Redis / Memcache
  3. Queues - RabbitMQ
  4. Workers - using your code or any public docker image
Each of these can be declared under a top-level key called previews, like below:
{
    "previews": {
        "dependencies": {
            "db": [
                {
                    "id": "db",
                    "prefix": "db",
                    "engine": "postgres",
                    "version": "17.5",
                    "exports": {
                        "DB_HOST": "$host",
                        "DB_PORT": "$port",
                        "DB_USER": "$user",
                        "DB_NAME": "$db",
                        "DB_PASS": "$pass"
                    }
                }
            ],
            "cache": [
                {
                    "id": "redis",
                    "prefix": "redis",
                    "engine": "redis",
                    "version": "8.0.3",
                    "exports": {
                        "REDIS_HOST": "$host",
                        "REDIS_PORT": "$port"
                    }
                }
            ],
            "queues": [
                {
                    "id": "rabbitmq",
                    "prefix": "rabbitmq",
                    "engine": "rabbitmq",
                    "version": "4",
                    "exports": {
                        "QUEUE_HOST": "$host",
                        "QUEUE_PORT": "$port",
                        "QUEUE_VHOST": "$vhost",
                        "QUEUE_USER": "$user",
                        "QUEUE_PASS": "$pass"
                    }
                }
            ]
        },
        "workers": [
            {
                "id": "worker",
                "prefix": "worker",
                "cmd": "node server.js",
                "env": {
                    "variable": "value"
                },
                "count": 2
            }
        ]
    }
}

Isolated databases

You can spin up any number of these databases for each PR, with any publicly available version:
  1. Postgres
  2. MySQL
LocalOps automatically provisions, configures and runs these databases in your environment, just by your declaring the engine and version in ops.json. Define a key called db under previews > dependencies like below and add your database dependencies as an array. Example:
{
  "previews": {
    "dependencies": {
      "db": [
        {
          "id": "db1",
          "prefix": "db1",
          "engine": "postgres",
          "version": "17.5",
          "exports": {
            "DB1_HOST": "$host",
            "DB1_PORT": "$port",
            "DB1_USER": "$user",
            "DB1_NAME": "$db",
            "DB1_PASS": "$pass"
          }
        },
        {
          "id": "db2",
          "prefix": "db2",
          "engine": "postgres",
          "version": "17.5",
          "exports": {
            "DB2_HOST": "$host",
            "DB2_PORT": "$port",
            "DB2_USER": "$user",
            "DB2_NAME": "$db",
            "DB2_PASS": "$pass"
          }
        }
      ]
    }
  }
}
Configuration for each db:
  1. id - Unique identifier to identify this resource within the environment
  2. prefix - String that will be used as prefix in the name of the resource
  3. engine - mysql or postgres
  4. version - any public version of mysql or postgres available in docker hub
  5. exports - Information of the database resource can be exported to be used within your service. Define a set of key value pairs that will be injected as environment variables in your service. Keys will be injected as such. Values are interpreted as follows.
Export values:
  1. $host - DNS host of the resource, privately reachable within the cluster. All resources are private.
  2. $port - Port of the resource. For example, $port is set to 3306 for MySQL and 5432 for Postgres.
  3. $db - Database name to use from your code to read/write data. LocalOps creates a default database for you when provisioning the database server.
  4. $user - Username to use while authenticating into the database. LocalOps creates a default user for you when provisioning the database server and you can use this variable to get its value injected in your environment variable.
  5. $pass - Password to use while authenticating into the database. LocalOps creates this for you and passes it to you via this var.
Changing id or prefix will swap the resource with a new one in the next deployment.
All databases are ephemeral databases only. No persistent storage is configured, as they are created for previewing pull requests only.
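As an example of consuming these exports, your service can assemble a standard Postgres connection URL from the injected variables. This is a sketch assuming the export names from the example above (DB1_HOST, etc. would work the same way); any client that accepts a connection string, such as node-postgres, can use the result:

```javascript
// Build a Postgres connection URL from the variables exported in ops.json.
// The env parameter defaults to process.env, where LocalOps injects the values.
function pgUrlFromEnv(env = process.env) {
  const { DB_USER, DB_PASS, DB_HOST, DB_PORT, DB_NAME } = env;
  // Encode the password in case it contains URL-reserved characters.
  return `postgres://${DB_USER}:${encodeURIComponent(DB_PASS)}@${DB_HOST}:${DB_PORT}/${DB_NAME}`;
}
```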

Isolated cache servers

You can spin up any number of these caching servers for each PR, with any publicly available version:
  1. Redis
  2. Memcache
LocalOps automatically provisions, configures and runs these servers in your environment, just by declaring the name, engine and version in your ops.json. In the ops.json, just define a key called cache under previews > dependencies like below and add your caching dependencies as an array. Example:
{
  "previews": {
    "dependencies": {
      "cache": [
        {
          "id": "redis",
          "prefix": "redis",
          "engine": "redis",
          "version": "8.0.3",
          "exports": {
            "REDIS_HOST": "$host",
            "REDIS_PORT": "$port"
          }
        }
      ]
    }
  }
}
Configuration for each cache:
  1. id - Unique identifier to identify this resource within the environment
  2. prefix - String that will be used as prefix in the name of the resource
  3. engine - redis or memcache
  4. version - any public version of redis or memcache available in docker hub
  5. exports - Information of the caching server can be exported to be used within your service. Define a set of key value pairs that will be injected as environment variables in your service. Keys will be injected as such. Values are interpreted as follows.
Export values:
  1. $host - DNS host of the resource, privately reachable within the cluster. All resources are private.
  2. $port - Port of the resource. For example, $port is set to 6379 for Redis and 11211 for Memcache.
Changing id or prefix will swap the resource with a new one in the next deployment.
All caching servers are ephemeral services only. No persistent storage is configured, as they are created for previewing pull requests only.
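As with databases, the exported variables are enough to build a standard connection URL. A minimal sketch, assuming the export names from the example above; any Redis client that accepts a redis:// URL can consume it:

```javascript
// Build a Redis connection URL from the variables exported in ops.json.
function redisUrlFromEnv(env = process.env) {
  return `redis://${env.REDIS_HOST}:${env.REDIS_PORT}`;
}
```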

Isolated queue/messaging servers

You can spin up any number of these queueing/messaging servers for each PR, with any publicly available version:
  • RabbitMQ
LocalOps automatically provisions, configures and runs these servers in your environment, just by declaring the name, engine and version in your ops.json. In the ops.json, just define a key called queues under previews > dependencies like below and add your dependencies as an array. Example:
{
  "previews": {
    "dependencies": {
      "queues": [
        {
          "id": "rabbitmq",
          "prefix": "rabbitmq",
          "engine": "rabbitmq",
          "version": "4.1.2",
          "exports": {
            "QUEUE_HOST": "$host",
            "QUEUE_PORT": "$port",
            "QUEUE_VHOST": "$vhost",
            "QUEUE_USER": "$user",
            "QUEUE_PASS": "$pass"
          }
        }
      ]
    }
  }
}
Configuration for each queue:
  1. id - Unique identifier to identify this resource within the environment
  2. prefix - String that will be used as prefix in the name of the resource
  3. engine - rabbitmq
  4. version - any public version of rabbitmq available in docker hub
  5. exports - Information of the messaging/queueing server can be exported to be used within your service. Define a set of key value pairs that will be injected as environment variables in your service. Keys will be injected as such. Values are interpreted as follows.
Export values:
  1. $host - DNS host of the resource, privately reachable within the cluster. All resources are private.
  2. $port - Port of the resource. For example, $port is set to 5672 for RabbitMQ.
  3. $vhost - RabbitMQ virtual host created for you to use.
  4. $user - Username to use while authenticating into this server. LocalOps creates a default user for you when provisioning the RabbitMQ server and you can use this variable to get its value injected in your environment variable.
  5. $pass - Password to use while authenticating into this server. LocalOps creates this for you and passes it to you via this variable.
Get in touch with us at help@localops.co if you want more queueing servers supported.
Changing id or prefix will swap the resource with a new one in the next deployment.
All queueing servers are ephemeral services only. No persistent storage is configured, as they are created for previewing pull requests only.
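The exported variables map directly onto a standard AMQP connection URL. This is a sketch assuming the export names from the example above; clients such as amqplib accept the resulting string. Note that the vhost must be percent-encoded ("/" becomes "%2F"), a common stumbling block with RabbitMQ:

```javascript
// Build an AMQP connection URL from the variables exported in ops.json.
function amqpUrlFromEnv(env = process.env) {
  const { QUEUE_USER, QUEUE_PASS, QUEUE_HOST, QUEUE_PORT, QUEUE_VHOST } = env;
  // Both the password and the vhost may contain URL-reserved characters.
  return `amqp://${QUEUE_USER}:${encodeURIComponent(QUEUE_PASS)}@${QUEUE_HOST}:${QUEUE_PORT}/${encodeURIComponent(QUEUE_VHOST)}`;
}
```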

Isolated workers

You can spin up any number of workers for each PR using one of these methods:
  • Run any publicly available Docker image from Docker Hub
  • Run your existing code from the PR but with a different start command
In the ops.json, just define a key called workers under previews like below and add your workers as an array. Example:
{
  "previews": {
    "workers": [
      {
        "id": "worker",
        "prefix": "worker",
        "image": "nginx:1.29",
        "cmd": "node server.js",
        "count": 2,
        "env": {
          "variable": "value"
        }
      }
    ]
  }
}
Configuration for each worker:
  1. id - Unique identifier to identify this resource within the environment
  2. prefix - String that will be used as prefix in the name of the resource
  3. image - Image name with tag/version, e.g., nginx:1.29. This is optional. If no image is provided, we use the same image that was used to run your original service, built from the latest commit in your PR.
  4. cmd - Start command. Runs on the image you provide in this config, or on the image LocalOps built from the latest commit in your PR.
  5. env - Pass any environment variables necessary to run this image. In addition, when you don't provide an image in this config, the image LocalOps built from the latest commit in the PR is used, and all the secrets you added for this service in the LocalOps console UI are injected into the worker as well.
  6. count - Number of replicas / containers of the worker to run
Changing id or prefix will swap the resource with a new one in the next deployment.

Configuring Cron service

It is a common pattern in many codebases and communities to define cron logic inside HTTP request handlers and call those HTTP endpoints on a specific schedule. You can do the same here in ops.json.
{
  "cron": {
    "cron_jobs": [
      {
        "schedule": "*/2 * * * *",
        "path": "/api/v1/do-every-2-mins"
      },
      {
        "schedule": "0 0 * * *",
        "path": "/api/v1/do-every-morning"
      }
    ]
  }
}

Cron arguments

Define a cron key in ops.json and add any number of scheduled hits by giving:
  • schedule: Specify a schedule using standard crontab syntax. Refer here on how to define this.
  • path: Specify a path to hit. Your service's internal host is known at run time, so you don't have to define the full URL like https://<host>/path.
LocalOps spins up an implicit cron service within your environment to make API calls to your service via its private internal host name, as per the above schedule. Crons only work if your service is either an "external/web service" or an "internal" service, so that there are endpoints available to hit on a schedule.
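On the service side, each cron path is just an ordinary HTTP route. A minimal framework-free sketch of the handler side, with hypothetical paths and handler bodies (in practice, use your web framework's routing):

```javascript
// Illustrative route table: each cron "path" maps to the logic it triggers.
const cronRoutes = {
  "/api/v1/do-every-2-mins": () => "metrics synced",
  "/api/v1/do-every-morning": () => "daily report sent",
};

// Dispatch an incoming request path to its handler. The cron service
// calls these paths on your service's private internal hostname.
function handleCronRequest(path) {
  const handler = cronRoutes[path];
  if (!handler) return { status: 404, body: "not found" };
  return { status: 200, body: handler() };
}
```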

Complete Example

Here’s a comprehensive ops.json example:
{
  "init": [
    {
      "id": "migrate",
      "prefix": "migrate",
      "image": "nginx:1.29", //optional
      "cmd": "python3 manage.py migrate",
      "env": {
        "var": "migrate"
      }
    },
    {
      "id": "seeddb",
      "prefix": "seeddb",
      "image": "nginx:stable-bookworm",
      "cmd": "python3 manage.py seed",
      "env": {
        "var": "seed"
      }
    }
  ],
  "cron": {
    "cron_jobs": [
      {
        "schedule": "*/2 * * * *",
        "path": "/api/v1/sync-user-metrics"
      }
    ]
  },
  "dependencies": {
    "s3": {
      "buckets": [
        {
          "id": "user-metrics-bucket",
          "prefix": "metrics-data",
          "exports": {
            "MY_BUCKET_NAME1": "$name",
            "MY_BUCKET_ARN1": "$arn"
          }
        }
      ]
    },
    "sns": {
      "topics": [
        {
          "id": "user-alerts-topic",
          "prefix": "user-alerts",
          "exports": {
            "MY_SNS_TOPIC_NAME1": "$name",
            "MY_SNS_TOPIC_ARN1": "$arn"
          }
        }
      ]
    },
    "sqs": {
      "queues": [
        {
          "id": "metrics-queue",
          "prefix": "metrics-events",
          "exports": {
            "MY_SQS_QUEUE_NAME1": "$name",
            "MY_SQS_QUEUE_ARN1": "$arn"
          }
        }
      ]
    },
    "rds": {
      "instances": [
        {
          "id": "user-data-db",
          "prefix": "userdata",
          "engine": "postgres",
          "version": "17.5",
          "storage_gb": 10,
          "instance_type": "db.t4g.small",
          "publicly_accessible": false,
          "exports": {
            "MY_RDS_INSTANCE_NAME": "$name",
            "MY_RDS_INSTANCE_ARN": "$arn",
            "MY_RDS_INSTANCE_ENDPOINT": "$endpoint",
            "MY_RDS_INSTANCE_ADDRESS": "$address",
            "MY_RDS_INSTANCE_USERNAME": "$username",
            "MY_RDS_INSTANCE_PASSWORD_ARN": "$passwordArn",
            "MY_RDS_INSTANCE_DB_NAME": "$dbName"
          }
        }
      ]
    },
    "elasticache": {
      "clusters": [
        {
          "id": "session-cache",
          "prefix": "user-session",
          "engine": "redis",
          "version": "7.0",
          "instance_type": "cache.t4g.small",
          "num_nodes": 1,
          "exports": {
            "MY_ELASTICACHE_CLUSTER_NAME": "$name",
            "MY_ELASTICACHE_CLUSTER_ARN": "$arn",
            "MY_ELASTICACHE_CLUSTER_ENDPOINT": "$endpoint",
            "MY_ELASTICACHE_CLUSTER_ADDRESS": "$address",
            "MY_ELASTICACHE_CLUSTER_PORT": "$port"
          }
        }
      ]
    }
  },
  "previews": {
    "dependencies": {
      "db": [
        {
          "id": "db",
          "prefix": "db",
          "engine": "postgres",
          "version": "17.5",
          "exports": {
            "DB_HOST": "$host",
            "DB_PORT": "$port",
            "DB_USER": "$user",
            "DB_NAME": "$db",
            "DB_PASS": "$pass"
          }
        }
      ],
      "cache": [
        {
          "id": "redis",
          "prefix": "redis",
          "engine": "redis",
          "version": "8.0.3",
          "exports": {
            "REDIS_HOST": "$host",
            "REDIS_PORT": "$port"
          }
        }
      ],
      "queues": [
        {
          "id": "rabbitmq",
          "prefix": "rabbitmq",
          "engine": "rabbitmq",
          "version": "4",
          "exports": {
            "QUEUE_HOST": "$host",
            "QUEUE_PORT": "$port",
            "QUEUE_VHOST": "$vhost",
            "QUEUE_USER": "$user",
            "QUEUE_PASS": "$pass"
          }
        }
      ]
    },
    "workers": [
      {
        "id": "worker",
        "prefix": "worker",
        "cmd": "node server.js",
        "env": {
          "variable": "value"
        },
        "count": 2
      }
    ]
  }
}