Overview
The `ops.json` file allows you to declaratively define configuration for your services.
In your service configuration file `ops.json`, you can define:
- Managed cloud dependencies like S3
- Cron jobs
- Sidecar containers
- Init jobs
- and more.
How it Works
- Add an `ops.json` file to your GitHub repository.
- Declare configuration for your service in `ops.json`.
- Commit and push to your repo. LocalOps will trigger a deployment while honoring the `ops.json` configuration.
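For instance, a minimal `ops.json` might look like this (a hedged sketch; every key shown here is explained in the sections below, and the ids and paths are illustrative):

```json
{
  "dependencies": {
    "s3": [{ "id": "attachments", "prefix": "attachmentsbucket" }]
  },
  "cron": [{ "schedule": "0 2 * * *", "path": "/tasks/cleanup" }]
}
```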
Order of operations run from ops.json
Here is the order in which resources/declarations defined in `ops.json` are provisioned during a deployment:
- Dependencies defined in `dependencies` are provisioned first. See here.
- Dependencies defined in `previews.dependencies` are provisioned in the order defined. See here.
- Jobs defined in the `init` array are executed in order. These jobs must start and finish one after the other successfully. See here.
- Main service is run.
- Cron jobs defined in `cron` are started. See here.
- Workers in `previews.workers` are started. See here.
Init Jobs
Init jobs are applicable to, and run only for, a web service. Say you want to run a db migration before the service is started. Just add the job inside the `init` key, like below.
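A hedged sketch (the migration command and env var are illustrative; the exact nesting is an assumption consistent with the fields described next):

```json
{
  "init": [
    {
      "id": "migrate",
      "prefix": "dbmigrate",
      "cmd": "npm run migrate",
      "env": {
        "NODE_ENV": "production"
      }
    }
  ]
}
```

Each `init` job supports the following fields: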
- `id` - Unique identifier for this resource within the environment.
- `prefix` - String used as a prefix in the name of the resource.
- `image` - Image name with tag/version, e.g., `nginx:1.29`. This is optional. If `image` is not provided, we use the same image that was used to run your original service, built with the latest commit in your branch/PR.
- `cmd` - Start command. Used on the `image` you provide in this config, or on the image LocalOps built using the latest commit in your branch/PR.
- `env` - Pass any environment variables necessary to run this image. In addition, when you don’t provide an `image` in this config, the image LocalOps built using the latest commit in the PR is used, and all the secrets you added for this service in the LocalOps console UI are also injected into the job.
Changing `id` or `prefix` will swap the resource with a new one in the next deployment. `init` is an array, so you can add any number of these jobs. They will run in the order you declare them.
Environment variables
Environment variables passed here are not the only ones passed into the job. The following are injected into the job automatically, in the order mentioned here:
- Any variables you have added in the LocalOps console UI.
- Any cloud `dependencies` > `exports` variables you declared (see below), as well as other preview service `previews.dependencies` > `exports` variables.
- Variables in `env` in the init job config above.
Order and errors
Jobs are run in the order defined in the array. If any of the init jobs fail to complete successfully, deployment fails and the main `web` service won’t start.
Automatic provisioning of cloud resources
Your service may need an S3 bucket to store files, an RDS instance to host your Postgres/MySQL database, or a Redis database for caching, background jobs, etc. You can declare these dependencies once in `ops.json`. LocalOps will provision these resources and ensure they are active and enabled for your service to use.
How to provision a cloud resource
In your `ops.json`, write a key called `dependencies` and fill in your dependency in the following format.
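A hedged sketch for an S3 bucket dependency (the `s3` resource-type key and the `$arn`/`$name` placeholders are assumptions consistent with the descriptions below):

```json
{
  "dependencies": {
    "s3": [
      {
        "id": "attachments",
        "prefix": "attachmentsbucket",
        "exports": {
          "ATTACHMENTS_BUCKET_ARN": "$arn",
          "ATTACHMENTS_BUCKET_NAME": "$name"
        }
      }
    ]
  }
}
```

Each field is described below.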
ID (`id`)
Specify an alphanumeric string that is unique among all the other resources of the same resource type (say S3). LocalOps uses this to uniquely identify the resource, and to provision and manage it on your behalf in your cloud. If you change the ID (`id`), the resource will be deleted and replaced by a new one.
Prefix (`prefix`)
You can provide a prefix string to use in the name of all resources. In the above case, the bucket created will have a name like `attachmentsbucket-23490823940`. If you change the prefix string, the resource will be deleted and replaced by a new one.
Environment Variables (`exports`)
LocalOps automatically injects environment variables for each provisioned resource into your service containers. You can define the environment variable keys and specify the properties of the cloud resource to fill them with. In the above example, LocalOps will form an environment variable named `ATTACHMENTS_BUCKET_ARN` and pass the ARN of the new S3 bucket to it, so that in your code you can use it like:
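For instance, in Python (a hedged sketch; any language’s environment variable API works the same way):

```python
import os

# Injected by LocalOps based on the "exports" mapping in ops.json
bucket_arn = os.environ["ATTACHMENTS_BUCKET_ARN"]
print(f"Attachments bucket ARN: {bucket_arn}")
```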
Skip preview (`skip_preview`)
The specific resource isn’t considered for preview services / ephemeral environments if this is `true`.
Preview only (`preview_only`)
The specific resource is considered only for preview services / ephemeral environments if this is `true`.
Overall Lifecycle
Resources are read from `ops.json` when the service is being deployed.
- If a resource dependency with a given `id` is seen for the first time, it is provisioned.
- If a resource dependency with an existing `id` is missing, it is deleted automatically during the next service deployment.
- If any of the resource dependency properties are modified, they are updated automatically during the next service deployment.
Accessing cloud resources
Your code can access these resources using the environment variables defined in `ops.json`. For example, you can add files/objects to the S3 bucket you defined in `ops.json` (see above):
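A hedged sketch in Python with boto3, assuming a bucket-name export such as `ATTACHMENTS_BUCKET_NAME` (an illustrative key; see the earlier `exports` example). Credentials come from the pre-configured IAM role described below, so no access keys are needed:

```python
import os

import boto3

# Bucket name injected by LocalOps via the "exports" mapping in ops.json
bucket_name = os.environ["ATTACHMENTS_BUCKET_NAME"]

# The container's IAM role supplies credentials; no access keys required
s3 = boto3.client("s3")
with open("report.pdf", "rb") as f:
    s3.put_object(Bucket=bucket_name, Key="uploads/report.pdf", Body=f)
```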
Automatic role based access
In AWS based environments you spin up, every container (your code) runs with a pre-configured IAM role that provides implicit, authenticated, key-less access to any cloud resource (say S3). You can see the name of the pre-configured IAM role in the environment’s dashboard. Any IAM policy can be added to the pre-configured role to give your code appropriate access to cloud resources. For the resources defined in `ops.json`, LocalOps automatically generates the necessary IAM permissions and adds them as policies on the pre-configured IAM role. Your code can just access the resources defined in `ops.json` by name, and all calls made will be authenticated calls by default.
For example, for the S3 bucket defined in `ops.json`, LocalOps automatically adds the following necessary IAM permissions to the pre-configured IAM role:
- Read and write objects
- List bucket contents
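A hedged sketch of what such a policy can look like, in standard IAM JSON (the exact statements LocalOps generates may differ; the bucket name matches the earlier example):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::attachmentsbucket-23490823940/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::attachmentsbucket-23490823940"
    }
  ]
}
```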
Supported cloud resources
As of now, you can define the following dependencies in `ops.json`. We plan to grow this list to cover other services in AWS. Please get in touch with us at help@localops.co if you need anything specific.
- Amazon S3 buckets - for static object storage (files, pictures, etc.). Learn more
- Amazon RDS instances - for hosting Postgres and MySQL databases. Learn more
- Amazon ElastiCache clusters - for caching, queues, pub/sub. Learn more
- Amazon SNS topics - for pub/sub capabilities. Learn more
- Amazon SQS queues - for queueing (standard/FIFO) capabilities. Learn more
Configured for production use, by default
All cloud resources are configured with production-grade security:
- Encryption at rest: Enabled by default for all data storage services (S3, RDS, ElastiCache).
- Network isolation: Resources are deployed within your environment’s VPC wherever applicable.
- Private only: Resources are deployed for private access only, from within your environment’s VPC wherever applicable.
- Access control: Least-privilege access is configured by default for applicable resources, using fine-grained IAM policies and the IAM role.
- High availability: Multi-AZ setup is supported in configuration and can be turned on when needed.
- Monitoring: CloudWatch monitoring is turned on by default for applicable cloud resources. These metrics and logs can be seen inside our built-in monitoring dashboard.
- Backups turned on: Backups are turned on for all applicable data storage services (RDS, ElastiCache) with 30-day retention.
Tuned For Ephemeral Preview environments
LocalOps can spin up ephemeral preview environments for your service, for each GitHub pull request to the configured target branch of the service. Configuration for the supported cloud resources is tuned differently for these preview environments, to ensure:
- preview environments come up fast
- preview environments don’t incur a lot of costs (say in the case of RDS)
To that end, the following are tuned down or turned off for preview environments:
- Backups
- CloudWatch monitoring
- Multi-AZ setup
- Encryption
Ephemeral databases and services for pull request previews
LocalOps can spin up a preview environment / preview service / ephemeral copy of your service when a pull request is raised against the configured target branch of a service. For each of these preview services, you can spin up an isolated set of databases, caches, queues and workers that are created when the PR is opened and deleted when it is closed/merged. The following resources can be created this way:
- Databases - Postgres / MySQL
- Cache services - Redis / Memcached
- Queues - RabbitMQ
- Workers - using your code or any public Docker image
All of these are declared under the `previews` key, like below:
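A hedged skeleton of the `previews` key (each sub-key is detailed in the sections that follow):

```json
{
  "previews": {
    "dependencies": {
      "db": [],
      "cache": [],
      "queues": []
    },
    "workers": []
  }
}
```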
Isolated databases
You can spin up any number of these databases for each PR, with any publicly available version, right from your `ops.json`:
- Postgres
- MySQL
In `ops.json`, just define a key called `db` under `previews` > `dependencies`, like below, and add your database dependencies as an array.
Example:
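A hedged sketch (the `appdb` id/prefix and env var names are illustrative; the `$host`, `$port`, `$db`, `$user` and `$pass` placeholders are described below):

```json
{
  "previews": {
    "dependencies": {
      "db": [
        {
          "id": "appdb",
          "prefix": "appdb",
          "engine": "postgres",
          "version": "16",
          "exports": {
            "DB_HOST": "$host",
            "DB_PORT": "$port",
            "DB_NAME": "$db",
            "DB_USER": "$user",
            "DB_PASS": "$pass"
          }
        }
      ]
    }
  }
}
```

Each `db` entry supports the following fields: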
- `id` - Unique identifier for this resource within the environment.
- `prefix` - String used as a prefix in the name of the resource.
- `engine` - `mysql` or `postgres`.
- `version` - Any public version of MySQL or Postgres available on Docker Hub.
- `exports` - Information about the database resource can be exported for use within your service. Define a set of key-value pairs that will be injected as environment variables in your service. Keys are injected as-is. Values are interpreted as follows:
- `$host` - DNS host of the resource, privately reachable within the cluster. All resources are private.
- `$port` - Port of the resource. For example, you will see `$port` set as `3306` for MySQL and `5432` for Postgres.
- `$db` - Database name to use from your code to read/write data. LocalOps creates a default database for you when provisioning the database server.
- `$user` - Username to use while authenticating to the database. LocalOps creates a default user for you when provisioning the database server, and you can use this variable to get its value injected into your environment variable.
- `$pass` - Password to use while authenticating to the database. LocalOps creates this for you and passes it to you via this variable.
Changing `id` or `prefix` will swap the resource with a new one in the next deployment. All databases are ephemeral only. No persistent storage is configured, as they are created for previewing pull requests only.
Isolated cache servers
You can spin up any number of these caching servers for each PR, with any publicly available version, right from your `ops.json`:
- Redis
- Memcached
In `ops.json`, just define a key called `cache` under `previews` > `dependencies`, like below, and add your caching dependencies as an array.
Example:
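A hedged sketch (the `appcache` id/prefix and env var names are illustrative; `$host` and `$port` are described below):

```json
{
  "previews": {
    "dependencies": {
      "cache": [
        {
          "id": "appcache",
          "prefix": "appcache",
          "engine": "redis",
          "version": "7",
          "exports": {
            "REDIS_HOST": "$host",
            "REDIS_PORT": "$port"
          }
        }
      ]
    }
  }
}
```

Each `cache` entry supports the following fields: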
- `id` - Unique identifier for this resource within the environment.
- `prefix` - String used as a prefix in the name of the resource.
- `engine` - `redis` or `memcache`.
- `version` - Any public version of Redis or Memcached available on Docker Hub.
- `exports` - Information about the caching server can be exported for use within your service. Define a set of key-value pairs that will be injected as environment variables in your service. Keys are injected as-is. Values are interpreted as follows:
- `$host` - DNS host of the resource, privately reachable within the cluster. All resources are private.
- `$port` - Port of the resource. For example, you will see `$port` set as `6379` for Redis and `11211` for Memcached.
Changing `id` or `prefix` will swap the resource with a new one in the next deployment. All caching servers are ephemeral only. No persistent storage is configured, as they are created for previewing pull requests only.
Isolated queue/messaging servers
You can spin up any number of these queueing/messaging servers for each PR, with any publicly available version, right from your `ops.json`:
- RabbitMQ
In `ops.json`, just define a key called `queues` under `previews` > `dependencies`, like below, and add your queueing dependencies as an array.
Example:
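A hedged sketch (the `appqueue` id/prefix and env var names are illustrative; the placeholders are described below):

```json
{
  "previews": {
    "dependencies": {
      "queues": [
        {
          "id": "appqueue",
          "prefix": "appqueue",
          "engine": "rabbitmq",
          "version": "3.13",
          "exports": {
            "RABBITMQ_HOST": "$host",
            "RABBITMQ_PORT": "$port",
            "RABBITMQ_VHOST": "$vhost",
            "RABBITMQ_USER": "$user",
            "RABBITMQ_PASS": "$pass"
          }
        }
      ]
    }
  }
}
```

Each `queues` entry supports the following fields: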
- `id` - Unique identifier for this resource within the environment.
- `prefix` - String used as a prefix in the name of the resource.
- `engine` - `rabbitmq`.
- `version` - Any public version of RabbitMQ available on Docker Hub.
- `exports` - Information about the messaging/queueing server can be exported for use within your service. Define a set of key-value pairs that will be injected as environment variables in your service. Keys are injected as-is. Values are interpreted as follows:
- `$host` - DNS host of the resource, privately reachable within the cluster. All resources are private.
- `$port` - Port of the resource. For example, you will see `$port` set as `5672` for RabbitMQ.
- `$vhost` - RabbitMQ virtual host created for you to use.
- `$user` - Username to use while authenticating to this server. LocalOps creates a default user for you when provisioning the RabbitMQ server, and you can use this variable to get its value injected into your environment variable.
- `$pass` - Password to use while authenticating to this server. LocalOps creates this for you and passes it to you via this variable.
Changing `id` or `prefix` will swap the resource with a new one in the next deployment. All queueing servers are ephemeral only. No persistent storage is configured, as they are created for previewing pull requests only.
Isolated workers
You can spin up any number of workers for each PR using one of these methods:
- Run any publicly available Docker image from Docker Hub
- Run your existing code from the PR, but with a different start command
In `ops.json`, just define a key called `workers` under `previews`, like below, and add your workers as an array.
Example:
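A hedged sketch (the id, command and env var are illustrative):

```json
{
  "previews": {
    "workers": [
      {
        "id": "jobs",
        "prefix": "jobsworker",
        "cmd": "node worker.js",
        "env": {
          "QUEUE_NAME": "default"
        },
        "count": 1
      }
    ]
  }
}
```

Each worker entry supports the following fields: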
- `id` - Unique identifier for this resource within the environment.
- `prefix` - String used as a prefix in the name of the resource.
- `image` - Image name with tag/version, e.g., `nginx:1.29`. This is optional. If `image` is not provided, we use the same image that was used to run your original service, built with the latest commit in your PR.
- `cmd` - Start command. Used on the `image` you provide in this config, or on the image LocalOps built using the latest commit in your PR.
- `env` - Pass any environment variables necessary to run this image. In addition, when you don’t provide an `image` in this config, the image LocalOps built using the latest commit in the PR is used, and all the secrets you added for this service in the LocalOps console UI are also injected into the worker.
- `count` - Number of replicas/containers of the worker to run.
Changing `id` or `prefix` will swap the resource with a new one in the next deployment.
Configuring Cron service
It is a common pattern in many codebases and communities to define cron logic inside HTTP request handlers and call those HTTP endpoints on a specific schedule. You can do the same here in `ops.json`.
Cron arguments
Define a `cron` key in `ops.json` and add any number of scheduled hits (see the sketch after this list) by giving:
- `schedule`: Specify a schedule using the same powerful crontab syntax. Refer here on how to define this.
- `path`: Specify a path to hit. Your service’s internal host is known at run time, so you don’t have to define the full URL like `https://<host>/path`.
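A hedged sketch of the `cron` key (the schedule and path are illustrative; the array shape is an assumption consistent with the fields above):

```json
{
  "cron": [
    {
      "schedule": "0 2 * * *",
      "path": "/tasks/cleanup"
    }
  ]
}
```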
Complete Example
Here’s a comprehensive `ops.json` example:
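(A hedged reconstruction combining the keys documented above; all ids, engines, versions, commands and env var names are illustrative.)

```json
{
  "dependencies": {
    "s3": [
      {
        "id": "attachments",
        "prefix": "attachmentsbucket",
        "exports": {
          "ATTACHMENTS_BUCKET_ARN": "$arn",
          "ATTACHMENTS_BUCKET_NAME": "$name"
        }
      }
    ]
  },
  "init": [
    {
      "id": "migrate",
      "prefix": "dbmigrate",
      "cmd": "npm run migrate"
    }
  ],
  "cron": [
    {
      "schedule": "0 2 * * *",
      "path": "/tasks/cleanup"
    }
  ],
  "previews": {
    "dependencies": {
      "db": [
        {
          "id": "appdb",
          "prefix": "appdb",
          "engine": "postgres",
          "version": "16",
          "exports": {
            "DB_HOST": "$host",
            "DB_PORT": "$port",
            "DB_NAME": "$db",
            "DB_USER": "$user",
            "DB_PASS": "$pass"
          }
        }
      ],
      "cache": [
        {
          "id": "appcache",
          "prefix": "appcache",
          "engine": "redis",
          "version": "7",
          "exports": {
            "REDIS_HOST": "$host",
            "REDIS_PORT": "$port"
          }
        }
      ]
    },
    "workers": [
      {
        "id": "jobs",
        "prefix": "jobsworker",
        "cmd": "node worker.js",
        "count": 1
      }
    ]
  }
}
```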