Your service(s) may need to store/access data in a Postgres/MySQL database. This guide shows you the steps to create and
access Amazon RDS databases, which offer fully managed Postgres/MySQL.
Declarative & Automatic using ops.json:
If your service needs one or more RDS instances, add an ops.json file in the root directory of the GitHub repo
you’ve connected for the service.
Learn more about configuring dependencies using
ops.json here.
Then add RDS instances as a dependency, like below.
```json
{
  "dependencies": {
    "rds": {
      "instances": [
        {
          "id": "test123",
          "prefix": "testdep123",
          "engine": "postgres",
          "version": "17.5",
          "storage_gb": 10,
          "instance_type": "db.t4g.small",
          "publicly_accessible": false,
          "exports": {
            "MY_RDS_INSTANCE_NAME": "$name",
            "MY_RDS_INSTANCE_ARN": "$arn",
            "MY_RDS_INSTANCE_ENDPOINT": "$endpoint",
            "MY_RDS_INSTANCE_ADDRESS": "$address",
            "MY_RDS_INSTANCE_USERNAME": "$username",
            "MY_RDS_INSTANCE_PASSWORD_ARN": "$passwordArn",
            "MY_RDS_INSTANCE_DB_NAME": "$dbName"
          }
        }
      ]
    }
  }
}
```
You can add as many RDS instances as you want.
Arguments you can add in each instance object above:
- `id` - Alphanumeric string. Must be unique amongst the instances you’ve declared above. Changing this string will replace the original instance with a new one.
- `prefix` - Alphanumeric string. Will be used as a prefix in the name of your instance. Changing this string will replace the original instance with a new one.
- `engine` - Provide either `postgres` or `mysql`. Default: `postgres`
- `version` - Pick a version for your Postgres or MySQL database. Ensure RDS supports it by referring to the AWS docs - Postgres / MySQL. Default: `""` - the latest version, as decided by AWS, will be provisioned.
- `storage_gb` - Specify the amount of storage you need in GB. You can set one value now and change it later. Default: `10`
- `instance_type` - Specify any instance class in the following families - `db.t4g.*`, `db.t3.*`, `db.m5.*`. Refer to AWS docs for available sizes. Default: `db.t4g.small`
- `publicly_accessible` - Set it to `true` if you need this database to be publicly accessible on the internet. You will want this to be `false` most of the time. Default: `false`
- `multi_az` - Specifies whether the RDS instance is multi-AZ.
- `maintenance_window` - The window to perform maintenance in. Syntax: `ddd:hh24:mi-ddd:hh24:mi`. E.g. `Mon:00:00-Mon:03:00`. See the RDS Maintenance Window docs for more information.
- `exports` - Set of key-value pairs. Keys are the environment variables we will pass to your code/containers. Values are properties of the provisioned RDS instance. See below for available properties.
Changing the id or prefix string will delete & replace the original RDS instance with a new one.
Available properties of the RDS instance to use in exports:
- `$name` - Name of the RDS instance.
- `$arn` - Amazon Resource Name of the RDS instance. E.g., `arn:aws:rds:...`
- `$endpoint` - The connection endpoint in `address:port` format.
- `$address` - The DNS address of the DB instance.
- `$dbName` - The database name created under the RDS instance. This is auto-created for you, so you won’t specify this name explicitly anywhere in your config.
- `$username` - The username of the database user. You will typically use this in your code’s DB connection string to access the database `$dbName`.
- `$passwordArn` - The password of the user `$username` is automatically generated, encrypted and maintained by RDS in AWS Secrets Manager. Your code has to read the password from AWS Secrets Manager using this `$passwordArn`.
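For illustration, here is a minimal sketch (in Python, assuming the export names used in the ops.json example above) of how these values reach your code as plain environment variables. Reading the actual password via `$passwordArn` is covered in the Reading Database password section below.

```python
import os

# Each key in "exports" becomes an environment variable with the name you chose
# (the names below are the ones from the example ops.json above).
endpoint = os.environ["MY_RDS_INSTANCE_ENDPOINT"]          # address:port
address = os.environ["MY_RDS_INSTANCE_ADDRESS"]            # DNS address only
db_name = os.environ["MY_RDS_INSTANCE_DB_NAME"]            # auto-created database
username = os.environ["MY_RDS_INSTANCE_USERNAME"]          # database user
password_arn = os.environ["MY_RDS_INSTANCE_PASSWORD_ARN"]  # Secrets Manager ARN

# A typical Postgres connection URL, minus the password (see "Reading Database
# password" below for fetching it from AWS Secrets Manager using password_arn).
print(f"postgresql://{username}@{endpoint}/{db_name}")
```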
Lifecycle:
If you have provided ops.json at the root of the git repository, it will be processed when a corresponding service in
any of your active environments points at the same repository as its source and a deployment is triggered.
- When the service is spun up for the first time, or when a new deployment is triggered, ops.json is parsed for processing. Resources declared in the dependencies object will be provisioned before your code starts to run.
- Resources with the same `id` are provisioned only once for the life of the service, and updated when there is a change in one of the properties above.
- Keys in the `exports` object will be passed as environment variables to your service.
- When the service is deleted, the provisioned RDS instances are deleted from the cloud account immediately & automatically.
Private-only access:
RDS databases can be accessed from your code as usual, using your SQL-compatible DB libraries or ORMs.
All RDS instances are created only in the private subnets of the same VPC where your environment is running, and they
are attached to the following security group:
Ingress:
- From source: 10.0.0.0/16 (your environment’s VPC CIDR IP range)
- At port: 5432 (for Postgres) or 3306 (for MySQL)
- Protocol: TCP
So only workloads/containers/servers from within your environment’s VPC can access the RDS instance.
Reading Database password:
We have configured RDS to create & encrypt a unique random password and store it in your AWS account’s
Secrets Manager. You can capture the ARN of this secret in an environment variable as shown above and fetch the password
value using the AWS SDK.
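Here is a minimal sketch of that flow in Python, using boto3 and psycopg2 and assuming the export names from the ops.json example above. The exact shape of the secret value is an assumption: RDS-managed secrets typically store a JSON document with a password field, so the sketch falls back to the raw string if it isn't JSON.

```python
import json
import os

import boto3
import psycopg2

# Environment variables injected from the "exports" block (assumed names
# from the example ops.json above).
password_arn = os.environ["MY_RDS_INSTANCE_PASSWORD_ARN"]

# Read the secret value from AWS Secrets Manager using its ARN.
secrets = boto3.client("secretsmanager")
secret_string = secrets.get_secret_value(SecretId=password_arn)["SecretString"]

# RDS-managed secrets usually hold a JSON document with a "password" field;
# fall back to the raw string otherwise (assumption - verify for your setup).
try:
    password = json.loads(secret_string)["password"]
except (ValueError, KeyError, TypeError):
    password = secret_string

conn = psycopg2.connect(
    host=os.environ["MY_RDS_INSTANCE_ADDRESS"],
    dbname=os.environ["MY_RDS_INSTANCE_DB_NAME"],
    user=os.environ["MY_RDS_INSTANCE_USERNAME"],
    password=password,
)
```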
All RDS instances are pre-configured for production use.
- Daily backups are enabled, with 30-day retention for each backup.
- Encryption is enabled to safeguard data at rest.
- A backup is created automatically when the instance is about to be deleted.
For speed and cost-saving reasons, all of the above are turned off in the ephemeral services we create for
pull request previews.
Manually provision a new RDS Instance:
To proceed with this guide, you will need access to:
- AWS console with permissions to create RDS database
- LocalOps console
The goal is to create an RDS database in AWS and give your environment/service(s) access to it. Follow the steps below
to proceed:
1. Fetch the VPC and private subnet IDs
To give your environment and service(s) access to an RDS database, you need to create the RDS database within the same
VPC and region used by the environment.
Navigate to the LocalOps console and open the environment you created. In the environment’s overview section:
- Pick/copy the VPC ID and Private Subnet IDs
- Note down the region
You will use these in the next step while creating the RDS database.
2. Create RDS database
Log in to the same AWS account and region where you created the environment using LocalOps, and create a new RDS database
with the appropriate engine - Postgres/MySQL.
You will have to create a new subnet group first before creating the RDS instance. While creating the subnet group, use
the VPC ID and private subnet IDs you copied above.
For the same app, if you already have an RDS database in a different AWS account, VPC or region, you can use it and
skip creating a new RDS database. We can connect your environment/services and the database via VPC peering.
Learn more about using an existing database below.
For the rest of the instructions, you can follow this
official AWS guide to create your
database.
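If you prefer to script this step rather than clicking through the console, here is a rough sketch using boto3. All names, IDs and credentials below are placeholders; substitute the region, VPC and private subnet IDs you noted from the LocalOps console, and pick the engine and instance type you need.

```python
import boto3

# Placeholders - use the region and private subnet IDs you noted from the
# LocalOps environment overview.
rds = boto3.client("rds", region_name="us-east-1")

# 1. Create a DB subnet group spanning the environment's private subnets.
rds.create_db_subnet_group(
    DBSubnetGroupName="my-app-db-subnets",
    DBSubnetGroupDescription="Private subnets of my LocalOps environment",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
)

# 2. Create the RDS instance inside that subnet group, kept private.
rds.create_db_instance(
    DBInstanceIdentifier="my-app-db",
    Engine="postgres",                  # or "mysql"
    DBInstanceClass="db.t4g.small",
    AllocatedStorage=10,                # in GB
    MasterUsername="app_user",
    MasterUserPassword="change-me",     # placeholder - use a strong secret
    DBSubnetGroupName="my-app-db-subnets",
    PubliclyAccessible=False,
)
```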
Record the database endpoint, username and password.
3. Add DB endpoint and credentials as secrets
The last step is to give your service(s) access to the RDS database. In the LocalOps console, navigate to the Service settings
within the corresponding environment.
In the secrets section, add new key-value pairs like:
- Key: `DB_HOST`
- Value: `your-db-endpoint`
- Key: `DB_USERNAME`
- Value: `your-db-username`
- Key: `DB_PASS`
- Value: `your-db-password`
Learn more about secrets here.
You can name the keys in any way you want. In your code, you can access the DB_* environment variables to connect to and
access the database.
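For example, a minimal sketch in Python using psycopg2, assuming the secret names above (plus a hypothetical `DB_NAME` secret holding the name of the database you created):

```python
import os

import psycopg2

# DB_HOST, DB_USERNAME and DB_PASS are the secrets configured above;
# DB_NAME is an extra, hypothetical secret holding your database name.
conn = psycopg2.connect(
    host=os.environ["DB_HOST"],
    user=os.environ["DB_USERNAME"],
    password=os.environ["DB_PASS"],
    dbname=os.environ.get("DB_NAME", "postgres"),
)

with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone())
```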
Repeat the above process for each RDS instance you want to create in each environment - test, staging, production,
etc.
That’s it!
Use pre-existing RDS Instance
1. Set up VPC peering
Your LocalOps environment is created in its own VPC in the chosen AWS account and region. For the same application, if
you already have an RDS database instance running in the same or a different AWS account, you can connect to and access it
from the corresponding LocalOps environment using Amazon VPC peering.
VPC peering lets you connect two VPCs from the same or different AWS accounts/regions so that resources in one VPC can
access resources in the other as if they belong to the same network.
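For reference, here is a rough outline of what the peering setup involves, sketched with boto3 and placeholder IDs and CIDRs. The exact route table, CIDR and security group changes depend on your setup, so treat this as an illustration rather than a complete recipe.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Request a peering connection from the LocalOps environment VPC to the
#    VPC hosting your existing RDS instance (all IDs below are placeholders).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0localopsenv00000",
    PeerVpcId="vpc-0existingrds0000",
    # PeerOwnerId / PeerRegion are also needed for cross-account or
    # cross-region peering.
)
peering_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# 2. Accept the request (run from the peer account/region if different).
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)

# 3. Add routes in both VPCs' route tables so traffic can flow between them.
ec2.create_route(
    RouteTableId="rtb-0localopsprivate0",
    DestinationCidrBlock="172.31.0.0/16",  # CIDR of the RDS VPC (placeholder)
    VpcPeeringConnectionId=peering_id,
)

# You will also need to allow ingress from the environment's VPC CIDR
# (10.0.0.0/16) in the security group attached to your RDS instance.
```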
Ping us on Slack / write to us at help@localops.co / book a call with our team
to guide you through this.
2. Add DB endpoint and credentials as secrets
The last step is to give your service(s) access to the RDS database. In the LocalOps console, navigate to the Service settings
within the corresponding environment.
In the secrets section, add new key-value pairs like:
- Key: `DB_HOST`
- Value: `your-db-endpoint`
- Key: `DB_USERNAME`
- Value: `your-db-username`
- Key: `DB_PASS`
- Value: `your-db-password`
Learn more about secrets here.
You can name the keys in any way you want. In your code, you can access the DB_* environment variables to connect to and
access the database.
That’s it. 🎉