Declarative & Automatic using ops.json:
For your service, if you need one or more RDS instances, you can add `ops.json` in the root directory of the GitHub repo you've connected for the service. Learn more about configuring dependencies using `ops.json` here.
- `id` - Alphanumeric string. Must be unique among the instances you've declared above. Changing this string will replace the original instance with a new one.
- `prefix` - Alphanumeric string. Used as a prefix in the name of your instance. Changing this string will replace the original instance with a new instance.
- `engine` - Provide either `postgres` or `mysql`. Default: `postgres`
- `version` - Pick a version for your Postgres or MySQL database. Ensure RDS supports it by referring to the AWS docs - Postgres / MySQL. Default: `""`, which provisions the latest version as decided by AWS.
- `storage_gb` - Specify the amount of storage you need, in GB. You can set one value now and change it later. Default: `10`
- `instance_type` - Specify any instance class in the following groups - "db.t4g.*", "db.t3.*", "db.m5.*". Refer to the AWS docs for available sizes. Default: `db.t4g.small`
- `publicly_accessible` - Set it to `true` if you need this database to be publicly available on the internet. You will want this to be `false` most of the time. Default: `false`
- `multi_az` - Specifies whether the RDS instance is multi-AZ.
- `maintenance_window` - The window in which to perform maintenance. Syntax: "ddd:hh24:mi-ddd:hh24:mi". Eg: "Mon:00:00-Mon:03:00". See the RDS Maintenance Window docs for more information.
- `exports` - Set of key-value pairs. Keys are the environment variables we will pass to your code/containers. Values are properties of the provisioned RDS instance. See below for available properties, and a full example after the exports list.
Changing the `id` or `prefix` string will delete & replace the original RDS instance with a new one.

exports:
- `$name` - Name of the RDS instance.
- `$arn` - Amazon Resource Name of the RDS instance. Eg., `arn:rds:..`
- `$endpoint` - The connection endpoint in `address:port` format.
- `$address` - Specifies the DNS address of the DB instance.
- `$dbName` - Specifies the database name created under the RDS instance. This is auto-created for you, so you won't specify this name explicitly anywhere in your config.
- `$username` - Specifies the username of the database user. You will typically use this in your code's DB connection string to access the database `$dbName`.
- `$passwordArn` - The password of the user `$username` is automatically generated, encrypted and maintained by RDS in AWS Secrets Manager. Your code has to read the password from AWS Secrets Manager using this `$passwordArn`.
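Putting these together, a minimal `ops.json` might look like the sketch below. The `dependencies` object is described in the Lifecycle section; the `rds` key, the field values and the environment variable names here are illustrative assumptions - refer to the `ops.json` reference linked above for the authoritative schema.

```json
{
  "dependencies": {
    "rds": [
      {
        "id": "primarydb",
        "prefix": "myapp",
        "engine": "postgres",
        "version": "",
        "storage_gb": 10,
        "instance_type": "db.t4g.small",
        "publicly_accessible": false,
        "multi_az": false,
        "maintenance_window": "Mon:00:00-Mon:03:00",
        "exports": {
          "DB_HOST": "$address",
          "DB_NAME": "$dbName",
          "DB_USER": "$username",
          "DB_PASSWORD_ARN": "$passwordArn"
        }
      }
    ]
  }
}
```

With a declaration like this, your containers would receive `DB_HOST`, `DB_NAME`, `DB_USER` and `DB_PASSWORD_ARN` as environment variables on the next deployment.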
Lifecycle:
If you have provided `ops.json` at the root of the git repository, it is processed when a service in any of your active environments points at that repository as its source and a deployment is triggered.
- When the service is spun up for the first time, or when a new deployment is triggered, `ops.json` is parsed for processing. Resources declared in the `dependencies` object are provisioned before your code starts to run.
- Resources with the same `id` are provisioned only once for the life of the service, and updated when one of the properties above changes.
- Keys in the `exports` object are passed as environment variables to your service.
- When the service is deleted, the provisioned RDS instances are deleted from the cloud account immediately & automatically.
Private only access:
RDS databases can be accessed from your code as usual, using your SQL-compatible DB libraries or ORMs. All RDS instances are created only in the private subnets of the same VPC where your environment is running, and they are attached with the following security group ingress rule:
- From source: `10.0.0.0/16` (your environment's VPC CIDR IP range)
- At port: `5432` (for Postgres) or `3306` (for MySQL)
- Protocol: `TCP`
Reading Database password:
We have configured RDS to create and encrypt a unique random password and store it in your AWS account's Secrets Manager. You can capture the ARN of this secret in an environment variable as shown above, and fetch the password value using the AWS SDK.
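As an illustration, here is a minimal Python sketch that reads the secret with boto3 and opens a Postgres connection with psycopg2. It assumes the illustrative `DB_PASSWORD_ARN`, `DB_HOST`, `DB_NAME` and `DB_USER` export names from the `ops.json` sketch earlier; substitute whatever keys you configured in `exports`.

```python
import json
import os

import boto3
import psycopg2  # or any SQL-compatible driver / ORM of your choice

# Environment variables injected from the `exports` block (names are illustrative).
password_arn = os.environ["DB_PASSWORD_ARN"]

# Fetch the secret value from AWS Secrets Manager using the secret's ARN.
secrets = boto3.client("secretsmanager")
secret_string = secrets.get_secret_value(SecretId=password_arn)["SecretString"]

# Depending on how the secret is stored, SecretString may be the raw password
# or a JSON document containing a "password" field; handle both here.
try:
    password = json.loads(secret_string)["password"]
except (ValueError, KeyError, TypeError):
    password = secret_string

# Connect over the private VPC network (port 5432 for Postgres, 3306 for MySQL).
conn = psycopg2.connect(
    host=os.environ["DB_HOST"],
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USER"],
    password=password,
)
```

Note that the AWS credentials available to your service need `secretsmanager:GetSecretValue` permission on this secret for the call to succeed.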
Pre-configured for production use:
All RDS instances are pre-configured for production use.
- Daily backup is enabled, with 30-day retention for each backup.
- Encryption is enabled to safeguard data at rest.
- A backup is created automatically when the instance is about to be deleted.
Manually provision new RDS Instance:
To proceed with this guide, you will need access to:
- AWS console, with permissions to create RDS databases
- LocalOps console
1. Fetch the VPC and private subnet IDs
To give your environment and service(s) access to an RDS database, you need to create the RDS database within the same VPC and region used by the environment. Navigate to the LocalOps console and open the environment you created. In the environment's overview section:
- Pick/copy the VPC ID and private subnet IDs
- Note down the region
2. Create RDS database
Log in to the same AWS account and region where you created the environment using LocalOps, and create a new RDS database with the appropriate engine - Postgres/MySQL. You will have to create a new subnet group before creating the RDS instance. While creating the subnet group, use the VPC ID and private subnet IDs you copied above.

For the same app, if you already have an RDS database in a different AWS account, a different VPC or a different region, you can use that one and skip creating a new RDS database. We can connect your environment/services and the database via VPC peering. Learn more about using an existing database below.
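If you prefer to script this step instead of clicking through the AWS console, a rough boto3 sketch is shown below. All identifiers, names and the master password are placeholders; the console flow described above remains the documented path.

```python
import boto3

# Region noted from the environment's overview section (placeholder).
rds = boto3.client("rds", region_name="us-east-1")

# 1. Create a subnet group using the private subnet IDs copied from LocalOps.
rds.create_db_subnet_group(
    DBSubnetGroupName="myapp-private-subnets",         # placeholder name
    DBSubnetGroupDescription="Private subnets of the LocalOps environment VPC",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholder subnet IDs
)

# 2. Create the RDS instance inside that subnet group (and therefore the same VPC).
rds.create_db_instance(
    DBInstanceIdentifier="myapp-staging-db",            # placeholder identifier
    Engine="postgres",                                  # or "mysql"
    DBInstanceClass="db.t4g.small",
    AllocatedStorage=10,
    MasterUsername="appuser",                           # placeholder
    MasterUserPassword="change-me-please",              # placeholder; use a strong secret
    DBSubnetGroupName="myapp-private-subnets",
    PubliclyAccessible=False,
    StorageEncrypted=True,
)
```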
3. Add DB endpoint and credentials as secrets
The last step is to give your service(s) access to the RDS database. In the LocalOps console, navigate to the Service settings within the corresponding environment. In the secrets section, add new key-value pairs, like the `DB_*` environment variables, so your code can connect to and access the database.
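For example, you might add pairs like the following (the key names are illustrative; use whatever names your code expects):

```
DB_HOST=<RDS endpoint address>
DB_PORT=5432
DB_NAME=<database name>
DB_USER=<database user>
DB_PASSWORD=<database password>
```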
Repeat the above process for each RDS instance you want to create, for each environment - test, staging, production, etc.
That’s it!
Use pre-existing RDS Instance
1. Set up VPC peering
Your LocalOps environment is created in its own VPC in the chosen AWS account and region. For the same application, if you already have an RDS database instance running in the same or a different AWS account, you can connect to and access it from the corresponding LocalOps environment using Amazon VPC peering. VPC peering lets you connect two VPCs from the same or different AWS accounts/regions so that resources in one VPC can access resources in the other as if they belong to the same network. Ping us on Slack / write to us at help@localops.co / book a call with our team to guide you through this.

2. Add DB endpoint and credentials as secrets
The last step is to give your service(s) access to the RDS database. In the LocalOps console, navigate to the Service settings within the corresponding environment. In the secrets section, add new key-value pairs, like the `DB_*` environment variables shown above, so your code can connect to and access the database.
That’s it. 🎉