Declarative & Automatic using ops.json:
For your service, if you need one or more ElastiCache clusters, you can add an `ops.json` file in the root directory of your GitHub repo. Learn more about configuring dependencies using `ops.json` here.
- `id` - Alphanumeric string. Must be unique among the instances you've declared above. Changing this string will delete and replace the original cluster with a new one.
- `prefix` - Alphanumeric string. Used as a prefix in the name of your cluster. Changing this string will delete and replace the original cluster with a new one.
- `engine` - Provide either `redis` or `memcache`. If you leave this out, we will skip creating the cluster.
- `version` - Pick a version for your Redis or Memcached engine. Ensure ElastiCache supports it by referring to the AWS docs.
- `instance_type` - Specify any instance class in the `cache.*` family. Refer to the AWS docs for available sizes. If you leave this out, we will skip creating the cluster.
- `num_nodes` - The initial number of cache nodes that the cache cluster will have. For Redis, this value must be 1. For Memcached, this value must be between 1 and 40. If this number is reduced on subsequent runs, the highest-numbered nodes are removed.
- `exports` - Set of key-value pairs. Keys are the environment variables we will pass to your code/containers. Values are the properties of the provisioned ElastiCache cluster. See below for available properties.
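Putting the fields together, a minimal sketch of what such a declaration could look like (the exact top-level key names such as `elasticache`, and all the values shown, are illustrative assumptions, not a definitive schema - refer to the ops.json documentation for the authoritative shape):

```json
{
  "dependencies": {
    "elasticache": [
      {
        "id": "cache1",
        "prefix": "myapp",
        "engine": "redis",
        "version": "7.0",
        "instance_type": "cache.t3.micro",
        "num_nodes": 1,
        "exports": {
          "REDIS_ENDPOINT": "$endpoint",
          "REDIS_ADDRESS": "$address",
          "REDIS_PORT": "$port"
        }
      }
    ]
  }
}
```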
Changing the `id` or `prefix` string will delete and replace the original ElastiCache cluster with a new one.

`exports`:
- `$name` - Name of the ElastiCache cluster
- `$arn` - Amazon Resource Name (ARN) of the ElastiCache cluster
- `$endpoint` - The connection endpoint, in `address:port` format
- `$address` - The DNS address of the ElastiCache master node
- `$port` - The port you will use to access the cluster node
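Inside your service, these exported values arrive as ordinary environment variables. A minimal sketch of splitting an `address:port` endpoint, assuming you mapped `$endpoint` to a hypothetical `CACHE_ENDPOINT` key of your own choosing:

```python
import os

# CACHE_ENDPOINT is a hypothetical export key you would map to $endpoint
# in ops.json, e.g. "my-cluster.abc123.use1.cache.amazonaws.com:6379".
endpoint = os.environ.get("CACHE_ENDPOINT", "localhost:6379")

# rsplit from the right guards against ":" appearing earlier in the hostname.
address, port_str = endpoint.rsplit(":", 1)
port = int(port_str)

print(f"cache at {address}, port {port}")
```

If you export `$address` and `$port` separately, you can skip the split entirely and read each variable on its own.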
Lifecycle:
If you have provided `ops.json` at the root of the git repository, it is processed when a service in any of your active environments points at that repository as its source and a deployment is triggered.
- When the service is spun up for the first time, or when a new deployment is triggered, `ops.json` is parsed for processing. Resources declared in the `dependencies` object are provisioned before your code starts to run.
- Resources with the same `id` are provisioned only once for the life of the service, and updated when one of the properties above changes.
- Keys in the `exports` object are passed as environment variables to your service.
- When the service is deleted, the provisioned ElastiCache clusters are deleted from the cloud account immediately and automatically.
Private-only access:
ElastiCache clusters can be accessed from your code as usual, using the address above and your DB libraries. All ElastiCache clusters are created only in the private subnets of the same VPC where your environment is running, and they are attached with the following security group ingress rule:
- From source: `10.0.0.0/16` (your environment's VPC CIDR IP range)
- At port: `6379` (for Redis) or `11211` (for Memcached)
- Protocol: TCP
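The default ports above are worth capturing in one place when you wire up health checks or client configuration. A small sketch (the helper name and mapping are ours, derived from the ports listed above):

```python
# Default ElastiCache ports per engine, matching the ingress rule above.
DEFAULT_PORTS = {"redis": 6379, "memcache": 11211}

def default_port(engine: str) -> int:
    """Return the default port for a given cache engine."""
    try:
        return DEFAULT_PORTS[engine.lower()]
    except KeyError:
        raise ValueError(f"unknown engine: {engine!r}")

print(default_port("redis"))     # → 6379
print(default_port("memcache"))  # → 11211
```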
Pre-configured for production use:
All ElastiCache clusters are pre-configured for production use:
- Daily backup is enabled, with 30-day retention for each backup.
- Encryption is enabled to safeguard data at rest.
- A backup is created automatically when the cluster is about to be deleted.
Manual provisioning of a new ElastiCache cluster:
To proceed with this guide, you will need access to:
- AWS console with permissions to create ElastiCache clusters
- LocalOps console

1. Fetch the VPC and private subnet IDs
To give your environment and service(s) access to an ElastiCache instance, you need to create the ElastiCache instance within the same VPC and region used by the environment. Navigate to the LocalOps console and the environment you created. In the environment's overview section:
- Pick/copy the VPC ID and private subnet IDs
- Note down the region
2. Create the ElastiCache instance
Log in to the same AWS account and region where you created the environment using LocalOps, and create a new ElastiCache instance with the appropriate engine - Redis/Memcached. You may have to create a subnet group before creating the ElastiCache instance. While creating the subnet group, use the same VPC ID and private subnet IDs you copied above.
For the same app, if you already have an ElastiCache instance in the same or a different AWS account, VPC, or region, you can use it as-is and skip creating a new one. We can connect your environment/services and the cluster via VPC peering. Learn more about using an existing instance below.
3. Add instance endpoint and credentials as secrets
The last step is to give your service(s) access to the ElastiCache instance. In the LocalOps console, navigate to the Service settings within the corresponding environment. In the secrets section, add new key-value pairs such as `REDIS_*` environment variables to connect to and access the cluster.
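Once the secrets are in place, your code reads them like any other environment variables. A minimal sketch that gathers every `REDIS_*` key (the individual key names such as `REDIS_HOST` are assumptions - use whatever names you entered in the secrets section):

```python
import os

# Collect every secret injected with a REDIS_ prefix. The exact key names
# are whatever you chose in the LocalOps secrets section.
redis_config = {
    key: value
    for key, value in os.environ.items()
    if key.startswith("REDIS_")
}

# REDIS_HOST / REDIS_PORT are hypothetical key names for illustration.
host = redis_config.get("REDIS_HOST", "localhost")
port = int(redis_config.get("REDIS_PORT", "6379"))
print(f"connecting to {host}:{port}")
```

Your Redis or Memcached client library then takes `host` and `port` directly when you construct the connection.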
Repeat the above process for each ElastiCache instance you want to create for each environment - test, staging, production, etc.
That’s it!
Use a pre-existing ElastiCache instance
1. Set up VPC peering
Your LocalOps environment is created in its own VPC in the chosen AWS account and region. For the same application, if you already have an ElastiCache instance running in the same or a different AWS account, you can connect to it from the corresponding LocalOps environment using Amazon VPC peering. VPC peering lets you connect two VPCs from the same or different AWS accounts/regions so that resources in one VPC can access resources in the other as if they belong to the same network. Ping us on Slack, write to us at help@localops.co, or book a call with our team to guide you through this.
2. Add instance endpoint and credentials as secrets
The last step is to give your service(s) access to the ElastiCache instance. In the LocalOps console, navigate to the Service settings within the corresponding environment. In the secrets section, add new key-value pairs such as `REDIS_*` environment variables to connect to and access the cluster.
That’s it. 🎉