Your service(s) may need to store and access data in a Redis or Memcached instance. This guide shows you the steps to create and
access an Amazon ElastiCache cluster, which offers fully managed Redis and Memcached databases.
Declarative & Automatic using ops.json:
If your service needs one or more ElastiCache clusters, you can add an ops.json file in the root directory of your GitHub
repo.
Learn more about configuring dependencies using
ops.json here.
Then add ElastiCache clusters as a dependency, like below.
{
"dependencies": {
"elasticache": {
"clusters": [
{
"id": "test123",
"prefix": "testdep123",
"engine": "redis",
"version": "7.0",
"instance_type": "cache.t4g.small",
"num_nodes": 1,
"exports": {
"MY_ELASTICACHE_CLUSTER_NAME": "$name",
"MY_ELASTICACHE_CLUSTER_ARN": "$arn",
"MY_ELASTICACHE_CLUSTER_ENDPOINT": "$endpoint",
"MY_ELASTICACHE_CLUSTER_ADDRESS": "$address",
"MY_ELASTICACHE_CLUSTER_PORT": "$port"
}
}
]
}
}
}
You can add as many clusters as you want.
Arguments you can add in each cluster object above:
id - Alphanumeric string. Must be unique amongst the clusters you’ve declared above. Changing this string will
replace the original cluster with a new one.
prefix - Alphanumeric string. Will be used as a prefix in the name of your cluster. Changing this string will
replace the original cluster with a new one.
engine - Provide either redis or memcache. Default: "" (empty). If you leave this out, we will skip creating the
cluster.
version - Pick a version for your redis or memcache database. Ensure ElastiCache supports it by referring to the
AWS docs.
instance_type - Specify any instance class in the “cache.*” family. Refer to the
AWS docs for available
sizes. If you leave this out, we will skip creating the cluster.
num_nodes - The initial number of cache nodes that the cache cluster will have. For Redis, this value must be 1.
For Memcached, this value must be between 1 and 40. If this number is reduced on subsequent runs, the highest
numbered nodes will be removed.
exports - Set of key-value pairs. Keys are the environment variables we will pass to your code / containers. Values are
properties of the provisioned ElastiCache cluster. See below for available properties.
Changing the id or prefix string will delete and replace the original ElastiCache cluster with a new one.
Available properties of the ElastiCache cluster to use in exports:
$name - Name of the ElastiCache cluster
$arn - Amazon resource name of the ElastiCache cluster
$endpoint - The connection endpoint in address:port format.
$address - Specifies the DNS address of the ElastiCache master node.
$port - Specifies the port you will use to access the cluster node.
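As an illustration, your service can read these exported variables at runtime to open a connection. The sketch below assumes the exports block from the ops.json example above and the redis-py client library; swap in your own variable names and client.

```python
import os

import redis  # redis-py client library (one option; any Redis client works)

# Variable names below match the "exports" block in the ops.json example above.
host = os.environ["MY_ELASTICACHE_CLUSTER_ADDRESS"]
port = int(os.environ["MY_ELASTICACHE_CLUSTER_PORT"])

# Add ssl=True only if in-transit encryption is turned on for your cluster.
client = redis.Redis(host=host, port=port)

client.set("greeting", "hello from my service")
print(client.get("greeting"))
```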
Lifecycle:
If you have provided ops.json at the root of the git repository, it will be processed when a corresponding service in
any of your active environments points at that repository as its source and a deployment is triggered.
- When the service is spun up for the first time or when a new deployment is triggered,
ops.json is parsed for processing.
Resources declared in the dependencies object will be provisioned before your code starts to run.
- Resources with the same
id are provisioned only once for the life of the service, and are updated when there is a change in
one of the properties above.
- Keys in the
exports object will be passed as environment variables to your service.
- When the service is deleted, the provisioned ElastiCache clusters are deleted from the cloud account immediately and
automatically.
Private-only access:
ElastiCache clusters can be accessed from your code as usual, using the endpoint or address exports above and your
Redis/Memcached client libraries.
All ElastiCache clusters are created only in the private subnets of the same VPC where your environment is running,
and they are attached to the following security group.
Ingress:
- From source:
10.0.0.0/16 (Your environment’s VPC CIDR IP range)
- At port:
6379 (for Redis) or 11211 (for Memcached)
- Protocol:
TCP
So only workloads/containers/servers within your environment’s VPC can access the ElastiCache cluster.
All ElastiCache clusters are pre-configured for production use.
- Daily backups are enabled, with 30-day retention for each backup.
- Encryption is enabled to safeguard data at rest.
- A backup is created automatically when the cluster is about to be deleted.
For speed and cost-saving reasons, all of the above are turned off in the ephemeral services we create for
pull request previews.
Manual provisioning of a new ElastiCache cluster:
To proceed with this guide, you will need access to:
- AWS console with permissions to create ElastiCache clusters
- LocalOps console
The goal is to create an ElastiCache instance (Redis, for this guide) in AWS and give your environment/service(s) access to
it. Please follow the steps below:
1. Fetch the VPC and private subnet IDs
To give your environment and service(s) access to an ElastiCache instance, you need to create the ElastiCache instance
within the same VPC and region used by the environment.
Navigate to the LocalOps console and open the environment you created. In the environment’s overview section,
- Copy the VPC ID and private subnet IDs
- Note down the region
You will use these in the next step while creating the ElastiCache instance.
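If you want to cross-check the IDs you copied, a minimal boto3 sketch like the one below lists the subnets in that VPC; the VPC ID and region shown are placeholders for the values from your environment’s overview page.

```python
import boto3

# Placeholders - use the values copied from the environment's overview page.
VPC_ID = "vpc-0123456789abcdef0"
REGION = "us-east-1"

ec2 = boto3.client("ec2", region_name=REGION)

# List the subnets in the environment's VPC to cross-check the private subnet IDs.
resp = ec2.describe_subnets(Filters=[{"Name": "vpc-id", "Values": [VPC_ID]}])
for subnet in resp["Subnets"]:
    print(subnet["SubnetId"], subnet["AvailabilityZone"], subnet["CidrBlock"])
```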
2. Create the ElastiCache instance
Log in to the same AWS account and region where you created the environment using LocalOps, and create a new ElastiCache
cluster with the appropriate engine - Redis or Memcached.
You may have to create a subnet group before creating the ElastiCache instance. While creating the subnet group,
use the same VPC ID and private subnet IDs you copied above.
For the same app, if you already have an ElastiCache instance in the same or a different AWS account, VPC, or region, you
can use it and skip creating a new ElastiCache instance. We can connect your environment/services and
the cache via VPC peering. Learn more about using an existing instance below.
You can follow this
official AWS guide
to create your Redis cache.
ElastiCache offers two deployment modes - ElastiCache Serverless and self-designed ElastiCache clusters. The steps to
connect either of them to environments managed by LocalOps are the same.
Record the instance endpoint, username, and password.
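If you prefer scripting over the console, here is a rough boto3 sketch of the same two steps (subnet group, then a single-node Redis cluster). All names and IDs are placeholders, and the parameters mirror the ops.json example earlier in this guide.

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")  # your environment's region

# 1. Create a subnet group from the private subnet IDs copied in step 1 (placeholders).
elasticache.create_cache_subnet_group(
    CacheSubnetGroupName="my-app-cache-subnets",
    CacheSubnetGroupDescription="Private subnets of the LocalOps environment VPC",
    SubnetIds=["subnet-aaa111", "subnet-bbb222"],
)

# 2. Create a single-node Redis cluster inside that subnet group.
elasticache.create_cache_cluster(
    CacheClusterId="my-app-redis",
    Engine="redis",
    EngineVersion="7.0",
    CacheNodeType="cache.t4g.small",
    NumCacheNodes=1,
    CacheSubnetGroupName="my-app-cache-subnets",
    SecurityGroupIds=["sg-ccc333"],  # a security group allowing ingress from the VPC CIDR
)
```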
3. Add instance endpoint and credentials as secrets
The last step is to give your service(s) access to the ElastiCache instance. In the LocalOps console, navigate to the Service settings
within the corresponding environment.
In the secrets section, add new key-value pairs like:
- Key: `REDIS_HOST`
- Value: `your-instance-endpoint`
- Key: `REDIS_USERNAME`
- Value: `your-instance-username`
- Key: `REDIS_PASS`
- Value: `your-instance-password`
Learn more about secrets here.
You can name the keys any way you want. In your code, read the REDIS_* environment variables to connect to and
access the cache.
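For example, with the secrets above exposed as environment variables, a service could connect like this (a sketch using the redis-py client; the ssl and username/password options depend on how your instance is configured):

```python
import os

import redis  # redis-py client library

# Keys below match the secrets added in the LocalOps console above.
client = redis.Redis(
    host=os.environ["REDIS_HOST"],
    port=6379,
    username=os.environ.get("REDIS_USERNAME"),  # only needed if AUTH/RBAC is enabled
    password=os.environ.get("REDIS_PASS"),
    ssl=True,  # drop this if in-transit encryption is disabled on the instance
)

client.ping()  # simple health check
```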
Repeat the above process for each ElastiCache instance you want to create for each environment - test, staging,
production, etc.
That’s it!
Use a pre-existing ElastiCache instance
1. Setup VPC peering
Your LocalOps environment is created in its own VPC in the chosen AWS account and region. For the same application, if
you already have an ElastiCache instance running in the same or a different AWS account, you can connect to it from the
corresponding LocalOps environment using Amazon VPC peering.
VPC peering lets you connect two VPCs from the same or different AWS accounts/regions so that resources in one VPC can access
resources in the other as if they belong to the same network.
Ping us on Slack, write to us at help@localops.co, or book a call with our team
to guide you through this.
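For reference, a same-account, same-region peering setup roughly follows the boto3 sketch below; all VPC, CIDR, and route table IDs are placeholders, and cross-account or cross-region setups need the acceptance step to run from the peer side.

```python
import boto3

ec2 = boto3.client("ec2")  # assumes both VPCs live in the same account and region

ENV_VPC_ID = "vpc-aaaa1111"       # LocalOps environment VPC (placeholder)
CACHE_VPC_ID = "vpc-bbbb2222"     # VPC hosting the existing ElastiCache instance (placeholder)
ENV_VPC_CIDR = "10.0.0.0/16"      # environment VPC CIDR (see ingress rule above)
CACHE_VPC_CIDR = "172.31.0.0/16"  # cache VPC CIDR (placeholder)

# 1. Request and accept the peering connection.
peering = ec2.create_vpc_peering_connection(VpcId=ENV_VPC_ID, PeerVpcId=CACHE_VPC_ID)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# 2. Route traffic across the peering in both directions (route table IDs are placeholders).
ec2.create_route(RouteTableId="rtb-env-private",
                 DestinationCidrBlock=CACHE_VPC_CIDR,
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-cache-private",
                 DestinationCidrBlock=ENV_VPC_CIDR,
                 VpcPeeringConnectionId=pcx_id)

# Also allow the environment's CIDR in the cache's security group ingress rules.
```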
2. Add instance endpoint and credentials as secrets
The last step is to give your service(s) access to the ElastiCache instance. In the LocalOps console, navigate to the Service settings
within the corresponding environment.
In the secrets section, add new key-value pairs like:
- Key: `REDIS_HOST`
- Value: `your-instance-endpoint`
- Key: `REDIS_USERNAME`
- Value: `your-instance-username`
- Key: `REDIS_PASS`
- Value: `your-instance-password`
Learn more about secrets here.
You can name the keys any way you want. In your code, read the REDIS_* environment variables to connect to and
access the cache.
That’s it. 🎉