Sep 9, 2025
Run any Helm chart as a service
You can now spin up a new service using any public Helm chart. While creating a new service, just pick “Helm chart” as the Source and provide:
- Helm repo URL
- Helm chart name
- values.yaml
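For example, to run Postgres you could point the service at the public Bitnami chart. The values below are illustrative only (they use standard Bitnami postgresql chart keys, not LocalOps defaults); use whatever chart and values you actually need:

```yaml
# Illustrative inputs for a Helm chart service
# Helm repo URL:   https://charts.bitnami.com/bitnami
# Helm chart name: postgresql

# values.yaml passed to the chart (keys from the Bitnami postgresql chart)
auth:
  username: app
  database: app_db
primary:
  persistence:
    size: 8Gi
```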
Interacting with the chart and its internal services:
Say you want to run PostgreSQL as a service using its Helm chart.

Case 1: Other services using the chart’s services

Say you want your NodeJS app to use Postgres as its database. Another service (the NodeJS service) in the same environment can use the PostgreSQL database running as part of the Helm chart installation. Using the “service alias” of the Helm chart, you can frame the hostname of the chart’s service (postgresql) from the service alias and <service-name>, where <service-name> is the name of the service that runs as part of the Helm chart installation and is of type NodePort or ClusterIP. Refer to your Helm chart’s documentation to find the <service-name>.

For our current example: you can add this hostname as a DB_HOST secret under the Secrets section of the NodeJS app service and use it in your code via an env var.

Case 2: Charts using another chart’s services
Say you want to run Temporal and have it use Postgres as its database. A Helm chart running within the same environment can use services running as part of another Helm chart by using their service alias as an alias for the hostname, provided in its values.yaml configuration wrapped in {{ and }}. hosts here is a helper function we support today that brings in the internal DNS hostname of any service running within the same LocalOps environment.

For the current example, run the Temporal chart and the Postgres chart as services within your LocalOps environment, and provide the following in the Temporal chart’s values.yaml configuration.
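A minimal sketch of what that could look like. The hosts argument format and the names below (postgres-chart as the Postgres chart’s service alias, postgresql as its internal service name) are assumptions for illustration; copy the real alias from your LocalOps console and check the Temporal chart’s values reference:

```yaml
# Hypothetical snippet of the Temporal chart's values.yaml.
# "postgres-chart/postgresql" is a placeholder for <service-alias>/<service-name>.
server:
  config:
    persistence:
      default:
        sql:
          driver: "postgres12"
          host: '{{ hosts "postgres-chart/postgresql" }}'  # resolved to the internal DNS hostname by LocalOps (assumed syntax)
          port: 5432
          user: temporal
```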
Aug 29, 2025
Run any Docker image as a service
You can now spin up a service using any pre-built Docker image. While creating a new service, just pick “Docker image” as the Source and provide:
- Docker registry URL
- Docker image name and tag (defaults to latest)

All options supported in a traditional service are supported in Docker image services too. You can:
- Run it as a web, worker or internal service, or as a job or cron
- Configure environment variables and secrets
- Specify resource requirements like CPU and memory
- Run any number of copies of the container to scale it based on load
- Attach a custom domain if the service is of type web
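As a concrete illustration, the inputs for running a public nginx image might look like the sketch below. This is just the console form summarized as YAML for readability, not a file LocalOps reads, and the image and values are examples only:

```yaml
# Illustrative summary of the console inputs (not a real LocalOps config file)
registry: docker.io      # public Docker Hub
image: library/nginx     # any pre-built image your environment can pull
tag: "1.27"              # omit to default to "latest"
type: web                # web | worker | internal | job | cron
replicas: 2              # run multiple copies to handle more load
```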
July 30, 2025
Ephemeral database, queues, cache and workers for Preview environments
You can now spin up temporary copies of databases, cache and queue servers for every PR you test using preview environments.

Supported services:
- Databases: MySQL, Postgres
- Cache: Redis, Memcache
- Queues: RabbitMQ
June 27, 2025
New UI, 1-click service deployments and more
We shipped a brand new UI this week. It is built on a consistent, modern design language that lets devs navigate between environments & services faster. And we believe it looks way cleaner and sexier than ever. It comes in light and dark themes, so you can pick what you prefer. Sign in to LocalOps again and let us know what you think!

1-click service deployments:
From now on, you can trigger a new deployment directly from the same place where you create the service. Earlier, this was a separate step after the service was created. This minor change lets devs spin up a new service instantly without having to perform any other step. Plus, we took this opportunity to make a ton of bug fixes throughout the console that make the overall experience far more stable than ever.

June 18, 2025
Rollback deployments
You can now roll back to any previous commit directly from the LocalOps console.

In the Services > Deployments tab, just press the “Re-deploy” button on any past successful deployment. We’ll fetch that commit, rebuild, and redeploy it, restoring your app exactly as it was.

Previously, we relied on developers to push a new commit to reverse changes. But in reality, that’s often painful: reverting complex changes in a fresh commit isn’t always straightforward. So we’ve built an explicit rollback mechanism, a safer, cleaner, and faster way to go back to a known-good version.

Read more on our release blog post here.

June 9, 2025
Configure cloud resources using ops.json
You can add a new file called ops.json to your Git repo to declare cloud resources for services to use, including AWS S3 buckets, SNS topics, SQS queues, RDS instances, ElastiCache clusters, etc. LocalOps will provision these resources in your cloud account with production-grade security when the corresponding service is deployed.
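A purely hypothetical sketch of what such a declaration might look like; the real ops.json schema is defined in the LocalOps documentation, and every key below is invented for illustration:

```json
{
  "_note": "hypothetical schema, for illustration only",
  "resources": [
    { "type": "s3_bucket", "name": "uploads" },
    { "type": "sqs_queue", "name": "email-jobs" },
    { "type": "rds_postgres", "name": "app-db", "instance_class": "db.t4g.micro" }
  ]
}
```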
May 29, 2025
Introducing AI Code reviews
We are releasing a beta version of AI code reviews in LocalOps today, to make code reviews a blissful process for your team instead of a necessary bottleneck. You can turn on AI code reviews when you create a new service within LocalOps. Just flip this switch and our AI will start listening on all your pull requests.

May 22, 2025
May 13, 2025
Introducing Ephemeral environments for Pull requests
We just pushed a major capability today. We have released pull request previews to unblock your team to deploy and test each pull request in an isolated setup. In your current app environment, say you have two services:
- backend
- frontend

May 9, 2025
Faster builds + Deploy Microservices and Cloud Resources on AWS
Faster builds
We just pushed a major change to our build setup today. From now on, the build infrastructure that pulls your code and builds the Docker image from your Dockerfile is dedicated, isolated and separate from the infrastructure used by neighbouring accounts within LocalOps. This means that when you push your code, it gets pulled and built right away. Builds no longer have to wait for their turn.

Deploy Microservices
Each service that exposes a port gets a new, unique service alias. It is a string that you can see in the Service screen and use as an alias for that service’s internal DNS host name. You can add this service alias as a secret in the dependent service. At run time, LocalOps replaces that string with the actual internal DNS host name. Your code can simply consume the secret as an environment variable and connect to the service. Learn more in our docs here - https://docs.localops.co/micro-services
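For instance, if a Postgres service’s alias were billing-db-alias (a made-up value; copy the real alias from the Service screen), the dependent service’s secret could look like this sketch:

```yaml
# Hypothetical secret on the dependent service.
# At run time LocalOps swaps the alias string for the real internal DNS host name,
# so the app just reads DATABASE_HOST from its environment.
DATABASE_HOST: "billing-db-alias"
```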
Extend with cloud resources
You may want to create an RDS database, an S3 bucket, an SNS topic or an SQS queue to fulfil business logic. We wanted to make this process easier so that you can write Terraform or Pulumi scripts to create these resources for each of your environments. So we have exposed the VPC and subnet IDs that we provision and manage in your AWS account. You can use these IDs as variables in your IaC stack (Terraform/Pulumi/OpenTofu) to create any cloud resource you need in AWS. Read more in our blog post.

Apr 25, 2025
Announcing Continuous deployments with GitHub Integration
You can now connect your GitHub repositories and deploy just by doing git push.

To get started, spin up a new environment and create a new service. When you create a new service, configure a specific Git repository and branch name. Whenever your team pushes new code to that branch, LocalOps will automatically pull the latest commit, build it and push a new deployment to your environment. We will show build & deployment logs right there in the UI.

You can create any number of services for any purpose:
- Web service - front end or back end
- Internal service
- Worker
- Job
- Cron job
Mar 5th, 2025
Introducing Projects
🎉 We are introducing “Projects” to organize environments by teams, services, customers, regions, etc. You can create a project for:
- Different module teams - Inference, Billing, Auth, Frontend, Backend, etc.
- Production/staging environments - EU, US, AUS, etc.
- Different customers - in the case of single-tenant environments like Customer1-EU, Customer2-AWS, etc.

Jan 27th, 2025
See CloudWatch metrics/logs in one place
When we provision an environment, we provision a Grafana dashboard specific to that environment, accessible at monitor.<your-app-env-domain>. You can access metrics and logs of all nodes, pods and containers running within the app environment’s Kubernetes cluster.

If you are running your environment in AWS, you will also see a pre-configured CloudWatch data source in the corresponding Grafana dashboard, where you can access logs and metrics stored in CloudWatch. This is super useful if your application uses managed AWS services that send metrics/logs to CloudWatch.

Go to the “Monitor” tab in your environment to access the Grafana dashboard. Within Grafana, go to Connections > Data sources to see the CloudWatch data source. Click “Explore” next to it to see CloudWatch logs and metrics.

Let us know what you think. Email us at help@localops.co.

January 24th, 2025
Standard retention policy set on all logs and metrics
Every environment comes with its own unique instance of the Loki + Prometheus + Grafana stack. From now on, all logs and metrics recorded by the monitoring stack have a 30-day retention policy. Any log statement or metric data point is automatically deleted after 30 days. This significantly reduces the risk of unintentional data leaks when developers accidentally print sensitive information to logs.

January 23rd, 2025
Access any AWS managed service using IAM Role
Your app container running inside an environment can now access any AWS service using IAM roles. You don’t have to configure any IAM key/credential as secrets in your application.

LocalOps sets up a default IAM role when your environment is provisioned in the target AWS account. We set things up in your environment such that your application can use this IAM role implicitly via the AWS SDK, if you declare the ServiceAccount in your Helm deployment YAML spec as per our documentation.

After the environment is provisioned, you can manually add/remove relevant IAM policies on the app-specific IAM role we provisioned earlier.
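A minimal sketch of the relevant part of a Helm deployment template. The ServiceAccount name shown is an assumption; use the one specified in the LocalOps documentation for your environment:

```yaml
# Hypothetical deployment.yaml fragment from your Helm chart.
# "app-service-account" is a placeholder; LocalOps docs define the real ServiceAccount name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: app-service-account  # lets the pod assume the environment's IAM role via the AWS SDK
      containers:
        - name: my-app
          image: my-app:latest
```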
January 14th, 2025
Scale environments
