Support
Chat with us
Start a quick conversation with our team from the bottom right corner of this page.
Frequently Asked Questions
Is there any lock-in with LocalOps?
No. There is no lock-in at all. When you stop using LocalOps, environments are left as they are, in your cloud, for your team to manage directly. Check how to eject to learn more.
Can I use my own databases?
Yes. You are free to provision any database you need for your services. Just add their HOST, PORT, username and password as secrets within your Service settings. They will be passed down as environment variables to your containers.
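As a minimal sketch, assuming you named the secrets DB_HOST, DB_PORT, DB_USERNAME and DB_PASSWORD (the exact keys are whatever you configured in Service settings), your container can read them at startup like this:

```python
import os

# Hypothetical secret names -- use whichever keys you configured in Service settings.
# Each secret is exposed to the container as an environment variable.
db_host = os.environ["DB_HOST"]
db_port = int(os.environ.get("DB_PORT", "5432"))
db_user = os.environ["DB_USERNAME"]
db_password = os.environ["DB_PASSWORD"]

# Assemble a connection URL for your database driver of choice (Postgres shown here).
database_url = f"postgresql://{db_user}:{db_password}@{db_host}:{db_port}/myapp"
```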
Can I deploy my application on any cloud?
Yes, provided you have containerised your application and have a Dockerfile for it. However, if you use managed cloud services such as Amazon S3, you must provide alternate implementations to run on other cloud providers such as GCP or Azure.
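One common way to handle this, sketched below with hypothetical bucket and environment variable names, is to keep cloud-specific calls behind a small interface so the same service can run against S3 on AWS and Cloud Storage on GCP (assuming the boto3 and google-cloud-storage client libraries):

```python
import os
from abc import ABC, abstractmethod


class BlobStore(ABC):
    """Cloud-agnostic interface the rest of your service depends on."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...


class S3Store(BlobStore):
    def __init__(self, bucket: str):
        import boto3  # AWS SDK
        self.client = boto3.client("s3")
        self.bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self.client.put_object(Bucket=self.bucket, Key=key, Body=data)


class GCSStore(BlobStore):
    def __init__(self, bucket: str):
        from google.cloud import storage  # GCP SDK
        self.bucket = storage.Client().bucket(bucket)

    def put(self, key: str, data: bytes) -> None:
        self.bucket.blob(key).upload_from_string(data)


def make_store() -> BlobStore:
    # Hypothetical env vars you could set per environment to pick the implementation.
    provider = os.environ.get("CLOUD_PROVIDER", "aws")
    bucket = os.environ["STORAGE_BUCKET"]
    return S3Store(bucket) if provider == "aws" else GCSStore(bucket)
```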
How do I protect my source code in customer environments?
There is no fool-proof technical way to hide or obfuscate your code. If your service is written in a compiled language, that helps. Otherwise, we'd suggest you address this legally in your service agreement with your customer.
How can I find out the cost of each environment?
LocalOps tags all cloud infrastructure resources with two standard tags, *-id and *-name. You can locate them in the Environment dashboard. Using these tags, you can filter cloud resources in the AWS console, either in the Cost management console or in Resource explorer, and find out their cost. LocalOps will soon show a monthly cost estimator under each environment's details page.
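For example, here is a small sketch that lists every AWS resource carrying an environment's tag using boto3; the tag key and value below are placeholders, so copy the exact *-id / *-name keys and their values from the Environment dashboard:

```python
import boto3

# Placeholder tag key and value -- replace with the actual "*-id" tag
# shown on the environment's dashboard.
TAG_KEY = "localops-environment-id"
TAG_VALUE = "env-1234"

# List every resource in the account that carries this tag.
tagging = boto3.client("resourcegroupstaggingapi")
pages = tagging.get_paginator("get_resources").paginate(
    TagFilters=[{"Key": TAG_KEY, "Values": [TAG_VALUE]}]
)
for page in pages:
    for resource in page["ResourceTagMappingList"]:
        print(resource["ResourceARN"])
```

The same tag can also be used as a filter in the Cost management console to break down spend per environment.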
Who pays for the cloud infrastructure?
LocalOps provisions environments in the cloud accounts you connect. Account administrators of the respective cloud account pay the bills raised by the cloud provider.