Your service(s) may want to store and access files in S3. This guide shows you how to create S3 buckets and access them from your services.

Declarative & Automatic using ops.json:

If your service needs one or more S3 buckets, you can add an ops.json file in the root directory of the GitHub repo you’ve connected for the service.

Learn more about configuring dependencies using ops.json here.

Then add S3 buckets as a dependency, like below.

{
  "dependencies": {
    "s3": {
      "buckets": [
        {
          "id": "test123",
          "prefix": "testdep123",
          "exports": {
            "MY_BUCKET_NAME1": "$name",
            "MY_BUCKET_ARN1": "$arn"
          }
        }
      ]
    }
  }
}

You can add as many buckets as you want.
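
For example, a declaration with two buckets might look like the following. The ids, prefixes and export names here are just placeholders you would choose yourself:

{
  "dependencies": {
    "s3": {
      "buckets": [
        {
          "id": "uploads",
          "prefix": "myapp-uploads",
          "exports": {
            "UPLOADS_BUCKET_NAME": "$name"
          }
        },
        {
          "id": "backups",
          "prefix": "myapp-backups",
          "exports": {
            "BACKUPS_BUCKET_NAME": "$name",
            "BACKUPS_BUCKET_ARN": "$arn"
          }
        }
      ]
    }
  }
}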

Arguments you can add to each bucket object above:

  1. id - Alphanumeric string. Must be unique among the buckets you’ve declared above.
  2. prefix - Alphanumeric string. Used as a prefix in the name of your bucket.
  3. exports - Set of key-value pairs. Keys are the environment variables we will pass to your code / containers. Values are properties of the provisioned bucket. See below for available properties.

Note: changing the id or prefix string will delete the original bucket and replace it with a new one.

Available properties of the bucket to use in exports:

  1. $name - Name of the bucket
  2. $arn - Amazon Resource Name of the bucket. E.g., arn:aws:s3:::your-bucket-name
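
With the example ops.json above, your containers would receive MY_BUCKET_NAME1 and MY_BUCKET_ARN1 as environment variables. A quick way to confirm this from your code, sketched here in Python:

import os

# Env var names match the keys of the "exports" object in ops.json
print(os.environ.get("MY_BUCKET_NAME1"))  # bucket name, starting with the prefix you chose
print(os.environ.get("MY_BUCKET_ARN1"))   # e.g. arn:aws:s3:::<bucket-name>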

Lifecycle:

If you have provided ops.json at the root of the git repository, it is processed whenever a deployment is triggered for a service in any of your active environments that uses that repository as its source.

  • When the service is spun up for the first time, or when a new deployment is triggered, ops.json is parsed for processing. Resources declared in the dependencies object are provisioned before your code starts to run.
  • Resources with the same id are provisioned only once for the life of the service, and updated when one of the properties above changes.
  • Keys in the exports object are passed as environment variables to your service.
  • When the service is deleted, the provisioned buckets are deleted from the cloud account immediately & automatically.

Access with no keys:

Buckets can be accessed from your code using the AWS SDK. You don’t have to pass any AWS secret key to authenticate. Your containers automatically authenticate and have access to all the buckets you’ve declared in the above JSON.
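
For instance, here is a minimal sketch of writing and reading an object, assuming a Python service using boto3 and the MY_BUCKET_NAME1 export from the ops.json example above:

import os

import boto3

# Bucket name is injected as an environment variable via "exports" in ops.json
bucket_name = os.environ["MY_BUCKET_NAME1"]

# No access key or secret key is configured here - boto3 resolves credentials
# from the IAM role attached to your environment's containers.
s3 = boto3.client("s3")

s3.put_object(Bucket=bucket_name, Key="hello.txt", Body=b"hello from my service")
response = s3.get_object(Bucket=bucket_name, Key="hello.txt")
print(response["Body"].read())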

LocalOps arranges role-based access by attaching an IAM role to all the services running within your environment. We provision a policy like the one below on that IAM role for every bucket provisioned.

{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:DeleteObject",
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::your-bucket-name",
    "arn:aws:s3:::your-bucket-name/*"
  ]
}

Curious to know the IAM role that is used? Visit your environment’s main dashboard in the LocalOps console to see the role.

Manual provisioning of new S3 buckets:

To proceed with this guide, you would need access to:

  1. AWS console with permissions to create S3 buckets and attach policies to an existing IAM role.
  2. LocalOps console

1. Create S3 bucket

Log in to the AWS console and create new S3 bucket(s) in the region you want, with an appropriate unique name.

You can follow this official AWS guide to create S3 buckets.

2. Add permission to access S3 bucket

LocalOps creates a new IAM role in your AWS account for your services/containers to use, so that your code/containers can access AWS resources without passing AWS_ACCESS_KEY or AWS_SECRET_KEY.

Navigate to the LocalOps console and open the corresponding environment. In the overview section, you can find the APP ROLE, whose name ends with app-role. Copy it and search for it in the AWS IAM console > Roles section. Once you find the IAM role in the IAM console, the next step is to add a policy to it to give the role permission to access the S3 bucket you created.

Create a new IAM policy for the S3 bucket:

  1. Navigate to Policies in AWS IAM console
  2. Create a new IAM policy and give it a specific name
  3. Add the JSON below as the policy statement:
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:PutObject",
    "s3:DeleteObject",
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::your-bucket-name",
    "arn:aws:s3:::your-bucket-name/*"
  ]
}

Replace your-bucket-name with your bucket name.

  4. Press save.

Add the new IAM policy to the IAM role

Navigate back to the IAM role you saw earlier. In its Permissions tab, choose the option to “Add Policy”. In the policy picker, select the policy you created above and add it.

3. Add bucket details as secret

The last step is to tell your service about the new S3 bucket it can start accessing. In the LocalOps console, navigate to the Service settings within the corresponding environment.

In the secrets section, add a new key-value pair like:

- Key: `attachments-bucket`
- Value: `your-bucket-name`

Learn more about secrets here.

You can name the key any way you want. In your code, read the attachments-bucket environment variable to get the name of the bucket you created, and pass it to the AWS SDK to read/write data. There is no need to provide credentials such as an access key, since role-based access control is already in place here.
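
For example, here is a minimal sketch assuming a Python service using boto3 and the attachments-bucket key chosen above:

import os

import boto3

# Bucket name comes from the secret you added in the LocalOps console
bucket_name = os.environ["attachments-bucket"]

# Credentials are resolved from the APP ROLE attached to the environment,
# so no access key or secret key is needed here.
s3 = boto3.client("s3")

# List the objects currently in the bucket
for obj in s3.list_objects_v2(Bucket=bucket_name).get("Contents", []):
    print(obj["Key"])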

That’s it! 🎉