Dockerizing your app ensures consistency, portability, isolation, and several other benefits. If you haven’t installed
Docker already, you can find the installation instructions for various operating systems in the
official docs.
Let’s create a simple Node.js script that runs an HTTP server on port 3000 and responds with
Hello World! when the URL / is hit.
app.js
```js
const express = require('express');
const app = express();
const port = process.env.PORT ?? 3000;

app.get('/', (req, res) => {
  res.send('Hello World!');
});

app.listen(port, () => {
  console.log(`Example app listening on port ${port}`);
});
```
We can run this app by running the following command:
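```shell
# install the express dependency (skip if node_modules already exists)
npm install express
# start the server
node app.js
```

Once it starts, the app logs `Example app listening on port 3000`.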
Now, let’s create a Dockerfile for the above app. A Dockerfile is a set of instructions that tells Docker how to get
our app working in our environment.
Dockerfile
```dockerfile
# Use the official Node.js image with version 20 based on Alpine Linux
# Alpine images are smaller in size, hence preferred
FROM node:20-alpine

# Set environment variable for the application's port
# this is used in app.js
ENV PORT=3000

# Set NODE_ENV to production to tell the app and other tools
# that it is running in production mode
# useful to disable debug logging, skip dev-only steps, etc.
ENV NODE_ENV=production

# Set the working directory inside the container to /app
# this will be the root context of our app;
# we will copy files and run the script from inside this folder
WORKDIR /app

# Copy package.json and package-lock.json to the working directory
COPY package.json package-lock.json* ./

# Install dependencies specified in package-lock.json
# using npm ci for a clean install
RUN npm ci

# Copy the rest of the application files to the working directory
# while copying, this respects the entries in .dockerignore
COPY . .

# Expose the application port to be accessible from outside the container
EXPOSE ${PORT}

# Start the application
CMD ["node", "app.js"]
```
As you can see, the Dockerfile is a set of instructions that automate the steps you would take manually to get the app
running on your machine.
Though Docker supports Multi-Stage Builds, we won’t be using them here, since Node.js needs all of the source code
and node_modules to be present while running the application. Prefer multi-stage builds whenever possible.
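For reference, when a Node.js app does have a build step, a multi-stage Dockerfile becomes worthwhile. The sketch below assumes a hypothetical project whose `npm run build` emits compiled output to `dist/` (e.g. a TypeScript app); adjust the paths to your project:

```dockerfile
# Stage 1: install all dependencies (including dev) and build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm ci
COPY . .
# assumes a "build" script exists in package.json
RUN npm run build

# Stage 2: install only production dependencies and run the built output
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package.json package-lock.json* ./
RUN npm ci --omit=dev
# copy only the compiled output from the builder stage
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/app.js"]
```

The final image never contains dev dependencies or source files, only the compiled output and production node_modules.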
In real-world scenarios, you might have more than just an HTTP server. You might have other services running
in parallel too, like schedulers, workers, notification services, gateways, etc. You’ll need to create a Dockerfile for
each and link them while creating the Helm charts.
You might also choose to run everything in the same container, which is generally not recommended, since you lose
isolation, independent scalability, and so on.
It is strongly recommended to have a .dockerignore file so that prebuilt binaries and other non-project files are not
copied from your local machine into the Docker image.
It is essential to ignore folders like node_modules: when installing dependencies, some libraries build native
binaries for your platform. We don’t want those copied into the image, since they would overwrite the binaries built
inside the container while running npm ci, causing the application to break or misbehave.
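A minimal .dockerignore for the Node.js app above might look like this (the exact entries depend on your project):

```
# Ignore locally installed dependencies and logs
node_modules
npm-debug.log

# Ignore other unnecessary files
.git
.gitignore
Dockerfile
docker-compose.yml
```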
This Dockerfile uses a multistage build to compile a Go application in a Golang builder image and then runs it in a
lightweight Alpine image, ensuring a minimal final image size.
Dockerfile
```dockerfile
# Stage 1: Build the Go application
FROM golang:1.18 as builder

# Set the working directory inside the container
WORKDIR /app

# Copy the Go module files and download dependencies
COPY go.mod go.sum ./
RUN go mod download

# Copy the rest of the application code
COPY . .

# Build the Go application
# CGO_ENABLED=0 produces a static binary that runs on Alpine (musl)
RUN CGO_ENABLED=0 go build -o main .

# Stage 2: Create a lightweight container to run the Go application
FROM alpine:latest

# Set the working directory inside the container
WORKDIR /root/

# Copy the Go binary from the builder stage
COPY --from=builder /app/main .

# Expose the application port
EXPOSE 8080

# Command to run the Go application
CMD ["./main"]
```
A .dockerignore file for reference:
```
# Ignore Go build artifacts
main
*.o
*.a
*.so
*.out

# Ignore other unnecessary files
.git
.gitignore
Dockerfile
docker-compose.yml
```
This Dockerfile installs Django dependencies in one stage and then copies the dependencies and application code to a
slim Python image, optimizing the final container size and performance.
Dockerfile
```dockerfile
# Stage 1: Install dependencies
FROM python:3.10-slim as builder

# Set the working directory inside the container
WORKDIR /app

# Copy the requirements file and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Stage 2: Create a lightweight container to run the Django application
FROM python:3.10-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the installed packages from the builder stage
# (pip installs them outside /app, so copying /app alone would miss them)
COPY --from=builder /usr/local/lib/python3.10/site-packages /usr/local/lib/python3.10/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin

# Copy the application code from the builder stage
COPY --from=builder /app /app

# Expose the application port
EXPOSE 8000

# Command to run the Django application
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```
A .dockerignore file for reference:
```
# Ignore Python bytecode files
*.pyc
*.pyo
__pycache__

# Ignore other unnecessary files
.git
.gitignore
Dockerfile
docker-compose.yml

# Ignore local Django settings
local_settings.py
db.sqlite3
```
This Dockerfile leverages a multistage build to install Ruby and system dependencies, precompile assets in a Ruby
builder image, and then run the Rails application in a slimmer Ruby image, reducing the overall image size.
Dockerfile
```dockerfile
# Stage 1: Build the Rails application
FROM ruby:3.1 as builder

# Set the working directory inside the container
WORKDIR /app

# Copy the Gemfile and Gemfile.lock and install dependencies
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Copy the rest of the application code
COPY . .

# Precompile the Rails assets
RUN bundle exec rake assets:precompile

# Stage 2: Create a lightweight container to run the Rails application
FROM ruby:3.1-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the installed gems from the builder stage
# (the ruby images install gems to /usr/local/bundle, outside /app)
COPY --from=builder /usr/local/bundle /usr/local/bundle

# Copy the application code from the builder stage
COPY --from=builder /app /app

# Install system dependencies for Rails
RUN apt-get update && apt-get install -y nodejs

# Expose the application port
EXPOSE 3000

# Command to run the Rails application
CMD ["rails", "server", "-b", "0.0.0.0"]
```
A .dockerignore file for reference:
```
# Ignore Ruby build artifacts
*.gem
*.rbc

# Ignore other unnecessary files
.git
.gitignore
Dockerfile
docker-compose.yml

# Ignore local Rails settings
config/database.yml
db/*.sqlite3
log/*
tmp/*
```
All these Dockerfiles are references for how you might package the application; the actual steps may vary depending on
the application you build and the features you use.
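Going back to our Node.js app, the image can be built for a specific target platform (the image name and tag here are just the ones used in this post):

```shell
docker build --platform linux/amd64 -t node-app:latest .
```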
This command builds the node-app image for the linux/amd64 platform and tags it as latest. If you are testing
your application locally, you can skip the platform flag when building the image:
```shell
# only for testing on a local machine
docker build -t node-app:latest .
```
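Once the image is built, the flags explained below combine into a run command along these lines:

```shell
docker run --rm --name node-app -e PORT=3000 -d -p 3000:3000 node-app
```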
--rm: tells the Docker daemon to clean up the container and remove its file system after the container exits.
--name node-app: names the container node-app.
-e PORT=3000: sets the environment variable PORT inside the container to 3000.
-d: runs the container in detached (background) mode. You can skip this flag to see the logs directly in your terminal
window.
-p 3000:3000: maps port 3000 on your host to port 3000 in the container.
node-app at the end is the name of the image to run.
This command runs your Docker image, mapping port 3000 of the container to port 3000 on your local machine. Once the
service is up, visit http://localhost:3000 to see the app we built running.
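You can also verify from the terminal, assuming the container is running with the port mapping above:

```shell
curl http://localhost:3000/
# → Hello World!
```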
Hurray 🎉! We have now packaged our app for production use with Docker.