Docker has revolutionized how we develop, ship, and run applications. For Node.js developers, containerization ensures consistency across environments, simplifies dependency management, and makes scaling effortless. Docker Compose takes it a step further by letting you define and run multi‑container applications (Node.js + database + cache) with a single command.
This guide walks you through the entire process: writing an optimized Dockerfile, building images, managing environment variables, orchestrating services with Docker Compose, and following best practices for production‑ready containers. You'll also learn how to handle persistent data, networking, and debugging common pitfalls.
We'll use a minimal Express app. Create a folder and add the following files:
{
"name": "docker-node-app",
"version": "1.0.0",
"description": "Simple Node.js app for Docker demo",
"main": "server.js",
"scripts": {
"start": "node server.js"
},
"dependencies": {
"express": "^4.18.2"
}
}
const express = require('express');
const app = express();
const port = process.env.PORT || 3000;
app.get('/', (req, res) => {
res.json({ message: 'Hello from Dockerized Node.js!' });
});
app.listen(port, () => {
console.log(`App listening on port ${port}`);
});
A good Dockerfile uses multi‑stage builds, orders layers for effective caching, and follows security best practices. Let's break it down.
# Stage 1: Build (optional, if you need to compile)
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Install production dependencies only (npm >= 8; on older npm use --only=production)
RUN npm ci --omit=dev
# Stage 2: Runtime
FROM node:18-alpine
RUN addgroup -g 1001 -S nodejs && \
adduser -S nodejs -u 1001
WORKDIR /app
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --chown=nodejs:nodejs . .
USER nodejs
EXPOSE 3000
CMD ["node", "server.js"]
Key points:
node:18-alpine – a small base image.
npm ci – fast, deterministic installs.
A non-root user (USER nodejs) – security best practice.
Keep the COPY . . build context lean by using a .dockerignore:
# .dockerignore
node_modules
npm-debug.log
.git
.env
Dockerfile
.dockerignore
# Build the image
docker build -t node-app .
# Run the container
docker run -p 3000:3000 --name my-node-app node-app
Visit http://localhost:3000 to see the response. Use -d to run in detached mode.
Never hardcode secrets. Use environment variables and pass them to the container.
# Pass env at runtime
docker run -p 3000:3000 -e PORT=4000 -e DB_URL=mongodb://host:27017 node-app
For development, use an .env file (never commit to Git). Docker Compose can read it.
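With plain docker run you can also hand the whole file to the container at run time (the file name and variables here are illustrative):

```shell
# Load every KEY=value line from .env into the container's environment
docker run -p 3000:3000 --env-file .env node-app
```

Because the values live in the file rather than on the command line, they stay out of your shell history.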
Real applications often need a database and a cache. Let's define a docker-compose.yml that includes Node.js, PostgreSQL, and Redis.
version: '3.8'
services:
app:
build: .
ports:
- "3000:3000"
environment:
- NODE_ENV=development
- DB_HOST=postgres
- DB_USER=postgres
- DB_PASSWORD=secret
- DB_NAME=mydb
- REDIS_HOST=redis
depends_on:
- postgres
- redis
volumes:
- .:/app
- /app/node_modules
networks:
- app-network
postgres:
image: postgres:15-alpine
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: secret
POSTGRES_DB: mydb
ports:
- "5432:5432"
volumes:
- postgres-data:/var/lib/postgresql/data
networks:
- app-network
redis:
image: redis:7-alpine
ports:
- "6379:6379"
volumes:
- redis-data:/data
networks:
- app-network
volumes:
postgres-data:
redis-data:
networks:
app-network:
driver: bridge
Now your Node.js app can connect to postgres and redis by hostname. Use docker-compose up to start everything.
Note the anonymous volume /app/node_modules: it prevents the bind mount of your local source from overwriting the node_modules installed inside the container.
Update your server.js to use environment variables for connections.
const express = require('express');
const { Pool } = require('pg');
const Redis = require('ioredis');
const app = express();
const port = process.env.PORT || 3000;
const pool = new Pool({
host: process.env.DB_HOST || 'localhost',
user: process.env.DB_USER || 'postgres',
password: process.env.DB_PASSWORD || 'secret',
database: process.env.DB_NAME || 'mydb',
});
const redis = new Redis({
host: process.env.REDIS_HOST || 'localhost',
});
app.get('/', async (req, res) => {
// Simple health check
res.json({ message: 'Connected to DB and Redis' });
});
app.listen(port, () => console.log(`App running on port ${port}`));
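To show how Redis typically earns its place in this stack, here is a minimal cache-aside helper sketch. The name getCached is illustrative (not from the app above), and it assumes an ioredis-style client with async get/set:

```javascript
// Cache-aside: return the cached value if present; otherwise compute it,
// store the JSON-serialized result with a TTL, and return it.
async function getCached(redis, key, ttlSeconds, compute) {
  const hit = await redis.get(key);
  if (hit !== null) return JSON.parse(hit);
  const value = await compute();
  await redis.set(key, JSON.stringify(value), 'EX', ttlSeconds);
  return value;
}

module.exports = { getCached };
```

A route could then call something like getCached(redis, 'users:all', 60, () => pool.query('SELECT * FROM users')) so PostgreSQL is only hit when the cache entry has expired.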
For development you want live reload and your source code mounted into the container; for production you want only the built image – no bind mounts – typically behind a reverse proxy.
Create a docker-compose.override.yml – Compose merges it into docker-compose.yml automatically whenever you run docker-compose up:
version: '3.8'
services:
app:
build:
context: .
target: builder # build only up to the builder stage
volumes:
- .:/app
- /app/node_modules
environment:
- NODE_ENV=development
command: npm run dev # assumes a "dev" script (e.g. nodemon server.js) in package.json
For production, create docker-compose.prod.yml without volumes and with a proper restart policy.
version: '3.8'
services:
app:
image: myregistry/node-app:latest
restart: always
ports:
- "3000:3000"
environment:
- NODE_ENV=production
depends_on:
- postgres
- redis
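Since this file is not auto-merged like the override file, tell Compose explicitly which files to combine when deploying:

```shell
# Merge the base file with the production overrides and start detached
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```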
Docker can monitor container health. Add a healthcheck to your Dockerfile or compose service.
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD node healthcheck.js || exit 1
In healthcheck.js you can test database connectivity.
Also, handle SIGTERM in Node.js to close connections gracefully:
process.on('SIGTERM', () => {
console.log('SIGTERM received, closing connections...');
server.close(() => {
console.log('Server closed');
process.exit(0);
});
});
Pin specific image versions instead of latest.
Set NODE_ENV=production – enables optimizations.
Scan images with docker scan or Snyk.
Use a restart policy such as restart: always.
Container exits immediately? Check logs with docker logs <container>. Likely an uncaught exception or missing dependency.
Can't reach the database? Use service names as hostnames (e.g., postgres) – Docker's internal DNS resolves them. Also note that depends_on only controls start order, not readiness: postgres may still be initializing when the app first tries to connect.
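If the app needs the database to be ready rather than merely started, newer Compose releases (implementing the Compose Specification) let depends_on wait on a service healthcheck. A sketch, to be merged into the file above:

```yaml
services:
  postgres:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  app:
    depends_on:
      postgres:
        condition: service_healthy
```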
Port already in use? Stop other containers using the same host port or change the mapped port.
Permission errors inside the container? Ensure files are owned by the correct user (use chown in the Dockerfile).
With Docker Compose you can scale the app service to multiple instances, but replicas can't all bind the same fixed host port – remove the host port mapping (or map a port range) and put a load balancer in front.
docker-compose up --scale app=3 -d
For production, use a reverse proxy like Nginx or Traefik, or an orchestrator like Kubernetes or Docker Swarm.
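As a sketch, an Nginx reverse proxy in front of the scaled service could forward to the Compose service name (note that Nginx resolves app once at startup, so truly dynamic scaling may call for Nginx's upstream/resolver features or Traefik's automatic discovery):

```nginx
# nginx.conf fragment – forward traffic to the "app" service
server {
    listen 80;
    location / {
        proxy_pass http://app:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```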
Docker Compose automatically loads variables from a .env file in the same directory. This is great for keeping secrets out of version control.
# .env
DB_PASSWORD=supersecret
REDIS_PASSWORD=anothersecret
Reference them in docker-compose.yml with ${DB_PASSWORD}. Never commit .env.
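For example, the postgres service can take its password from the variable instead of the hardcoded value:

```yaml
  postgres:
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
```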
Use BuildKit to cache dependencies efficiently. Add --mount=type=cache for npm.
# syntax=docker/dockerfile:1
FROM node:18-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm npm ci --omit=dev
FROM node:18-alpine
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
USER node
CMD ["node", "server.js"]
Containerizing your Node.js application with Docker and Docker Compose brings consistency, portability, and scalability to your development workflow. From local development with multiple services to production deployments, containers ensure that your application runs the same everywhere. Start with a simple Dockerfile, gradually introduce Compose for multi‑service setups, and adopt best practices for security and image size. The skills you gain will serve you well in the world of microservices, cloud-native architectures, and DevOps.
Happy containerizing — may your builds be fast and your images lean.