Claude-Dockerizing a NestJS API with Multi-Environment Support
I have a NestJS API backend Node application that I want to dockerize. I will use Docker Compose for ease of management. I will have at least 3 environments: dev, test and prod, which I want to run as Docker instances simultaneously on my EC2 machine running Amazon Linux. There will be separate env input files such as env.dev and env.test which define the environment values. 1. Provide me a single Dockerfile supporting multiple environments using the NODE_ENV environment variable. 2. Also provide me a single docker compose yml file supporting multiple environments by treating each environment as a separate project. 3. Also provide me a single helper script with minimal util functions, mainly for ease of managing build and deploy using primarily docker compose commands. This script can be sourced from any other custom scripts.
I'll help you create a Docker setup for your NestJS application that supports multiple environments.
Multi-Environment Dockerfile
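A minimal sketch of what such a Dockerfile might look like, assuming npm, a standard NestJS npm run build that emits dist/, and NODE_ENV supplied as a build argument (stage names, base image and paths here are illustrative, not the exact artifact):

# Sketch only: multi-stage build keyed off NODE_ENV
FROM node:20-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-slim AS runner
ARG NODE_ENV=dev
ENV NODE_ENV=${NODE_ENV}
WORKDIR /app
COPY --from=builder /app/package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/main"]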
Multi-Environment Docker Compose
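A rough sketch of the compose file, assuming per-environment env files named env.<NODE_ENV>, a published port controlled by API_PORT, and (as discussed further below) a version field and an x-app-common extension block; the service and image names are placeholders:

# Sketch only: one compose file reused per environment via NODE_ENV
version: "3.8"

x-app-common: &app-common
  build:
    context: .
    args:
      NODE_ENV: ${NODE_ENV:-dev}
  env_file:
    - env.${NODE_ENV:-dev}
  restart: unless-stopped

services:
  api:
    <<: *app-common
    image: nest-api:${NODE_ENV:-dev}
    container_name: nest-api-${NODE_ENV:-dev}
    ports:
      - "${API_PORT:-3000}:3000"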
Docker Helper Script
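A sketch of the helper script, with functions and subcommands matching the usage shown below (build_app, start_app, ./docker-helper.sh build dev, and so on); the project naming and --env-file handling are assumptions:

#!/usr/bin/env bash
# Sketch only: thin wrappers around docker compose, keyed by environment name

build_app() {
  local env="${1:-dev}"
  NODE_ENV="$env" docker compose -p "nest-api-$env" --env-file "env.$env" build
}

start_app() {
  local env="${1:-dev}"
  NODE_ENV="$env" docker compose -p "nest-api-$env" --env-file "env.$env" up -d
}

stop_app() {
  local env="${1:-dev}"
  NODE_ENV="$env" docker compose -p "nest-api-$env" down
}

logs_app() {
  local env="${1:-dev}"
  NODE_ENV="$env" docker compose -p "nest-api-$env" logs -f
}

status_app() {
  docker compose ls    # one compose project per environment
}

# Allow direct use as a CLI as well: ./docker-helper.sh build dev
if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
  cmd="${1:-}"
  [ "$#" -gt 0 ] && shift
  case "$cmd" in
    build)  build_app "$@" ;;
    start)  start_app "$@" ;;
    stop)   stop_app "$@" ;;
    logs)   logs_app "$@" ;;
    status) status_app ;;
    *) echo "Usage: $0 {build|start|stop|logs|status} [env]" >&2; exit 1 ;;
  esac
fi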
Here's how the components fit together. To use this setup:
1. Save the files in your project root: Dockerfile, docker-compose.yml and docker-helper.sh
2. Create the environment files: env.dev, env.test and env.prod
3. Make the helper script executable:
chmod +x docker-helper.sh
4. Use the helper script:
# Build and start dev environment
./docker-helper.sh build dev
./docker-helper.sh start dev
# View logs
./docker-helper.sh logs dev
# Check status of all environments
./docker-helper.sh status
You can also source the helper script in other scripts:
source ./docker-helper.sh
build_app "dev"
start_app "dev"
Would you like me to explain any part in more detail or make any modifications to the configuration?
I want you to update the docker-compose.yml file: 1. Remove version as it is deprecated; 2. I don't need x-app-common; 3. Define a single service called mf-api -- the image, container, args, ports and environment will depend on dynamic environment values; 4. There are only a couple of build-stage env variables and more than 10 runtime-specific environment variables. For only a few of them I may provide default values; for the others I may just list them in the args and environment sections without specifying any defaults -- this assumes that docker compose would automatically pass the env values of the listed ones and pass them as args when invoking the docker instance. Please clarify.
I'll update the docker-compose.yml file to match your requirements.
Updated Docker Compose with Dynamic Environment
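A sketch of what this revision might look like, reflecting the points explained below; variable names other than those mentioned elsewhere in this conversation (NODE_ENV, API_PORT, API_VERSION, APP_BUILD_CONFIG, DATABASE_URL, REDIS_URL, AWS_REGION) are placeholders:

services:
  mf-api:
    image: mf-api:${NODE_ENV:-dev}
    container_name: mf-api-${NODE_ENV:-dev}
    build:
      context: .
      args:
        - NODE_ENV=${NODE_ENV:-dev}
        - API_VERSION            # build-time value, no default
        - APP_BUILD_CONFIG       # build-time value, no default
    ports:
      - "${API_PORT:-3000}:3000"
    env_file:
      - env.${NODE_ENV:-dev}
    environment:
      NODE_ENV: ${NODE_ENV:-dev}      # with default
      API_PORT: ${API_PORT:-3000}     # with default
      DATABASE_URL: ${DATABASE_URL}   # no default, passed through from the shell or .env
      REDIS_URL: ${REDIS_URL}
      AWS_REGION: ${AWS_REGION}
    restart: unless-stopped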
Let me explain the key changes and how environment variables work in this setup:
Removed the version field, as it's no longer needed in recent Docker Compose versions.
Defined a single mf-api service; its image, container name, ports and environment are all driven by dynamic environment values.
Used ${NODE_ENV} for environment-specific naming of the image, container and project.
Variables with defaults use the VARIABLE: ${VARIABLE:-default_value} syntax.
Variables without defaults are simply listed (- VARIABLE), so Docker Compose passes whatever value is present in the shell environment or the env_file through to the container's environment.
Variables listed without defaults MUST be set in the shell (or in an env file) before running docker compose; otherwise they are passed as empty values.
To use this setup:
# Example 1: Running with minimal configuration
NODE_ENV=dev docker compose up
# Example 2: Running with specific variables
export DATABASE_URL=postgresql://user:pass@host:5432/db
export REDIS_URL=redis://localhost:6379
NODE_ENV=prod docker compose up
# Example 3: Running with a different port
NODE_ENV=test API_PORT=3001 docker compose up
You can also create a .env file in your project root to provide default values:
DATABASE_URL=postgresql://user:pass@host:5432/db
REDIS_URL=redis://localhost:6379
AWS_REGION=us-east-1
# ... other variables
Would you like me to adjust anything else in this setup?
Within build: args, for example, we have API_VERSION without any default value. Will docker compose pick up $API_VERSION from the shell environment and pass it, or is there a specific syntax to pass these values when using docker compose build?
Let me clarify how build arguments work in Docker Compose, particularly for arguments without default values.
There are two ways to handle build arguments in Docker Compose:
Explicit reference syntax:
build:
  args:
    API_VERSION: ${API_VERSION}
Simple list syntax:
build:
  args:
    - API_VERSION
The key difference is:
The simple list syntax (- API_VERSION) does NOT automatically pick up values from the shell environment during build.
The explicit reference syntax (API_VERSION: ${API_VERSION}) WILL pick up values from the shell environment.
Let me update the docker-compose.yml to use the correct syntax:
Docker Compose with Correct Build Args Syntax
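The build section of the sketch, updated to the explicit reference syntax (APP_BUILD_CONFIG comes from the build example below; everything else is as assumed earlier):

services:
  mf-api:
    build:
      context: .
      args:
        NODE_ENV: ${NODE_ENV:-dev}
        API_VERSION: ${API_VERSION}           # resolved from the shell or .env at build time
        APP_BUILD_CONFIG: ${APP_BUILD_CONFIG}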
To build using these build arguments, you would:
# Method 1: Export variables before building
export API_VERSION=1.2.3
export APP_BUILD_CONFIG=prod
docker compose build
# Method 2: Specify variables in command line
docker compose build --build-arg API_VERSION=1.2.3 --build-arg APP_BUILD_CONFIG=prod
# Method 3: Use .env file with the build arguments defined
# .env file:
# API_VERSION=1.2.3
# APP_BUILD_CONFIG=prod
docker compose build
I've also updated the environment variables section to use the same reference syntax for consistency. This makes it clearer what variables are being passed and allows for better error handling if required variables are missing.
Would you like me to make any further changes to the build configuration?
Okay, however the treatment inside the environment: section is different; if I have REDIS_URL without specifying a value OR REDIS_URL: ${REDIS_URL}, don't both behave the same?
Yes, you're absolutely correct! When it comes to runtime environment variables (under environment:), both syntaxes work the same way:
environment:
  - REDIS_URL               # Simple list syntax
is functionally equivalent to:
environment:
  REDIS_URL: ${REDIS_URL}   # Explicit reference syntax
Both will pass the value of REDIS_URL from the environment where Compose runs (shell or .env file) into the container, and leave it empty if it isn't set anywhere.
Let me update the docker-compose.yml to use the simpler list syntax for runtime environment variables:
Docker Compose with Simplified Environment Variables
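A sketch of the environment section in this simplified style. One caveat: YAML does not allow mixing mapping entries and list entries under a single environment: key, so the sketch uses the list form throughout and writes the defaulted variables as VAR=${VAR:-default}:

services:
  mf-api:
    environment:
      - NODE_ENV=${NODE_ENV:-dev}     # with default
      - API_PORT=${API_PORT:-3000}    # with default
      - DATABASE_URL                  # no default: value comes from the shell or .env
      - REDIS_URL
      - AWS_REGION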
The key differences in this updated version:
Build arguments keep the ARG: ${ARG} syntax because they need an explicit reference.
Runtime variables with defaults keep the VAR: ${VAR:-default} syntax.
Runtime variables without defaults use the simple - VAR list syntax.
This makes the configuration cleaner and more maintainable while achieving the same functionality. Would you like me to make any other adjustments to the configuration?
I need a minimal helper script to manage the docker compose build, up and down operations. Unless we use env-specific projects (with the -p option), docker compose up terminates running instances belonging to other environments. So I need a shell function like api_manage up/down/destroy/status/logs which will internally make use of NODE_ENV and the corresponding project for the docker compose operations.
Docker Compose Helper Script
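A minimal sketch of such an api_manage function, assuming the compose project name is derived from NODE_ENV and env.<NODE_ENV> is used for interpolation:

#!/usr/bin/env bash
# Sketch only: each environment becomes its own compose project (-p), so
# up/down in one environment never touches containers in another.

api_manage() {
  local action="${1:?usage: NODE_ENV=<env> api_manage {build|up|down|destroy|status|logs}}"
  local env="${NODE_ENV:?NODE_ENV must be set (dev|test|prod)}"
  local project="mf-api-${env}"
  local compose=(docker compose -p "$project" --env-file "env.${env}")

  case "$action" in
    build)   "${compose[@]}" build ;;
    up)      "${compose[@]}" up -d ;;
    down)    "${compose[@]}" down ;;                   # stop and remove this env's containers
    destroy) "${compose[@]}" down --rmi local -v ;;    # also remove images and volumes
    status)  "${compose[@]}" ps ;;
    logs)    "${compose[@]}" logs -f ;;
    *) echo "Unknown action: $action" >&2; return 1 ;;
  esac
}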
To use this script:
Save it as api-helper.sh and make it executable:
chmod +x api-helper.sh
Source it from your shell or from other scripts:
source api-helper.sh
# Start dev environment
NODE_ENV=dev api_manage up
# Start test environment (runs alongside dev)
NODE_ENV=test api_manage up
# Check status of prod environment
NODE_ENV=prod api_manage status
# View logs for test environment
NODE_ENV=test api_manage logs
# Bring down dev environment only
NODE_ENV=dev api_manage down
# Completely remove test environment
NODE_ENV=test api_manage destroy
Key features:
Uses environment-specific project names (the -p option) to isolate environments, so bringing one environment up or down never touches the others.
Would you like me to make any further adjustments?
Error response from daemon: Invalid container name (mfapi:local), only [a-zA-Z0-9][a-zA-Z0-9.-] are allowed
Ah, this error occurs because Docker has strict naming conventions for containers, and the ":" used in image tags is not allowed in a container name. Let me update the docker-compose.yml and helper script so the generated names use only allowed characters (replacing underscores with hyphens as well):
Docker Compose with Valid Container Names
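A sketch of the naming-related lines, assuming the image keeps a tag-style name (where ":" is allowed) while the container name contains no colon:

services:
  mf-api:
    image: mf-api:${NODE_ENV:-dev}            # ":" is fine in an image tag
    container_name: mf-api.${NODE_ENV:-dev}   # but not in a container name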
Updated Docker Helper Script
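And a sketch of the corresponding name handling in the helper script, using the tr '_' '-' substitution mentioned below (variable names are illustrative):

env="${NODE_ENV:-dev}"
safe_env="$(echo "$env" | tr '_' '-')"          # e.g. "feature_x" -> "feature-x"
project="mf-api-${safe_env}"                    # compose project name: letters, digits, "-" only
container="mf-api.${safe_env}"                  # container names may also contain "."
docker compose -p "$project" --env-file "env.${env}" up -d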
Key changes made:
Container names are now built from allowed characters only (e.g. mf-api.${NODE_ENV} instead of an image-style name containing a colon).
The helper script sanitizes the derived names with tr '_' '-' so underscores become hyphens.
Now you can use it the same way:
NODE_ENV=dev api_manage up
NODE_ENV=test api_manage up
NODE_ENV=prod api_manage status
The script will automatically convert any invalid characters to valid ones, ensuring Docker accepts the names. Let me know if you'd like any adjustments to this solution!
I need to choose a proper Docker base image for the build stage; I am using node:22-slim as per the latest recommendations, but got this Prisma error: #10 0.696 Please manually install OpenSSL via apt-get update -y && apt-get install -y openssl and try installing Prisma again. If you're running Prisma on Docker, add this command to your Dockerfile, or switch to an image that already has OpenSSL installed. I want a fairly recent base image for the builder; please suggest one.
For a Node.js application using Prisma, you'll want a base image that includes the necessary system dependencies. Here are some good options:
1. node:20-bookworm-slim (recommended):
FROM node:20-bookworm-slim AS builder
# Install OpenSSL and other required dependencies
RUN apt-get update -y && \
    apt-get install -y openssl pkg-config libssl-dev && \
    rm -rf /var/lib/apt/lists/*
2. node:20 (includes more dependencies but is larger):
FROM node:20 AS builder
# No need to install additional dependencies
3. node:20-slim with minimal dependencies:
FROM node:20-slim AS builder
# Install only OpenSSL
RUN apt-get update -y && \
    apt-get install -y openssl && \
    rm -rf /var/lib/apt/lists/*
I recommend going with option 1 (node:20-bookworm-slim) because:
It stays slim while being based on a recent Debian release (Bookworm).
Installing pkg-config and libssl-dev alongside openssl helps avoid other potential dependency issues.
Let me update the full Dockerfile with this recommendation:
Updated Dockerfile with Prisma Support
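A sketch of the builder stage with these changes, assuming the Prisma schema lives in prisma/ and the client is generated during the build:

FROM node:20-bookworm-slim AS builder
# OpenSSL (plus headers) so Prisma client generation works
RUN apt-get update -y && \
    apt-get install -y openssl pkg-config libssl-dev && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY package*.json ./
COPY prisma ./prisma
RUN npm ci
RUN npx prisma generate
COPY . .
RUN npm run build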
Key changes:
The builder stage now uses node:20-bookworm-slim and installs OpenSSL (plus pkg-config and libssl-dev) before generating the Prisma client.
Would you like me to make any further adjustments?
Error: Cannot find module '/app/node dist/main'
That error occurs because the CMD instruction passes "node dist/main" as a single string, so Node treats the whole thing, space included, as one module path. Let me fix the Dockerfile:
Fixed Dockerfile with Correct CMD
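For context, the end of a runtime stage with the corrected CMD might look like this (the copied paths assume the builder stage sketched earlier):

FROM node:20-bookworm-slim AS runner
RUN apt-get update -y && apt-get install -y openssl && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/main"]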
The key fix was in the last line:
Incorrect: CMD ["node", "node dist/main"] or CMD ["node dist/main"]
Correct: CMD ["node", "dist/main"]
The CMD instruction needs to be an array of strings where the first element is the executable (node) and each following element is a separate argument (dist/main); spaces inside a single string are not split into separate arguments.
You should now be able to run the container without the module not found error. Would you like me to add any additional changes or explain more about Dockerfile CMD syntax?