Containerisation
Containerisation packages your application and all its dependencies — runtime, libraries, config — into a single portable image. That image runs identically on your laptop, in CI, in staging, and in production. There is no "works on my machine" problem because the machine is baked into the image.
The most widely used containerisation tool is Docker. Container images are described by a
Dockerfile, and multi-container setups are managed with Docker Compose.
Why Use Containers
- Consistency across environments — the same image runs in development, CI, and production, so environment-specific bugs disappear.
- Easy rollback — if a deployment breaks, rolling back means deploying the previous image tag. No migrations to reverse, no config to untangle.
- Painless system upgrades — need to upgrade PHP or Nginx? Change the base image, test locally, then ship. The upgrade is a code change, not a server operation.
- Isolation — each container has its own filesystem and process space. Multiple services can run on the same host without conflicting.
- Scalability — containers start fast and are cheap to run, which makes horizontal scaling straightforward.
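As a concrete illustration of the rollback point, reverting a broken deployment can be as simple as re-running the previous image tag. A sketch with illustrative container, registry, and tag names:

```shell
# Stop and remove the broken container
docker stop myapp && docker rm myapp
# Start the previous known-good image tag
docker run -d --name myapp registry.example.com/myapp:abc1233
```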
Writing a Dockerfile
A Dockerfile is a sequence of instructions that builds your image layer by layer:
FROM php:8.3-fpm-alpine
WORKDIR /var/www/html
# Install system dependencies
RUN apk add --no-cache \
    git \
    curl \
    libpng-dev \
    oniguruma-dev \
    libxml2-dev \
    zip \
    unzip
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
# Install Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Copy application files
COPY . .
# Install application dependencies
RUN composer install --no-dev --optimize-autoloader
# Set correct permissions
RUN chown -R www-data:www-data /var/www/html/storage /var/www/html/bootstrap/cache
EXPOSE 9000
CMD ["php-fpm"]
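Building and running this image locally might look like the following (the image and container names are illustrative):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t myapp .
# Run it in the background; php-fpm listens on the exposed port 9000
docker run -d --name myapp -p 9000:9000 myapp
```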
Key principles for a good Dockerfile:
- Start with a small base image — Alpine-based images (-alpine) are much smaller than the default Debian ones.
- Order layers by change frequency — system dependencies first, then Composer files, then application code. Docker caches layers, so putting frequently-changing files last avoids invalidating the cache on every build.
- Run as a non-root user — do not run your application as root inside the container.
- Keep secrets out — never copy .env files containing production secrets into the image. Inject them at runtime via environment variables or a secrets manager.
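The non-root principle takes only two extra instructions at the end of the Dockerfile. A minimal sketch assuming an Alpine base image (the user and group names are illustrative):

```dockerfile
# Create an unprivileged user and switch to it for all subsequent instructions
RUN addgroup -g 1000 app && adduser -u 1000 -G app -D app
USER app
```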
Docker Compose for Local Development
For local development, Docker Compose lets you define and run a multi-container environment in a single file:
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/var/www/html
    depends_on:
      - db
      - redis
  nginx:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - .:/var/www/html
      - ./docker/nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - app
  db:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: app
      MYSQL_USER: app
      MYSQL_PASSWORD: secret
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - db_data:/var/lib/mysql
  redis:
    image: redis:alpine

volumes:
  db_data:
Run docker compose up to start all services. The volume mount on the app service means code changes are reflected immediately without rebuilding the image — useful for development.
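Day-to-day work with this setup goes through a handful of compose commands. A typical session might look like this (the service names match the compose file above; the migrate command assumes a Laravel-style application):

```shell
# Start everything in the background
docker compose up -d
# Tail logs from the app service
docker compose logs -f app
# Run a one-off command inside the app container
docker compose exec app php artisan migrate
# Stop and remove the containers (named volumes like db_data are kept)
docker compose down
```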
Tagging Images
Tag images with a meaningful version so you can deploy and roll back specific builds:
# Build and tag with the git commit SHA
docker build -t myapp:$(git rev-parse --short HEAD) .
# Tag the same image as latest
docker tag myapp:$(git rev-parse --short HEAD) myapp:latest
Using git SHAs as tags ties every deployed image back to a specific commit in your repository. This makes debugging production issues considerably easier.
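To make a tagged image deployable from other machines, push it to a registry under the same SHA tag (the registry host below is illustrative):

```shell
# Re-tag for the registry and push
docker tag myapp:$(git rev-parse --short HEAD) registry.example.com/myapp:$(git rev-parse --short HEAD)
docker push registry.example.com/myapp:$(git rev-parse --short HEAD)
```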
Running in Production
In production, avoid docker compose — it is a development tool. Use an orchestrator:
- Docker Swarm — built into Docker, simpler to operate, suitable for smaller deployments.
- Kubernetes — the industry standard for large-scale container orchestration. Steeper learning curve, significantly more capability.
- Managed container services — AWS ECS, Google Cloud Run, and similar platforms handle orchestration for you. Often the best starting point.
For small to medium applications, a managed service like Cloud Run or ECS Fargate removes most of the operational overhead of running containers in production.
Resource Limits
Always set resource limits on your containers. Without them, a single runaway container can starve other services on the same host:
services:
  app:
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
        reservations:
          cpus: "0.25"
          memory: 128M
Set limits conservatively at first and adjust based on observed usage in production.
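When running a container directly rather than through compose, equivalent limits can be passed as flags to docker run (the image name is illustrative):

```shell
# Cap the container at one CPU and 512 MiB of memory
docker run -d --cpus 1.0 --memory 512m --memory-reservation 128m myapp:latest
```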
Security Considerations
- Use official or verified base images. Unverified images from public registries may contain malware.
- Scan images for vulnerabilities before deploying. Tools like docker scout or Trivy can automate this in CI.
- Do not run containers as root. Create a dedicated user in your Dockerfile.
- Keep base images up to date. Security patches in the base OS layer require rebuilding and redeploying your image.
- Treat container images as immutable. Do not SSH into a running container and make changes — rebuild and redeploy instead.
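Image scanning is straightforward to wire into CI. For example, Trivy can fail the build when serious vulnerabilities are found (the image name is illustrative):

```shell
# Exit non-zero if HIGH or CRITICAL vulnerabilities are present
trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:latest
```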
Related
- CI/CD — automated pipelines that build, test, and push container images on every commit
- Infrastructure as Code — managing the infrastructure that runs your containers as version-controlled config
- Blue/Green Deployment — a deployment strategy that pairs well with containerised applications