
Laravel Development With Docker

In this tutorial we're going to learn how to set up Docker containers for developing a Laravel application.

If you've used Laravel before you'll most likely be familiar with Homestead, a Laravel package that creates a local development environment using Vagrant.


This is fine for a single web application, but when your application starts to have more moving parts, Docker can be a better choice for you.

What Is A Docker Container?

A Docker container is a runtime instance of a Docker image.

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.


What Is A Docker Image?

Docker images are snapshots of Docker containers. Images are created by running the docker build command. Docker images can be stored locally to reuse on other applications, or they can be pushed to a remote registry and then can be pulled onto remote machines.

Docker images are essentially your entire system and can therefore be very large. Docker splits an image into layers to reduce its footprint, allowing Docker to cache sections of the system locally.

Why Use Docker?

  1. Docker in development allows you to keep a consistent environment across the entire team.
  2. Sharing Docker images means the user doesn't need to understand how the application is set up; they can just run docker-compose up and this will build and run the containers defined by the maintainer.
  3. Docker allows you to keep your local machine clean. If you are switching between different projects you don't have to worry about having different versions of PHP installed. You can just use a different PHP container to run your code.
  4. Works with DevOps processes. Docker can be installed in production to guarantee the same environment is used for development and production. If DevOps doesn't want to use Docker in production they can use your Dockerfiles to understand how the application should work.

What Do We Need To Install?

The infrastructure we need to run a Laravel application is a web server, PHP, a database and a queue worker.

In this tutorial we're going to use Nginx, PHP, a MySQL database, and Redis for both caching and the queue worker.

Building Our Containers

The containers we're going to create to run our application are

  • Nginx to host our website
  • PHP to run the Laravel code
  • MySQL database
  • Redis for cache
  • A cron job scheduler to run Laravel scheduled tasks
  • Mailhog to view emails sent from your application locally

To manage all of these containers as one application we're going to use a docker-compose file.

Nginx Container

This is used as the web server for our application; it will need our vhost set up correctly and a copy of our local website inside the container.

To create an Nginx container we can use the default nginx image from Docker Hub. For the most streamlined image we can use the Alpine version.

nginx:
  image: nginx:alpine

We will need to mount this project into the container, along with a default vhost for Nginx.

volumes:
  - "./docker/nginx/sites/default.conf:/etc/nginx/conf.d/default.conf"
  - "./:/var/www/html"
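
The vhost file itself isn't shown in this tutorial, but a minimal Laravel default.conf might look something like the sketch below. The fastcgi_pass target assumes the PHP service name laravel used later in the docker-compose file; adjust it to whatever you name yours.

```nginx
# Sketch of docker/nginx/sites/default.conf (assumed layout)
server {
    listen 80;
    root /var/www/html/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # "laravel" is the PHP-FPM service name on the Docker network
        fastcgi_pass laravel:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }
}
```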

We will need to expose the Nginx ports 80 and 443 on the container; to do this we define the ports on the container.

ports:
  - "8080:80"
  - "443:443"

Then we are going to make sure the other containers can interact with this container by adding it to a network.

networks:
  - code-network

Laravel PHP Container

The PHP container runs the PHP code of our Laravel application. We're going to name this service laravel and build it from the php:7.2-fpm-alpine Docker image.

But we're going to extend the build process of this image and install our own extensions. We do this by creating our own Dockerfile to run during the build process.

build:
  context: ./docker/laravel

To let the PHP container see our code we need to make sure it has access to the application root by adding a volume to the container.

volumes:
  - ./:/var/www/html

Then, so the rest of the containers can interact with this PHP container, we add it to the network.

networks:
  - code-network

The final docker-compose config will look like this.

laravel:
  container_name: laravel
  build:
    context: ./docker/laravel
  restart: always
  volumes:
    - ./:/var/www/html
  networks:
    - code-network

Laravel Dockerfile

When you want to expand on a default Docker image you create your own Dockerfile and give the commands you want to run inside the container.

First you define which image you want to extend with FROM php:7.2-fpm-alpine; you can find these images on Docker Hub.

Then you can install all the PHP extensions you will need to run your application.

Inside this container we're also installing composer so that we can install third party packages into our application.

Below is the complete Dockerfile we can use for this container.

FROM php:7.2-fpm-alpine

# Install the PHP extensions we need.
RUN docker-php-ext-install pdo_mysql exif bcmath

# Memory Limit
RUN echo "memory_limit=2048M" > $PHP_INI_DIR/conf.d/memory-limit.ini
RUN echo "max_execution_time=900" >> $PHP_INI_DIR/conf.d/memory-limit.ini
RUN echo "post_max_size=20M" >> $PHP_INI_DIR/conf.d/memory-limit.ini
RUN echo "upload_max_filesize=20M" >> $PHP_INI_DIR/conf.d/memory-limit.ini

# Time Zone
#RUN echo "date.timezone=${PHP_TIMEZONE:-UTC}" > $PHP_INI_DIR/conf.d/date_timezone.ini

# Display errors in stderr
RUN echo "display_errors=stderr" > $PHP_INI_DIR/conf.d/display-errors.ini

# Disable PathInfo
RUN echo "cgi.fix_pathinfo=0" > $PHP_INI_DIR/conf.d/path-info.ini

# Disable expose PHP
RUN echo "expose_php=0" > $PHP_INI_DIR/conf.d/expose-php.ini

# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

# Install supervisor and add the queue worker config
RUN apk add --no-cache supervisor
COPY ./supervisor/conf.d/redis-worker.conf /etc/supervisor/conf.d/
WORKDIR /var/www/html
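
As an aside, instead of the curl installer you'll often see Composer copied in from its official Docker image. This is an alternative to the RUN curl line above, not what the Dockerfile shown here does:

```dockerfile
# Alternative: copy the composer binary from the official composer image
FROM php:7.2-fpm-alpine
COPY --from=composer:1 /usr/bin/composer /usr/local/bin/composer
```

Pinning a major version tag (composer:1 here) keeps builds reproducible; the multi-stage COPY also avoids piping a remote installer through php.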

Supervisor Conf

With supervisor installed we can use it to run the Redis queue worker with the following config file.

[program:redis-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/artisan queue:work redis --sleep=10 --daemon --quiet --queue="default"
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=www-data
numprocs=1
stdout_logfile=/var/www/html/storage/logs/redis-worker.log
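
Note that the base image's default command starts php-fpm, not supervisord, so the worker config above won't run by itself. One option, a sketch under the assumption you reuse the same build context, is a second compose service that overrides the command:

```yaml
# Hypothetical extra service that reuses the PHP image to run the queue worker
worker:
  build:
    context: ./docker/laravel
  # -n keeps supervisord in the foreground so the container stays alive;
  # depending on the package's default include path you may also need to
  # point it at your config with -c
  command: supervisord -n
  volumes:
    - ./:/var/www/html
  networks:
    - code-network
```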

MySQL Database Container

For the database of our application we're going to use MySQL. We can use the default mysql:5.7 image and pass in variables from our .env file.

Docker Compose has access to the environment variables stored in your .env file; you can use them by placing a $ symbol in front of the name, like $DB_DATABASE, and we can use this to set up the environment for the MySQL container.

environment:
  MYSQL_DATABASE: $DB_DATABASE
  MYSQL_USER: $DB_USERNAME
  MYSQL_PASSWORD: $DB_PASSWORD
  MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD

MySQL stores its table structure and data as files inside the container; this is just how MySQL works. The problem is that if you need to delete your containers for whatever reason, you will lose the data inside the container and will have to rebuild the database each time. To get around this you can assign a volume to your container so the data is stored inside your project, and the container will use these files to store the data.

MySQL stores its files inside the /var/lib/mysql folder, so we can create a new volume like this.

volumes:
  - ./storage/data/mysql:/var/lib/mysql
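
Alternatively, a Docker named volume keeps the database files out of your project folder entirely and lets Docker manage the storage. A sketch (the volume name mysql-data is arbitrary):

```yaml
services:
  mysql:
    volumes:
      # named volume instead of a bind mount into the project
      - mysql-data:/var/lib/mysql

# top-level declaration so Docker creates and manages the volume
volumes:
  mysql-data:
```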

To allow other containers to access this container, you need to make it part of the network.

networks:
  - code-network

The full docker-compose config for MySQL is below.

mysql:
  image: mysql:5.7
  container_name: mysql
  ports:
    - "3306:3306"
  restart: always
  environment:
    MYSQL_DATABASE: $DB_DATABASE
    MYSQL_USER: $DB_USERNAME
    MYSQL_PASSWORD: $DB_PASSWORD
    MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
  volumes:
    - ./storage/data/mysql:/var/lib/mysql
  networks:
    - code-network

You can now access this container from a GUI such as Sequel Pro using the 127.0.0.1 IP address and port 3306. In your .env you can now set the database host to the name of the container, and Docker will be able to reach MySQL from within the Docker network.

DB_HOST=mysql
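
The rest of the database settings in .env line up with the environment variables passed to the container. A sketch with placeholder values; use whatever credentials you put in $DB_DATABASE, $DB_USERNAME and $DB_PASSWORD:

```ini
DB_CONNECTION=mysql
# the MySQL container name, resolvable on the Docker network
DB_HOST=mysql
DB_PORT=3306
# placeholder values - match the variables passed to the mysql container
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=secret
```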

Redis Cache Container

For the cache in our application we're going to use Redis. Using this in Laravel is very easy: you simply change the cache driver to redis.

CACHE_DRIVER=redis

To get a Redis container up and running you can use the following docker-compose config.

redis:
  image: redis:alpine
  container_name: redis
  ports:
    - "6379:6379"
  networks:
    - code-network

You will now be able to connect to this container to use Redis for caching. In your .env file you may already have something set up like:

REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379

If so, when you try to use your application you will get a connection refused error.

Warning: Redis::connect(): connect() failed: Connection refused

This is because the PHP container cannot talk to the Redis container on localhost, as there is no Redis on its localhost. You'll be able to connect to 127.0.0.1:6379 from your computer but not from another container; this is down to Docker's networking. To allow other containers on the Docker network to access Redis you need to change your REDIS_HOST to just redis.

REDIS_HOST=redis
REDIS_PASSWORD=null
REDIS_PORT=6379
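
If you're also using Redis for the queue worker, as in the supervisor config earlier, the queue driver needs the same treatment. In Laravel releases of this era the variable is QUEUE_DRIVER (from Laravel 5.7 it becomes QUEUE_CONNECTION):

```ini
QUEUE_DRIVER=redis
```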

Docker will resolve the redis container name to an internal IP so other containers can connect to it.

Mailhog

Mailhog is an application that intercepts outgoing SMTP requests, stops them from going out, and displays the emails in an interface you can browse.

This tool is very useful in local environments when testing emails: they won't accidentally be sent to your users, and you can see exactly what would be sent to them in production.

Looking on Docker Hub there is a mailhog/mailhog image we can use to install the tool. Then we can add the following to our docker-compose file.

mailhog:
  image: mailhog/mailhog
  ports:
    - "1025:1025"
    - "8025:8025"
  networks:
    - code-network
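
To point Laravel at Mailhog, the mail settings in .env use the container name as the host and the SMTP port 1025. A sketch for this era of Laravel, where the variable is MAIL_DRIVER (later releases use MAIL_MAILER):

```ini
MAIL_DRIVER=smtp
# the mailhog container name on the Docker network
MAIL_HOST=mailhog
MAIL_PORT=1025
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null
```

The web interface for viewing captured emails is then available on port 8025, i.e. 127.0.0.1:8025 with the port mapping above.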

Working With Docker Containers

These are all the containers we need to get a Laravel application up and running. We can now look at how we perform normal development tasks within the containers, such as running composer, npm and artisan commands, and running our PHPUnit tests.

To build the containers we set up in the above code we need to run the command.

docker-compose up -d

This will start the containers outlined above and we should be able to navigate to 127.0.0.1:8080 to view our application.

Running Composer Install

In order to run composer we need to run a command inside the PHP container. We do this with the docker-compose exec command.

First we need to start a shell on the PHP container, which in the docker-compose file is the laravel service (the Alpine image ships with sh rather than bash).

docker-compose exec laravel sh

This will log us into the container, where we can run the composer command.

composer install

Running Artisan Commands

Just like the composer command, we can run artisan commands inside the container by first logging into the PHP container.

docker-compose exec laravel sh

Then you can run artisan commands as you normally would.

php artisan cache:clear
php artisan route:cache
php artisan migrate

Running PHPUnit Tests

To run the PHPUnit tests you will need to run the phpunit command inside the PHP container.

docker-compose exec laravel sh

Then run

vendor/bin/phpunit

XDebug Unit Tests

If you're like me, you like to be able to step through your unit tests and see exactly what's going on. For this we need to install Xdebug in our PHP container; then we'll be able to use our IDE to step through the code.

The easiest way to install Xdebug is to use the pecl command: pecl install xdebug-2.6.0.

You can use this inside your PHP container's Dockerfile by running the above command and then enabling the xdebug extension.

Add the following to your PHP container's Dockerfile.

RUN pecl install xdebug-2.6.0 \
    && docker-php-ext-enable xdebug

When you rebuild the container and log in to it, you'll be able to run php -v and it will say that Xdebug is installed.
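
Xdebug also needs to know how to reach your IDE on the host machine. A sketch of the extra ini settings for Xdebug 2.x; the file path is my own suggestion, and host.docker.internal resolves to the host on Docker for Mac/Windows (on Linux you'd use your host's IP):

```ini
; sketch of an xdebug ini file copied into $PHP_INI_DIR/conf.d/
xdebug.remote_enable=1
xdebug.remote_host=host.docker.internal
xdebug.remote_port=9000
xdebug.remote_autostart=1
```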

Complete docker-compose File

version: '3'

services:
  nginx:
    image: nginx:alpine
    volumes:
      - "./docker/nginx/sites/default.conf:/etc/nginx/conf.d/default.conf"
      - "./:/var/www/html"
    ports:
      - "8080:80"
      - "443:443"
    container_name: nginx
    restart: always
    environment:
      - NGINX_HOST=paulund.test
    networks:
      - code-network

  laravel:
    container_name: laravel
    build:
      context: ./docker/laravel
    restart: always
    volumes:
      - ./:/var/www/html
    networks:
      - code-network

  mysql:
    image: mysql:5.7
    container_name: mysql
    ports:
      - "3306:3306"
    restart: always
    environment:
      MYSQL_DATABASE: $DB_DATABASE
      MYSQL_USER: $DB_USERNAME
      MYSQL_PASSWORD: $DB_PASSWORD
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
    volumes:
      - ./storage/data/mysql:/var/lib/mysql
    networks:
      - code-network

  redis:
    image: redis:alpine
    container_name: redis
    ports:
      - "6379:6379"
    networks:
      - code-network

  mailhog:
    image: mailhog/mailhog
    ports:
      - "1025:1025"
      - "8025:8025"
    networks:
      - code-network

networks:
  code-network:
    driver: bridge

What's Next?

Now that we have our initial environment for Laravel we can continue to create new scripts that will automatically run common commands for us.

Other things we can do at a later date are:

  • Create a local script for shortcuts into the container.
  • Create a npm container to run npm run watch so that the assets are always rebuilt without us doing anything.
  • Add Jenkins in a container to manage build tasks.
  • Allow us to reuse this application with a local docker image.
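
The shortcut script idea from the first bullet could be as simple as a Makefile wrapping the commands used throughout this tutorial. A sketch; the target names are my own, and it assumes the laravel service name from the docker-compose file:

```makefile
# Hypothetical shortcuts for the containers defined in docker-compose.yml
up:
	docker-compose up -d

shell:
	docker-compose exec laravel sh

artisan:
	docker-compose exec laravel php artisan $(cmd)

test:
	docker-compose exec laravel vendor/bin/phpunit
```

With this in place you could run, for example, make artisan cmd=migrate instead of logging into the container first.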
