How to Run a Laravel Application Locally with Docker

A solution to a problem with too many projects on a developer’s computer.

In most web projects, developers keep a local copy of the application they are working on so they can test changes quickly without having to update the files on the server. Often, however, it turns out that different projects require different dependencies, or even different versions of the same libraries. By changing the environment for one project, we change the configuration for all the others, which makes it difficult to keep every project runnable and is especially burdensome when we switch between them often. In such a setup, we also have no way to give each application a local environment that reflects its production environment.

To the rescue come containerization tools such as Docker, which enclose each application's environment in its own container. Thanks to this, every application runs independently in its own environment.

Laravel & Docker

In this article, we’ll discuss how to create containers for Laravel applications. Each application will have a host assigned, and the configuration will be flexible enough to be easily adapted to various projects.

What We’ll Need

If you do not have Docker on your computer yet, it’s a good time to install it. We’ll need two tools: docker and docker-compose.

A description of how to install both tools on Ubuntu 18.04/16.04 can be found in the official Docker documentation.

docker-compose.yml & .env

We’ll begin our configuration by creating a new folder named laravel-docker with two files in it: docker-compose.yml and .env.

Our docker-compose.yml file will look like this:

version: '3'

services:
    database:
        image: mysql:5.7
        container_name: ${APP_NAME}_mysql
        restart: always
        environment:
            MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
            MYSQL_DATABASE: ${MYSQL_DATABASE}
            MYSQL_USER: ${MYSQL_USER}
            MYSQL_PASSWORD: ${MYSQL_PASSWORD}
        ports:
            - ${PORT_DATABASE}:3306
        volumes:
            - "./data/db/mysql:/var/lib/mysql"
            - "./etc/mysql:/etc/mysql/conf.d"

    mailcatcher:
        image: schickling/mailcatcher
        container_name: ${APP_NAME}_mailcatcher
        ports:
            - ${PORT_MAILCATCHER}:1080

    redis:
        container_name: ${APP_NAME}_redis
        image: redis
        ports:
            - "${PORT_REDIS}:6379"
        volumes:
            - "./data/redis:/data"
        restart: always

    phpmyadmin:
        image: phpmyadmin/phpmyadmin
        container_name: ${APP_NAME}_phpmyadmin
        ports:
            - ${PORT_PHPMYADMIN}:80
        restart: always
        links:
            - database:db
        depends_on:
            - database

    php:
        build:
            context: ./etc/php
            args:
                - INSTALL_NODE=${INSTALL_NODE}
                - INSTALL_GULP=${INSTALL_GULP}
                - INSTALL_BOWER=${INSTALL_BOWER}
                - INSTALL_MYSQL=${INSTALL_MYSQL}
                - INSTALL_POSTGRESQL=${INSTALL_POSTGRESQL}
                - INSTALL_GD=${INSTALL_GD}
                - INSTALL_XDEBUG=${INSTALL_XDEBUG}
                - ADD_ALIASES=${ADD_ALIASES}
        container_name: ${APP_NAME}_php
        # the entrypoint script is the shell script created later in ./etc/php (name assumed)
        entrypoint: sh /bin/entrypoint.sh php-fpm
        links:
            - database:mysqldb
        restart: always
        volumes:
            - "./etc/php/php.ini:/usr/local/etc/php/conf.d/php.ini"
            - ${APP_PATH}:/var/www/html
            - './etc/log/nginx:/var/log/nginx'
            - "./etc/php/entrypoint.sh:/bin/entrypoint.sh"

    web:
        build: ./etc/nginx
        container_name: ${APP_NAME}_nginx
        ports:
            - ${PORT_HTTP}:80
            - ${PORT_HTTPS}:443
        restart: always
        volumes:
            - "./etc/nginx/nginx.conf:/etc/nginx/nginx.conf"
            - "./etc/nginx/app.conf:/etc/nginx/sites-available/application.conf"
            - "./etc/nginx/app.conf:/etc/nginx/sites-enabled/application"
            - "./etc/ssl:/etc/ssl"
            - './etc/log/nginx:/var/log/nginx'
            - ${APP_PATH}:/var/www/html
        depends_on:
            - php
            - database

The configuration contains:

  • a MySQL database
  • Mailcatcher—a tool for storing emails sent from the app
  • Redis
  • phpMyAdmin
  • PHP
  • Nginx

The configuration contains a lot of variables of the form ${VARIABLE_NAME}. The values of these variables will be set in the .env file. Its contents will be as follows:

#!/usr/bin/env bash

# App settings (example values - adjust to your project)
APP_NAME=MyApplication
APP_PATH=./project

MYSQL_ROOT_PASSWORD=secret
MYSQL_DATABASE=laravel
MYSQL_USER=laravel
MYSQL_PASSWORD=secret

# PHP Image settings
INSTALL_NODE=true
INSTALL_GULP=true
INSTALL_BOWER=false
INSTALL_MYSQL=true
INSTALL_POSTGRESQL=false
INSTALL_GD=true
INSTALL_XDEBUG=true
ADD_ALIASES=true

# Port Mappings
PORT_HTTP=80
PORT_HTTPS=443
PORT_DATABASE=3306
PORT_MAILCATCHER=1080
PORT_REDIS=6379
PORT_PHPMYADMIN=8080

We recommend adding the .env file to .gitignore and committing an .env.example file to the repository instead, one that doesn’t contain any sensitive information such as passwords.
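Keeping .env.example in sync with .env by hand is easy to forget. A small sketch of how the example file can be generated automatically by blanking out values while keeping comments (the helper below is our own, not part of any tool mentioned in this article):

```python
def make_env_example(env_text: str) -> str:
    """Return a copy of a .env file's text with all values blanked out."""
    lines = []
    for line in env_text.splitlines():
        stripped = line.strip()
        # keep comments and blank lines as documentation
        if not stripped or stripped.startswith('#'):
            lines.append(line)
        elif '=' in stripped:
            key = stripped.split('=', 1)[0]
            lines.append(f'{key}=')
        else:
            lines.append(line)
    return '\n'.join(lines) + '\n'

example = make_env_example('APP_NAME=MyApplication\nMYSQL_PASSWORD=secret\n')
# example is now 'APP_NAME=\nMYSQL_PASSWORD=\n'
```

You would read .env, pass its text through this function, and write the result to .env.example before committing.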

As you can see, the configuration lets us enable or disable various options from the .env file, so if our project needs, for example, a PostgreSQL database instead of MySQL, we can easily switch using the INSTALL_MYSQL and INSTALL_POSTGRESQL options.

The .env file also lets you configure the ports on which the services will listen, so if, for example, database port 3300 is busy, we can change it to another free one.
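Before editing the port mappings, it can help to check which host ports are actually free. A minimal Python sketch (the port numbers are just examples, and the helper is ours, not part of Docker):

```python
import socket

def is_port_free(port: int, host: str = '') -> bool:
    """Return True if nothing is currently listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when something accepted the connection
        return s.connect_ex((host, port)) != 0

# e.g. pick the first free candidate for PORT_DATABASE
candidates = [3306, 3300, 3307]
free = [p for p in candidates if is_port_free(p)]
```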

Configuration in the ./etc/ Directory

The next step is to create the configuration of the individual containers in the ./etc/ subdirectory. To start, we need configurations for nginx and php.

Let’s create the ./etc/nginx directory and three files inside it: Dockerfile, nginx.conf, and app.conf.

Contents of the file ./etc/nginx/Dockerfile:


FROM debian:jessie

# mirror URLs restored here; adjust if you use a different Debian mirror
RUN printf "deb http://deb.debian.org/debian jessie main\ndeb-src http://deb.debian.org/debian jessie main\ndeb http://security.debian.org jessie/updates main\ndeb-src http://security.debian.org jessie/updates main" > /etc/apt/sources.list

RUN apt-get update && apt-get install -y \
    nginx

RUN rm /etc/nginx/sites-enabled/default

RUN echo "upstream php-upstream { server php:9000; }" > /etc/nginx/conf.d/upstream.conf

RUN usermod -u 1000 www-data

CMD ["nginx"]





Contents of the file ./etc/nginx/nginx.conf:


user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

http {
    server_tokens off;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 15;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log off;
    error_log off;
    gzip on;
    gzip_disable "msie6";
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-available/*;
    open_file_cache max=100;
    client_max_body_size 12M;
}

daemon off;


Contents of the file ./etc/nginx/app.conf:


server {
    server_name myapp.test;

    root /var/www/html/public;

    location / {
        try_files $uri /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php-upstream;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    error_log /var/log/nginx/laravel_error.log;
    access_log /var/log/nginx/laravel_access.log;
}

# server {

#     server_name myapp.test;


#     listen 443 ssl;

#     fastcgi_param HTTPS on;


#     ssl_certificate /etc/ssl/server.pem;

#     ssl_certificate_key /etc/ssl/server.key;

#     ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;


#     root /var/www/html/public;


#     location / {

#         try_files $uri /index.php?$query_string;

#     }


#     location ~ \.php$ {

#         fastcgi_split_path_info ^(.+\.php)(/.+)$;

#         fastcgi_pass php-upstream;

#         fastcgi_index index.php;

#         include fastcgi_params;

#         fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

#         fastcgi_param PATH_INFO $fastcgi_path_info;

#     }


#     error_log /var/log/nginx/laravel_error.log;

#     access_log /var/log/nginx/laravel_access.log;

# }

The last file contains a lot of commented lines: they’ll be useful if you want to install an SSL certificate in the application. We’ll talk about it later in the article.

An important configuration parameter is server_name. This is the address where we’ll serve our application locally (myapp.test).

Note: for local host names, use one of the reserved domains:

  • .test
  • .localhost
  • .invalid
  • .example

Other domains won’t work reliably in Google Chrome (for example, .dev is on the HSTS preload list and is forced to HTTPS).

Now let’s create the configuration for php in the ./etc/php directory. We’ll have three files here: Dockerfile, an entrypoint script (here named entrypoint.sh), and php.ini.

Contents of the file ./etc/php/entrypoint.sh:






#!/usr/bin/env bash

echo 'setting write access for www-data'
setfacl -dR -m u:www-data:rwX -m u:docker:rwX var
setfacl -R -m u:www-data:rwX -m u:docker:rwX var

docker-php-entrypoint "$@"


Contents of the file ./etc/php/php.ini:


;PHP config

display_errors = On

display_startup_errors = On

error_reporting = E_ALL

memory_limit = 1024M

upload_max_filesize = 12M

post_max_size = 24M

date.timezone = Europe/Warsaw
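A rule of thumb with these limits: post_max_size should be at least upload_max_filesize (here 24M vs. 12M), because the whole POST body, upload included, must fit within it. A small Python sketch for parsing PHP's size shorthand if you want to sanity-check a php.ini (the helper is our own, not part of PHP):

```python
def php_size_to_bytes(value: str) -> int:
    """Parse PHP ini shorthand like '12M', '1024K' or '1G' into bytes."""
    value = value.strip()
    units = {'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3}
    suffix = value[-1].upper()
    if suffix in units:
        return int(value[:-1]) * units[suffix]
    return int(value)

upload_max = php_size_to_bytes('12M')
post_max = php_size_to_bytes('24M')
# post_max_size must be able to hold the upload plus the rest of the request
assert post_max >= upload_max
```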


Contents of the file ./etc/php/Dockerfile:


FROM php:7.2-fpm

RUN apt-get update > /dev/null && apt-get install -y \
    git \
    unzip \
    libjpeg-dev \
    libxpm-dev \
    libwebp-dev \
    libfreetype6-dev \
    libjpeg62-turbo-dev \
    libmcrypt-dev \
    libpng-dev \
    zlib1g-dev \
    libicu-dev \
    jpegoptim \
    g++ \
    libxrender1 \
    libfontconfig \
    nano \
    > /dev/null

RUN docker-php-ext-install intl > /dev/null \
    && docker-php-ext-install zip > /dev/null \
    && docker-php-ext-install bcmath > /dev/null

RUN pecl install mcrypt-1.0.2 > /dev/null && \
    docker-php-ext-enable mcrypt

# Optional software installation
# (build arguments passed in from docker-compose.yml; all off by default)

ARG INSTALL_NODE=false
RUN if [ ${INSTALL_NODE} = true ]; then \
    # Install NodeJS using NVM
    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash > /dev/null && \
    export NVM_DIR="$HOME/.nvm" > /dev/null && \
    [ -s "$NVM_DIR/nvm.sh" ] > /dev/null && . "$NVM_DIR/nvm.sh" > /dev/null && \
    nvm install 11 && \
    nvm use node && \
    npm install -g node-sass && \
    npm rebuild node-sass \
;fi

ARG INSTALL_GULP=false
RUN if [ ${INSTALL_GULP} = true ]; then \
    # Install gulp globally
    npm install -g gulp > /dev/null \
;fi

ARG INSTALL_BOWER=false
RUN if [ ${INSTALL_BOWER} = true ]; then \
    # Install bower globally
    npm install -g bower > /dev/null \
;fi

# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer > /dev/null

ARG INSTALL_MYSQL=false
RUN if [ ${INSTALL_MYSQL} = true ]; then \
    # Install MySQL PDO
    docker-php-ext-install pdo pdo_mysql > /dev/null \
;fi

ARG INSTALL_POSTGRESQL=false
RUN if [ ${INSTALL_POSTGRESQL} = true ]; then \
    # Install PostgreSQL PDO
    docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql > /dev/null && \
    docker-php-ext-install pgsql pdo_pgsql > /dev/null \
;fi

ARG INSTALL_GD=false
RUN if [ ${INSTALL_GD} = true ]; then \
    # Install GD library
    docker-php-ext-configure gd \
        --with-freetype-dir=/usr/include/ \
        --with-jpeg-dir=/usr/include/ \
        --with-png-dir=/usr/include/ \
        --with-xpm-dir=/usr/include/ \
        --with-webp-dir=/usr/include/ > /dev/null && \
    docker-php-ext-install gd > /dev/null \
;fi

ARG INSTALL_XDEBUG=false
RUN if [ ${INSTALL_XDEBUG} = true ]; then \
    # Install the Xdebug extension
    pecl install xdebug > /dev/null && \
    docker-php-ext-enable xdebug > /dev/null \
;fi

ARG ADD_ALIASES=false
RUN if [ ${ADD_ALIASES} = true ]; then \
    # Add command-line aliases
    echo 'alias sf="php app/console"' >> ~/.bashrc && \
    echo 'alias sf3="php bin/console"' >> ~/.bashrc && \
    echo 'alias lv="php artisan"' >> ~/.bashrc \
;fi

WORKDIR /var/www/html

As you can see, the Dockerfile contains instructions for installing the required software as well as optional software, whose installation depends on the parameter values in the .env file we created earlier.

At the moment, our file structure looks like this:



.
├── docker-compose.yml
├── .env
└── etc
    ├── nginx
    │   ├── app.conf
    │   ├── Dockerfile
    │   └── nginx.conf
    └── php
        ├── Dockerfile
        ├── entrypoint.sh
        └── php.ini

Connecting the Project in Laravel

The final step of the configuration is to indicate where our project is located. We do this via a symbolic link named project (the name of the link corresponds to the APP_PATH value in .env).

ln -s ../my_great_project project

The above command creates a link to the my_great_project directory located in the parent directory. We can also use absolute paths, e.g:

ln -s /home/user/projects/my_great_project project

In the end, our configuration looks like this:

├── docker-compose.yml
├── etc
│   ├── nginx
│   │ ├── app.conf
│   │ ├── Dockerfile
│   │ └── nginx.conf
│   └── php
│       ├── Dockerfile
│       ├── entrypoint.sh
│       └── php.ini
└── project -> /home/user/projects/my_great_project/

The directory pointed to by the ./project link can be a directory with an existing Laravel project; it can also be a new project created using the laravel new command (remember to link the project’s main directory). Files in this directory can be edited directly from your IDE.

Let’s Build Containers

Now we can execute the command to build our containers:

docker-compose up --build

If there are no errors in the configuration (e.g., incorrect syntax), container building will start. This process downloads a lot of data from the internet and compiles the necessary tools, so it can take a long time. Progress is reported in the console in which we executed the command.

Host Settings

If the containers have been built correctly, we’re one step away from seeing our application in the browser. The last step is to assign the host name in the /etc/hosts file (Note: This is a file on the computer system, not in the Docker container).

Example entry (replace the IP with your nginx container’s address): <container_ip> myapp.test

The host name must match the one set in the ./etc/nginx/app.conf file, while the IP address is the web container’s address, which can be found with the docker inspect <container id> command. The container ID can be found by running docker ps. In our case, the container is named MyApplication_nginx.

To make it easier to find the IP addresses of containers, we can use the following Python script:



import subprocess
import json

p = subprocess.Popen(['docker', 'ps'], stdout=subprocess.PIPE)

lineNumber = 1
for line in p.stdout.readlines():
    fields = line.split()
    if lineNumber > 1:  # skip the header line of `docker ps`
        containerId = fields[0]
        containerName = fields[-1]

        inspect =['docker', 'inspect', containerId], stdout=subprocess.PIPE)
        data = json.loads(inspect.stdout.decode('utf-8'))
        networkMode = data[0]['HostConfig']['NetworkMode']
        print(containerName.decode('utf-8'), data[0]['NetworkSettings']['Networks'][networkMode]['IPAddress'])
    lineNumber += 1

The Configuration Is Ready

From this moment the application will be available at myapp.test.

Note: on macOS, access through the hostname won’t work; you should use the address localhost:<PORT>, where <PORT> is the HTTP port set in the .env file.

If you haven’t created the project in Laravel, you can create a ./project/public/index.php file and put a test string in it, e.g., “Testing docker”. After running the application in the browser, you should see it.
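That test file can be created in one step; here is a Python equivalent (the path assumes the ./project symlink created earlier, and the file contents are just the test string from above):

```python
from pathlib import Path

# create project/public/index.php with a test string
public = Path('project/public')
public.mkdir(parents=True, exist_ok=True)
(public / 'index.php').write_text('<?php echo "Testing docker";\n')
```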

If you’ve already connected the project to Laravel, you will have to adjust its configuration.

Laravel Configuration

You need to set the host address in the Laravel project’s .env file; in our case it will be APP_URL=http://myapp.test

We also have to configure the database according to the settings in Docker (the host name mysqldb comes from the links entry of the php service in docker-compose.yml, and the credentials must match the MYSQL_* values from the Docker .env):

DB_CONNECTION=mysql
DB_HOST=mysqldb
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=laravel
DB_PASSWORD=secret

If we use Redis, we also set its configuration:

REDIS_HOST=redis
REDIS_PORT=6379

It is also worth setting the email configuration in Laravel’s .env so that it points to Mailcatcher (Mailcatcher accepts SMTP on port 1025 inside the Docker network):

MAIL_DRIVER=smtp
MAIL_HOST=mailcatcher
MAIL_PORT=1025
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null

Access to the Container via SSH

Please note that Artisan commands have to be run in the Docker container, not on the host computer. To get to the container shell, execute the following command from the directory with our Docker configuration:

docker-compose exec php bash

We’ll be welcomed by a prompt like this:

root@<container_id>:/var/www/html#
The directory we land in is the main project directory, where we can run Artisan commands. From this level, we can also run composer install and grant permissions to files and directories. We’ll certainly need the following permissions:

chown www-data:www-data storage/logs/

chown www-data:www-data storage/framework/sessions/

chown www-data:www-data storage/framework/views

Note that the user in the container’s shell session is root, while the user running the web server is www-data. It can happen that we execute an Artisan command which creates a log file, and this file then has root ownership, preventing the web server from writing to it. To avoid this, open the config/logging.php file in the Laravel application and, in the channels -> daily section, set:

'path' => storage_path('logs/' . php_sapi_name() . '/laravel.log'),

Thanks to this, .log files will be saved in separate directories for the console and for the server:


storage/logs
|-- cli
|   `-- laravel-2019-05-09.log
`-- fpm-fcgi
    `-- laravel-2019-05-09.log

Instead of changing the daily channel, you can also create a separate logging channel and select it in .env, so that the change applies only to your development instance.


Cron

At this point, we can add an entry to cron. From the container shell, run the command:

crontab -e

And at the end of the file, we add a line:

* * * * * cd /var/www/html && php artisan schedule:run >> /dev/null 2>&1
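The five asterisks are cron's time fields (minute, hour, day of month, month, day of week); with all of them set to *, schedule:run fires every minute and Laravel's scheduler decides which tasks are actually due. A small sketch of how such an entry splits apart (the helper is our own, for illustration only):

```python
def split_cron_line(line: str):
    """Split a crontab entry into its five time fields and the command."""
    parts = line.split(None, 5)
    fields = dict(zip(['minute', 'hour', 'day', 'month', 'weekday'], parts[:5]))
    return fields, parts[5]

entry = '* * * * * cd /var/www/html && php artisan schedule:run >> /dev/null 2>&1'
fields, command = split_cron_line(entry)
# every field is '*', i.e. "every minute"
```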


HTTPS

If we want to have access to our application via the HTTPS protocol, we need to uncomment all the commented lines in the ./etc/nginx/app.conf file, and then restart the server. To do this, log into the web container:

docker-compose exec web bash

Run the command:

service nginx restart

If we look into the container log, we’ll find errors there:

No such file or directory: fopen('/etc/ssl/server.pem','r')

We must therefore create an SSL certificate. To do this, run the command from your system (not in a container) in the Docker configuration directory:

sudo docker run --rm -v $(pwd)/etc/ssl:/certificates -e "SERVER=myapp.test" jacoelho/generate-certificate

The myapp.test value must match the hostname from the ./etc/nginx/app.conf file.

The command will generate the necessary certificates in the ./etc/ssl directory:


├── cacert.pem

├── server.key

└── server.pem

The application will work at https://myapp.test. Of course, the browser will report an invalid certificate, but once it’s accepted, the site will work.


Mailcatcher

Our Mailcatcher is already configured. To open its panel, go to http://localhost:1080 in the browser (the port corresponds to the PORT_MAILCATCHER value from the Docker .env file). All emails sent from the application will be displayed in this panel. This is very useful for testing mailing locally, without releasing any emails into the world.

Accessing the Application from the Local Network

Sometimes we want to see how the application works on a mobile phone or another device on the local network. To enable this, we’ll set up a proxy on the computer using the Squid tool.

  1. Install Squid: sudo apt install squid
  2. Configure Squid in the /etc/squid/squid.conf file (I recommend making a copy of the original file first):

– comment out the http_access deny to_localhost line

– at the end of the file, set the IP range of our network: acl dom src <network_address>

– at the end of the file, also change http_access deny all to http_access allow all

My configuration (only the last lines of the file) looks like this:

visible_hostname weezy # this line doesn’t matter

http_port 8888 # on this port the proxy will work

hosts_file /etc/hosts # from this file the list of available hosts will be loaded

acl dom src # the address of our network (example range)

http_access allow all # allowing any access

  3. Restart Squid: service squid restart
  4. On the phone, in the Wi-Fi network settings, set the proxy to <computer_address>:8888
  5. Enter a host address from the /etc/hosts file in the browser on the phone, e.g., myapp.test, and there we go, you can see the app.

Attention! The above Squid configuration is only suitable for development purposes; using it in production is dangerous.

Using Docker in development greatly speeds up development time and takes out a lot of hassle from the first stages of development, which is especially important when working on multiple projects.
