Your first php application inside a Docker container in 15 minutes (Part 3)

How it comes together

In Parts 1 and 2 of this blog post, we talked about Docker containers, and actually created an Ubuntu 18.04 based LAMP stack inside a container. In this third and final post, we will look at how it all came together, without going into too much detail. There are plenty of other write-ups out there which you can refer to if you want to see how specific commands work. As already mentioned, the scope of this blog post is merely to scratch the surface of what Docker is and what it can do.

If you are only interested in dockerising one of your existing php projects, without delving too much into the internals of the technology, all you have to do is:

  1. Dump the SQL Database of your app, and save it as “db_dump.sql” in the sql_dump folder
  2. Copy the php / html files including all sub folders to the “public_html” folder
  3. Update the Credentials in docker/Dockerfile (lines 5 to 10)
  4. Change the php version required (docker/Dockerfile line 7)
  5. Add / Remove any php modules needed / not needed by your app
  6. Run docker-compose up --build
  7. Test / Debug the app (http://localhost:2080 and ssh root@localhost -p 2022)

On the other hand however, I urge you to modify the Dockerfile, and break things in order to better understand how everything works. I will go through the process as briefly as possible below to get you started.

The “Docker” file

I will ignore references to supervisord in the Dockerfile. As you might (or might not) already know, supervisord is a process control system available on Ubuntu and other Linux distros. You can read more about it in its documentation, but it is beyond the scope of this write-up to delve into how it works.

The Dockerfile contains “instructions” which Docker uses to build our environment. The first line:

FROM ubuntu:18.04

Sets the base image of our container. This must be the very first instruction inside the Dockerfile. You can use docker search <search term> to look for available images. For example, docker search ubuntu lists available images which contain the word ubuntu in their names.

The MAINTAINER directive is there for information purposes only.

USER root

USER root tells Docker which user the subsequent instructions should run as.

ARG sshpass="123456"
ENV php_version="7.1"
ENV sql_db_name="johannfe_test_db"
ENV sql_db_user="johannfe_test_user"
ENV sql_db_pass="johannfe_db_pass"

The above are variables which will be used later on in the script.

sshpass will be used to set up the root ssh password.

php_version will be used by the apt-get command to install the necessary php modules.

sql_db_xxxxx will be used to set up the mysql database, username and password.

A short note about ARG and ENV. In a nutshell the main difference between ARG and ENV is that ARG variables are only available during the image build phase, while ENV variables are still available after the container itself is launched. There are instances where both will work, but you should choose the right keyword depending on the scope in which you need them to work.
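The difference is easy to see in a minimal standalone Dockerfile. This is just a sketch for illustration, not part of the project's actual Dockerfile:

```dockerfile
FROM ubuntu:18.04

# ARG: exists only while `docker build` runs
ARG sshpass="123456"

# ENV: baked into the image, still set when the container runs
ENV php_version="7.1"

# Both are visible here, because RUN executes during the build
RUN echo "root:$sshpass" | chpasswd

# In the running container, `echo $php_version` prints 7.1,
# while `echo $sshpass` prints nothing
```

If you build this image and open a shell in the container, only php_version survives into the runtime environment.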

The WORKDIR directive sets the base directory for the commands that follow.

RUN executes commands inside the container itself, so directives like apt-get belong to the environment chosen in the “FROM” directive on the first line. As you can see, we use apt-get a lot to install packages, just like we would on a machine running Ubuntu 18.04.

DEBIAN_FRONTEND=noninteractive is used to stop commands from pausing and waiting for user interaction. It is a Debian/Ubuntu environment variable and has nothing to do with Docker as such.
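Put together, a typical install step in such a Dockerfile looks something like the following. The exact module list here is illustrative; check the project's Dockerfile for the full set:

```dockerfile
# Illustrative only -- the real Dockerfile installs a longer list of modules
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
    apache2 \
    php${php_version} \
    php${php_version}-mysql \
    php${php_version}-curl
```

Note how the php_version variable set earlier drives which package versions apt-get installs.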

Setup SSH Access and allow root to login

RUN mkdir /var/run/sshd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN echo "root:$sshpass" | chpasswd
RUN service ssh restart

This is exactly what we would do to permit root login via ssh on an Ubuntu machine. Essentially we are using sed to change PermitRootLogin prohibit-password to PermitRootLogin yes in /etc/ssh/sshd_config.
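You can watch the substitution work outside Docker as well. Here is the same sed call run against a one-line sample file (written to /tmp so nothing real is touched):

```shell
# Create a sample file containing the stock Ubuntu sshd_config line
printf '#PermitRootLogin prohibit-password\n' > /tmp/sshd_demo

# The same substitution the Dockerfile performs
sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /tmp/sshd_demo

cat /tmp/sshd_demo   # → PermitRootLogin yes
```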

COPY ./docker/configs/supervisord/supervisord.conf /etc/supervisor/conf.d/supervisord.conf

Does exactly what you think it does: it copies the config file we have under docker/configs/supervisord into the container at /etc/supervisor/conf.d/supervisord.conf

COPY public_html /var/www/html/public_html

Copies all the files and sub directories inside public_html into /var/www/html/public_html inside the container. Note that if you change this path, you will need to update docker/configs/apache/000-default.conf, just like you would on an Ubuntu server.

The /configs/apache/000-default.conf file is copied to /etc/apache2/sites-available/ inside the container
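For reference, a minimal 000-default.conf pointing Apache at that path could look like this. This is a sketch; your real file may carry more directives:

```apache
<VirtualHost *:80>
    # Must match the COPY destination inside the container
    DocumentRoot /var/www/html/public_html

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```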

EXPOSE 22 80 3306 

The EXPOSE command tells Docker to expose the ports, but without publishing them to the host machine. Since ports 22, 80, and 3306 are likely to be already in use, we will NAT them via 2022, 2080, and 2036. This is done inside docker-compose.yml:

         ports:
          - "2080:80" # Both 2080 and 8000 port are using apache
          - "2022:22"
          - "2036:3306"

Docker Volumes

Volumes are used to make data persistent. Docker containers can be destroyed and rebuilt; imagine, for example, destroying a container which contained the MySQL database, and losing all the data.

In order to avoid this, the VOLUME directive instructs Docker to create “links” from certain paths inside the container to paths outside it, on the host machine. That way, if the container needs to be rebuilt, say because you wish to upgrade a particular module, change something in the config, or build the whole thing using a different distro for testing, the data will not be lost.

The syntax is quite straightforward:

VOLUME ["/var/lib/mysql", "/var/log/mysql", "/var/log/apache2"]

As per the line above, /var/lib/mysql, /var/log/mysql, /var/log/apache2 will be links to the outside world, and the data inside them will live on the host machine in directories configured inside docker-compose.yml

         volumes:
          - ./app:/app
          - ./mysql:/var/lib/mysql
          - ./var_log:/var/log

You will also notice that after you run docker-compose up --build for the first time, two new directories (mysql and var_log) will appear in the docker directory. These will contain the mysql database files and the respective log files.

Import SQL Data

The final part of the Dockerfile deals with creating a script file inside /tmp to import the sql dump into the database. The script is generated based on the variables set in the first few lines of the Dockerfile, and is later called from docker/app/docker_start.sh line 27. Immediately after execution, this file is destroyed in line 28. If you wish to debug issues with the import, comment out line 28, and you can run the import sequence manually to see why it is failing.

Pay special attention to passwords containing characters which would require escaping, like " and '.
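For example, single-quoting such a value in the shell keeps both quote characters literal; here is a quick demo you can run anywhere:

```shell
# A password containing both a double quote and a single quote
sql_db_pass='pa"ss'\''word'

# Double-quoting the expansion preserves it exactly
printf '%s\n' "$sql_db_pass"   # → pa"ss'word
```

If a value like this is instead pasted unquoted into a generated script, the quotes terminate the string early and the import fails.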

The public_html directory

The public_html directory contains a very simple app to demonstrate a connection to a mysql database, and display information about the php modules installed. It is ugly, yes, but it serves the purpose. Once you get the container up and running for the first time, just put in any running app you might have already, or just write one to test and experiment further.

Conclusions

As mentioned at the very beginning of this post, the idea was to show you in 15 minutes or less what Docker can do. We only barely scratched the surface, but if you, like me, like a hands-on approach to learning new technologies, I believe this should get you well on your way. The next steps would be to learn how to use separate containers for the different modules, starting with separating the MySQL database from the rest, and later using nginx or apache as a reverse proxy to different containers and ports depending on the host name in the URL.

Hope you found this post useful, and please do let me know if you notice anything wrong or not explained well in the post.

Happy Dockering !!
