Edit content locally and deploy WordPress with Docker Compose

In the previous post, Getting started with Docker Compose and WordPress, we learned how to set up a WordPress website with Docker Compose. In this post, we’re going to populate that site with content and learn how to deploy WordPress using Docker Compose. For the deployment target, we’ll use an AWS EC2 instance running Docker; you can learn how to configure that instance here.

Introduction to WordPress

If you are coming here from the previous post, you should have WordPress installed and can log in to the dashboard at http://localhost:8000/wp-admin/:

WordPress Dashboard

This is the “back-end” of your WP website. Click on “My Site” in the upper-left corner to see the front-end:

WordPress front-end

WordPress is a Content Management System (CMS): very simply, this means it has a UI for adding and editing content, and templates for displaying that content to front-end users. You can customize WordPress in many ways with the tools accessible from the admin panel, and you can go further by modifying the code or adding your own.

Step 1

Setting Permalinks

One of the first things you want to do for almost any WP project is set up permalinks. Go back to the dashboard and then click Settings -> Permalinks.

Setting Permalinks

Permalinks are just the “permanent links,” or URLs, for your posts, pages, and other content. The default link style uses the WP post IDs, which might make sense to WordPress, but it’s not very human-friendly, nor is it good for SEO.

Change the Permalink “Common Settings” to Post name and click the Save button.

Step 2

Customize Appearance Settings

Next, go to Appearance -> Customize. The Customize screen lets you change certain settings and preview their effect on the front-end. WordPress themes can add additional options with the Customize API. We are using Twenty Seventeen, one of the default themes that ships with WordPress. It has a neat single-page layout that displays content from several pages on the front page of the website with an interesting scrolling effect. We are going to set this up now.

First, go to Homepage Settings and make sure the “A static page” option is selected, the Homepage is set to Home, and the Posts page is set to Blog, then click Publish. You can now visit the front-end in a new tab at http://localhost:8000; WordPress adds some dummy pages to demonstrate the settings.

Now go back to the Customize tab and click Theme Options. This is where you control the ordering of the pages that make up the homepage sections. We are going to change the page sections here, but first we need to create some new pages. Before we do that, go back to the Customize page, open the Colors menu, select the Dark color scheme, and click Publish.

Step 3

Add Pages

Go back to the WordPress dashboard and navigate to Pages -> All Pages. Here we can see the pages that are showing up on our homepage. Click Add New next to the Pages title.

We need some featured images for our pages, so download these free images that I found on pexels.com:

images1.zip

On the Add New Page screen, give the page the title Planes and set the featured image (lower right-hand corner) to the planes.jpg image: click Upload Files, select the file from your hard drive, and click Set featured image. Finally, click Publish.

Now click Add New to create another page. Give this page the title Trains, set the featured image to train.jpg, and publish the page.

Publish another page titled Automobiles using the automobiles.jpg image.

Step 4

Setting the homepage sections

Now we need to go back to Appearance -> Customize and open Homepage Settings. Change the Homepage from Home to Planes and click Publish. Then go to the Theme Options section: change the setting for Front Page Section 1 Content to Trains, change Section 2 to Automobiles, and set Sections 3 and 4 to — Select — so that no page is assigned. Next, open the Header Media options in the Customize menu, find the settings for Current Header, click Add new image, and upload the header.jpg image from the images you downloaded. Now click Publish and view the front-end of your site.

Step 5

Updating the menu

At this point things should be looking pretty good, except the menu will still have links for Home, Contact, Blog, etc. We can fix this by going to the dashboard and navigating to Appearance -> Menus. Click create a new menu, give the menu the name transportation, and click Create Menu. Next, add links for the pages Planes, Trains, and Automobiles, then use the drag-and-drop editor under Menu Structure to sort the links in that order. Where it says Display Location, choose Top Menu and click Save Menu.

If we test out the front-end homepage now things are looking pretty good:

Front-end of finished site

If this were going to be a real website we would probably want to add more meaningful content and features, but it’s not, so next we are going to learn how to deploy WordPress with a copy of our website’s code, images, and database using docker-compose.

Step 6

Preparing to deploy the site

Now that we have a website it is time to unleash it on the world. Before we do, we need to consider which parts of the site to move, and how, as well as how our hosting architecture will differ from the local development version. We need to move our database, which holds our pages and customization settings, and our uploads directory, which holds the image files. The rest of the site files can be regenerated by the WordPress Docker image. Also, while it was convenient for local development to have the database in a Docker container alongside our app, in a production environment it makes more sense to use a dedicated database host such as AWS RDS. So we will modify our docker-compose file to start only a WordPress container, with environment variables that point it at an RDS database. We’ll also need a way to transfer the local database to RDS.

We can create a script that extracts the local database, updates the site URLs for the hosting address, and then uploads the resulting .sql file into a database on RDS. To launch the site we will use $ docker-compose up -d, and to transfer the images and deployment files to our EC2 instance we will set up a remote repository on GitLab to pull from.

Step 7

Setting up the database

Recall from setting up the docker-compose.yml that the WordPress container depends on MySQL, which means we should set up the RDS database before we try to launch our containers on EC2.

Head over to your AWS console and find the RDS dashboard. Launch a new DB instance, select the MySQL engine, then select Dev/Test MySQL.

RDS Options

Click Next. On the DB details page, select the db.t2.micro instance class, with General Purpose (SSD) storage and 20GB of Allocated Storage.

In the Settings section, set the Instance Identifier to wordpress and the master username to root, then enter a password and write it down so you don’t forget it. I just used password, since I’m going to destroy this instance later in the tutorial.

DB Details

Click Next to configure the Advanced Settings. On this page the Network and Security settings are fine with the defaults. Set the Database name to wordpress and the Backup retention period to 0 days. We don’t need to export any logs, so click Launch DB Instance.

Advanced Settings

Step 8

Migrating the local database

You have lots of options for this: you could use WordPress plugins or the built-in import/export tools. Instead, we will write a bash script to export the database and import it to RDS for us. I like to use a script for this because scripts are portable and easy to use, and they automate part of the process, reducing the probability of human error. We can also leverage this script later when setting up an automated CI/CD pipeline.

A bash script is basically just a series of commands that execute in order, as though you entered them manually in the terminal. To write one, we walk through the steps it needs to perform and identify what information it will need and how to supply those variables.
Let’s outline what the script needs to do and start adding the commands to make that happen:

  1. Export the database content from the MySQL Docker container.
  2. Rename any links that have ‘localhost:8000’ to the new environment’s URL.
  3. Import the modified DB content into the RDS database.

To start db_migrate.sh we will add the “shebang” to specify bash as the interpreter for this script. Next we’ll add a docker exec command to generate the database export file, then we’ll see what variables we need.

#! /bin/bash

docker exec "$CONTAINER_NAME" /usr/bin/mysqldump -u root --password="$ROOT_PW" "$DATABASE" > "$DUMPFILE"

I’ve identified four variables that we need to supply to the command. We can derive all of these values from the docker-compose.yml file, which allows us to reuse this script and compose file for other projects.

For the $CONTAINER_NAME variable, we need to match the value shown when we execute docker ps, “wpsite_db_1”. We can see that docker-compose takes the directory name, strips out any hyphens, and then appends “_db_1” to generate the container name, so we can do the same thing. For the database name and root password, we can use the sed utility to extract the values from the docker-compose.yml file.
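As a quick sanity check, both derivations can be tried in isolation. This sketch assumes a hypothetical project directory named wp-site and writes a throwaway compose snippet to /tmp:

```shell
# Derive the container name the way docker-compose (v1) does:
# strip hyphens from the directory name, then append "_db_1".
DIRECTORY="wp-site"                       # hypothetical project directory
CONTAINER_NAME="${DIRECTORY//-/}_db_1"
echo "$CONTAINER_NAME"                    # wpsite_db_1

# Extract a value from a compose file with sed: -n suppresses normal
# output, and the "p" flag prints only lines where the substitution matched.
printf '      MYSQL_DATABASE: wordpress\n' > /tmp/compose-snippet.yml
DATABASE=$( sed -n -e 's/^.*MYSQL_DATABASE: //p' /tmp/compose-snippet.yml )
echo "$DATABASE"                          # wordpress
```

The same pattern extracts MYSQL_ROOT_PASSWORD; only the key name in the sed expression changes.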

Add in the shell variables above the docker exec... line:

#! /bin/bash

DIRECTORY="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd | rev | cut -d '/' -f1 | rev )"
CONTAINER_NAME="${DIRECTORY//-/}_db_1"
DATABASE=$( echo $( cat "./docker-compose.yml" | sed -n -e 's/^.*MYSQL_DATABASE: //p' ) )
ROOT_PW=$( echo $( cat "./docker-compose.yml" | sed -n -e 's/^.*MYSQL_ROOT_PASSWORD: //p' ) )
DUMP_FILE="${DATABASE}_$( date +%Y-%m-%d-%Hh%Mm%Ss ).sql"

echo "$CONTAINER_NAME"
echo "$DATABASE"
echo "$ROOT_PW"
echo "$DUMP_FILE"

docker exec "$CONTAINER_NAME" /usr/bin/mysqldump -u root --password="$ROOT_PW" "$DATABASE" > "$DUMP_FILE"

Now when we run the script we can check all of the variables that it outputs:

You can also look for the generated .sql file and check its contents. Since everything looks good, we can remove the echo statements and proceed to rename the URLs in our .sql file, again using sed:

#! /bin/bash

DIRECTORY="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd | rev | cut -d '/' -f1 | rev )"
CONTAINER_NAME="${DIRECTORY//-/}_db_1"
DATABASE=$( echo $( cat "./docker-compose.yml" | sed -n -e 's/^.*MYSQL_DATABASE: //p' ) )
ROOT_PW=$( echo $( cat "./docker-compose.yml" | sed -n -e 's/^.*MYSQL_ROOT_PASSWORD: //p' ) )
DUMP_FILE="${DATABASE}_$( date +%Y-%m-%d-%Hh%Mm%Ss ).sql"

docker exec "$CONTAINER_NAME" /usr/bin/mysqldump -u root --password="$ROOT_PW" "$DATABASE" > "$DUMP_FILE"

sed -i "" "s/${OLD_URL}/${NEW_URL}/g" "./${DUMP_FILE}"
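The substitution itself can be previewed on a one-line sample file (the file name below is made up for illustration; the -i.bak form creates a backup and works with both GNU and BSD sed):

```shell
# Stand-in values for the two variables the script will prompt for.
OLD_URL="localhost:8000"
NEW_URL="test.com"

# Write a sample line, rewrite it in place, then show the result.
echo "http://localhost:8000/wp-admin/" > /tmp/url-sample.sql
sed -i.bak -e "s/${OLD_URL}/${NEW_URL}/g" /tmp/url-sample.sql
cat /tmp/url-sample.sql      # http://test.com/wp-admin/
```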

You can see that we added two new variables here, and this is where things get a little tricky, because we can’t really know what these values should be ahead of time. As a project matures you’ll probably want to settle on a naming convention for your DNS addresses and store this information in environment configuration files; for now, we will prompt the user to supply the URLs. We place these prompts at the top of the script so that we can abort quickly if the user does not supply adequate input values:

#! /bin/bash

echo -n "Old URL (localhost:8000): "
read OLD_URL
OLD_URL=${OLD_URL:="localhost:8000"}
echo -n "New URL: "
read NEW_URL;

if [ -z "$NEW_URL" ]
then
      echo "You must set a New URL."
      exit 1
fi

DIRECTORY="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd | rev | cut -d '/' -f1 | rev )"
CONTAINER_NAME="${DIRECTORY//-/}_db_1"
DATABASE=$( echo $( cat "./docker-compose.yml" | sed -n -e 's/^.*MYSQL_DATABASE: //p' ) )
ROOT_PW=$( echo $( cat "./docker-compose.yml" | sed -n -e 's/^.*MYSQL_ROOT_PASSWORD: //p' ) )
DUMP_FILE="${DATABASE}_$( date +%Y-%m-%d-%Hh%Mm%Ss ).sql"

docker exec "$CONTAINER_NAME" /usr/bin/mysqldump -u root --password="$ROOT_PW" "$DATABASE" > "$DUMP_FILE"

sed -i "" "s/${OLD_URL}/${NEW_URL}/g" "./${DUMP_FILE}"

It is safe to assume that the old URL will be “localhost:8000”, so we provide that as the default value; the new URL has no default, so we check whether the user provided a value and bail if they haven’t. Note that sed -i "" is the BSD (macOS) form of in-place editing; on GNU/Linux, drop the empty quotes and write sed -i.
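The ${VAR:=default} expansion used for the old URL can be seen on its own; it assigns the default only when the variable is unset or empty:

```shell
# ":=" assigns the default when the variable is unset or empty...
unset OLD_URL
OLD_URL=${OLD_URL:="localhost:8000"}
echo "$OLD_URL"          # localhost:8000

# ...but leaves an existing value alone.
OLD_URL="example.local:8080"
OLD_URL=${OLD_URL:="localhost:8000"}
echo "$OLD_URL"          # example.local:8080
```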

Now execute the script again. You can leave the “Old URL” option empty and enter “test.com” for the new URL; then we can check the SQL export file to see if the ‘siteurl’ has been updated.

Testing Renaming

Now you can use grep to find the “test.com” string in the SQL file. You use grep by entering the string you want to search for, followed by a path to search in; we will use the -o option to output only the matches, then pipe (|) the results to wc -l to count them:

grep-ing for new url

This tells us that the string “test.com” appears 55 times in the wordpress_….sql file, so we can rest assured that the links are being updated.
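The same grep | wc check can be reproduced on a tiny stand-in dump file (the file name and contents here are invented for illustration):

```shell
# Build a three-line stand-in for the exported dump; two lines contain the URL.
printf "option 'siteurl' 'http://test.com'\noption 'home' 'http://test.com'\nother row\n" > /tmp/sample-dump.sql

# -o prints each match on its own line, so wc -l counts occurrences.
MATCHES=$( grep -o "test.com" /tmp/sample-dump.sql | wc -l )
echo "$MATCHES"          # 2
```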

All that is left is to import the SQL into the RDS database:

#! /bin/bash

echo -n "Old URL (localhost:8000): "
read OLD_URL
OLD_URL=${OLD_URL:="localhost:8000"}
echo -n "New URL: "
read NEW_URL;

if [ -z "$NEW_URL" ]
then
      echo "You must set a New URL."
      exit 1
fi

DIRECTORY="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd | rev | cut -d '/' -f1 | rev )"
CONTAINER_NAME="${DIRECTORY//-/}_db_1"
DATABASE=$( echo $( cat "./docker-compose.yml" | sed -n -e 's/^.*MYSQL_DATABASE: //p' ) )
ROOT_PW=$( echo $( cat "./docker-compose.yml" | sed -n -e 's/^.*MYSQL_ROOT_PASSWORD: //p' ) )
DUMP_FILE="${DATABASE}_$( date +%Y-%m-%d-%Hh%Mm%Ss ).sql"

docker exec "$CONTAINER_NAME" /usr/bin/mysqldump -u root --password="$ROOT_PW" "$DATABASE" > "$DUMP_FILE"

sed -i "" "s/${OLD_URL}/${NEW_URL}/g" "./${DUMP_FILE}"

mysql -u root \
    --port=3306 \
    --host="${RDS_ENDPOINT}" \
    "-p${RDS_ROOT_PW}" \
    --database="${DATABASE}" < "./${DUMP_FILE}";

exit 0

This section of the script calls for two more variables that we haven’t set previously: $RDS_ENDPOINT and $RDS_ROOT_PW. We need to prompt the user for these as well, and, like $NEW_URL, exit the script with an error code if values are not provided.

#! /bin/bash
echo -n "Old URL (localhost:8000): "
read OLD_URL
OLD_URL=${OLD_URL:="localhost:8000"}
echo -n "New URL: "
read NEW_URL;
echo -n "RDS Endpoint: "
read RDS_ENDPOINT;
echo -n "RDS Root Password: "
read RDS_ROOT_PW;

if [ -z "$NEW_URL" ]
then
      echo "You must set a New URL."
      exit 1
elif [ -z "$RDS_ENDPOINT" ]
then
      echo "You must provide an RDS Endpoint."
      exit 1
elif [ -z "$RDS_ROOT_PW" ]
then
     echo "You must provide an RDS Root Password."
     exit 1
fi

DIRECTORY="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd | rev | cut -d '/' -f1 | rev )"
CONTAINER_NAME="${DIRECTORY//-/}_db_1"
DATABASE=$( echo $( cat "./docker-compose.yml" | sed -n -e 's/^.*MYSQL_DATABASE: //p' ) )
ROOT_PW=$( echo $( cat "./docker-compose.yml" | sed -n -e 's/^.*MYSQL_ROOT_PASSWORD: //p' ) )
DUMP_FILE="${DATABASE}_$( date +%Y-%m-%d-%Hh%Mm%Ss ).sql"

docker exec "$CONTAINER_NAME" /usr/bin/mysqldump -u root --password="$ROOT_PW" "$DATABASE" > "$DUMP_FILE"

sed -i "" "s/${OLD_URL}/${NEW_URL}/g" "./${DUMP_FILE}"

mysql -u root \
    --port=3306 \
    --host="${RDS_ENDPOINT}" \
    "-p${RDS_ROOT_PW}" \
    --database="${DATABASE}" < "./${DUMP_FILE}";

exit 0

Ok, now we are going to run the completed script and, hopefully, migrate the database. First, go to your AWS console; we need to obtain the RDS Endpoint and the new URL, which is the public DNS for the EC2 instance.

RDS Endpoint

new url

Now execute the db_migrate.sh script and enter all of the details. It will probably take a little longer to run this time, since it is updating a database over the network. Now that the database is squared away, we can turn our attention to deploying our WordPress Docker container.

Step 9

Configure Docker Compose for AWS

As mentioned previously, we need to create a separate docker-compose file for the EC2 environment.

Create a new file called aws-compose.yml:

version: '3'

services:
   wordpress:
     image: wordpress:latest
     ports:
       - "80:80"
     volumes:
       - "./wp-data:/var/www/html"
       - "./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini"
     restart: always
     environment:
       WORDPRESS_DB_NAME: wordpress
       WORDPRESS_DB_HOST: wordpress.chidrucdqowj.us-east-1.rds.amazonaws.com:3306
       WORDPRESS_DB_USER: root
       WORDPRESS_DB_PASSWORD: password

Make sure the WORDPRESS_DB_HOST matches your RDS Endpoint.

Now we need to set up a git repo so we can push the code to a central repository that the EC2 instance can pull from. Execute the git init command in the root directory of the project:

$ git init

Before we commit the files, we need to create a .gitignore file to exclude the WordPress core files from the repository, since we want to install a fresh copy of these when we launch the container.

wp-data/*
!wp-data/wp-content

wp-data/wp-content/*
!wp-data/wp-content/uploads

This might look a little strange, but basically we need to tell git to ignore the entire WordPress container volume directory, and then un-ignore the uploads directory with the ! prefix. Now we can commit the files:

$ git add .
$ git commit -m "initial commit."
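To double-check that the ignore rules behave as described, you can run git check-ignore in a throwaway repo; the file names below are hypothetical stand-ins for the WordPress volume contents:

```shell
# Set up a disposable repo with the .gitignore rules from above.
cd "$(mktemp -d)"
git init -q
printf 'wp-data/*\n!wp-data/wp-content\nwp-data/wp-content/*\n!wp-data/wp-content/uploads\n' > .gitignore
mkdir -p wp-data/wp-content/uploads
touch wp-data/index.php wp-data/wp-content/uploads/planes.jpg

# check-ignore exits 0 when a path is ignored, 1 when it is not.
git check-ignore -q wp-data/index.php && echo "core file: ignored"
git check-ignore -q wp-data/wp-content/uploads/planes.jpg || echo "uploads: kept"
```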
Step 10

Push the repository

Now you need to push the repository to a publicly accessible git server. We like to use gitlab.com: you can sign up for a free account and store unlimited public and private repositories, plus there is an integrated Docker container registry and other DevOps and project management tools to keep your projects running smoothly when working in a team.

Sign up for an account on GitLab now, and create a new public project called wp-site.

Create repo on Gitlab

Then follow the instructions for adding an existing git repository:

$ cd existing_repo
$ git remote add origin https://gitlab.com/[username]/wp-site.git
$ git push -u origin --all
$ git push -u origin --tags
Step 11

Deploy WordPress repo to EC2 Docker instance

Now we can SSH into our EC2 instance to clone the repository and start the WordPress service with the aws-compose.yml file.

Navigate to the EC2 Dashboard to get the connection information, then connect to your instance with ssh (substituting your own key, instance address, and repository URL):

$ ssh -i "your-key.pem" ec2-user@ec2-"your instance ip".compute-1.amazonaws.com
$ git clone https://gitlab.com/jplack/wp-site.git
$ docker-compose -f aws-compose.yml up -d

Now you can navigate to the EC2 public DNS address in the browser and see our site deployed on AWS.

From here you can register a domain name for your site and continue to build it out, or add more automation to the deployment process: perhaps using the AWS CLI to fill in some of the missing variables, creating more scripts, and employing GitLab runners to automatically deploy when a new push is made to the repository.
