WordPress deployment workflow to Google Container Engine using Circle-CI, Docker Hub and Kubernetes

We would like a GitHub repository to contain the deployment configuration of a WordPress blog, and to run an instance of this deployment on Google Container Engine / Kubernetes. In fact, we would like to run two instances: a production instance matching the stable branch, and a pre-prod instance matching the master branch of the deployment configuration. We would like the operated deployment instances to update automatically when we commit to the configuration repository.

Running WordPress Docker images

We’ve prepared a Docker image template with several improvements over the official WordPress container image.

We make heavy use of Composer for installing both WordPress core and WordPress plugins, with the plugins pulled from wpackagist.

Here is a snapshot of a typical etc/composer.json configuration to use with Docker:

    {
        "name": "rnd.feide.no",
        "description": "This is a technology blog at UNINETT AS",
        "version": "1.0.0",
        "type": "root",
        "keywords": [],
        "homepage": "http://rnd.feide.no",
        "minimum-stability": "stable",
        "repositories": [{
            "type": "composer",
            "url": "https://wpackagist.org"
        }],
        "require": {
            "composer/installers": "~1.0",
            "johnpbloch/wordpress": "~4.5.2",
            "johnpbloch/wordpress-core-installer": "~0.2",
            "wpackagist-plugin/dataporten-oauth": "~2.0",
            "wpackagist-plugin/disable-comments": "~1.5",
            "wpackagist-plugin/w3-total-cache": "~",
            "wpackagist-plugin/syntaxhighlighter": "~3.2.1",
            "wpackagist-plugin/wp-stateless": "~1.9.0"
        },
        "extra": {
            "wordpress-install-dir": "wordpress",
            "installer-paths": {
                "plugins/{$name}": ["type:wordpress-plugin"]
            }
        }
    }

An important plugin installed is W3 Total Cache, which improves performance. Notice that the w3tc folder with its configuration is also included in the config repo. This means that you will not be able to use the plugin settings UI to reconfigure it; instead you need to edit these config files. Also notice that we use local disk caching, where the cache is not shared between replicated WordPress nodes. Because of this, it is more effective to configure the load balancer to bind clients to one of the replicated instances, improving cache hits and hence performance. It is also worth exploring whether a shared cache store, such as Redis, a memcache cluster or S3, could be used in a simple way. One may also argue that there are reasons to run the WordPress container as a single instance.

To keep file uploads off the local disk, one should configure some kind of S3-like storage plugin. Because we would like to use Kubernetes on Google Container Engine, Google Storage is the appropriate store for uploads. We found the WP Stateless plugin to be our best choice. The plugin can be configured through a combination of environment variables and values stored in the database via the plugin's embedded settings UI.

For authentication using Dataporten we’ve developed a WordPress authentication plugin.

Deploying WordPress to Kubernetes, Google Container Engine (GKE)

We’ve included the configuration files for deploying WordPress to Google Container Engine (GKE) or any Kubernetes cluster.

kubectl create --namespace production -f etc-kube/secrets.yaml
kubectl create --namespace production -f etc-kube/deployment.json
kubectl create --namespace production -f etc-kube/service.json
kubectl create --namespace production -f etc-kube/ingress-ssl.yaml   # this is the SSL secret object exposed to the Ingress
kubectl create --namespace production -f etc-kube/ingress.yaml

We also create a duplicate environment for preprod testing:

kubectl create --namespace production -f etc-kube/secrets-testing.yaml
kubectl create --namespace production -f etc-kube/deployment-testing.json
kubectl create --namespace production -f etc-kube/service-testing.json
kubectl create --namespace production -f etc-kube/ingress-testing.yaml

Notice that we set up a separate Ingress for the preprod environment. This is important because we would like to connect to the preprod environment using the same hostname. We set up the preprod environment with the same certificate and hostname; it differs only in the IPv4 address, which we configure in our /etc/hosts file when testing.
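
When testing the preprod environment, this amounts to an /etc/hosts entry like the following, where the address is a hypothetical example and the hostname is the production one:

```
203.0.113.10   rnd.feide.no
```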

The secrets above are not included in the deployment repo, but contain these keys:

dbhost: ...
dbuser: ...
dbpassword: ...
dbname: ...
dataporten-clientid: ...
dataporten-clientsecret: ...
dataporten-scopes: ...
dataporten-rolesets: ...
dataporten-default-role-enabled: ...
auth-key: ...
secure-auth-key: ...
logged-in-key: ...
nonce-key: ...
auth-salt: ...
secure-auth-salt: ...
logged-in-salt: ...
nonce-salt: ...
gcserviceaccount.json: ...
stateless-media-bucket: ...
stateless-media-account: ...

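In a secrets.yaml manifest, each of these values is stored base64-encoded. A quick sketch of encoding and decoding one value (the value itself is hypothetical):

```shell
# Encode a secret value for secrets.yaml (-n: no trailing newline in the encoded data):
echo -n 'wordpress' | base64            # d29yZHByZXNz
# Decode it again, as the migration script does when reading secrets with kubectl:
echo 'd29yZHByZXNz' | base64 --decode   # wordpress
```
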
To stay on good terms with the GKE Ingress, we would like to avoid the trouble of taking more than one second to respond to GET /. The GKE Ingress sets up a health check that polls / twice a second, and it MUST get 200 OK every time within 1000 ms. WordPress cannot always deliver this, causing big problems. We added this to wp-config.php:

  // $headers is assumed to hold the request headers with lowercased keys:
  $headers = array_change_key_case(getallheaders(), CASE_LOWER);
  if (isset($headers["user-agent"]) && $headers["user-agent"] === 'GoogleHC/1.0') {
    echo 'OK';
    exit;
  }

We also run an HTTPS-only blog, meaning we would like to automatically redirect requests with the wrong value of x-forwarded-proto. We do this:

if (getenv('TLS') === 'true' && isset($headers["x-forwarded-proto"]) && $headers["x-forwarded-proto"] === 'http') {
    $redirect = 'https://' . getenv('HOST') . $_SERVER['REQUEST_URI'];
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: ' . $redirect);
    exit;
}
if (getenv('TLS') === 'true') {
    // Assumed completion: tell WordPress that the original request was over HTTPS.
    $_SERVER['HTTPS'] = 'on';
}

We have a replicated instance of the database that we use for preprod/testing. This is reflected in secrets-testing.yaml. In the automation we replicate the database whenever we build a new image for the preprod instance. Later we will see how to automatically update these two deployments from two separate git branches of the deployment configuration.

Automated deployment

The approach we would like to use to apply updates is as follows:

We want to be able to test changes locally. We build a local image and run it as follows:

docker build -t uninettno/feidernd:testing .
docker run -p 8080:80 --env-file etc/ENV uninettno/feidernd:testing

For faster development we mount the directories that we are actively changing and testing. We use a local ENV file pointing to the testing instance of the database.
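
A sketch of such a local etc/ENV file; all values here are hypothetical, and the variable names mirror the secret keys above and the TLS/HOST variables read in wp-config.php:

```
DBHOST=testing-db.example.org
DBNAME=wordpress
DBUSER=wordpress
DBPASSWORD=changeme
TLS=false
HOST=localhost:8080
```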

Some tweaking is necessary when running an HTTPS-only site: typically, use a self-signed certificate, respond on 443, and configure your browser to accept it.

When we are confident that the configuration makes sense we commit an update to the master branch of the deployment config repository.

Each commit triggers an automated deployment run. We found Circle-CI to be the most intuitive for this specific task, but you could do the same with Travis-CI or Jenkins.

We also considered using Docker Hub for automated builds, but did not succeed in completing the setup, as there was no simple way of triggering the script needed to update the Kubernetes deployment from Docker Hub.

We did indeed use Docker Hub for storing the images that we built on Circle-CI.

Summarized, the circle.yml file sets up some environment variables, then configures trust and the CLI tools.

Next, the image is built and uploaded to Docker Hub. We tag it with build-${CIRCLE_BUILD_NUM}, and we keep information in the built image about whether it comes from the master or stable branch.
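
The tagging convention can be sketched as follows; the build and push commands are commented out since they require Docker and registry credentials:

```shell
CIRCLE_BUILD_NUM=42   # provided by Circle-CI at build time
IMAGE="uninettno/feidernd:build-${CIRCLE_BUILD_NUM}"
echo "${IMAGE}"       # uninettno/feidernd:build-42
# docker build -t "${IMAGE}" .
# docker push "${IMAGE}"
```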

We do a simple test that running the container succeeds, but no sophisticated testing of the built container beyond that.

After the image is updated, we do a kubectl patch deployment depending on whether we run the master or stable branch. The section looks like this:

      branch: master
        - echo "Deployment [MASTER / Preprod] ${IMAGE}"
        - kubectl patch deployment ${KUBERNETES_DEPLOYMENT_TESTING} -p "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"${KUBERNETES_DEPLOYMENT_TESTING}\",\"image\":\"${IMAGE}\"}]}}}}"
        - ./bin/migratedb.sh
      branch: stable
        - echo "Deployment [STABLE / Production] ${IMAGE}"
        - kubectl patch deployment ${KUBERNETES_DEPLOYMENT} -p "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"${KUBERNETES_DEPLOYMENT}\",\"image\":\"${IMAGE}\"}]}}}}"

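The -p payload in the patch commands above is plain JSON once the shell escapes and variables are resolved. A quick way to sanity-check it locally, with hypothetical values for the two variables:

```shell
KUBERNETES_DEPLOYMENT_TESTING=feidernd-testing   # hypothetical deployment name
IMAGE=uninettno/feidernd:build-42                # hypothetical image tag
# Build the same payload string that kubectl patch receives:
PATCH="{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"${KUBERNETES_DEPLOYMENT_TESTING}\",\"image\":\"${IMAGE}\"}]}}}}"
echo "${PATCH}"
```
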
Notice that when we update the testing version from the master branch, we run the database migration script, which looks like this:

export DBHOST=`kubectl --namespace production get secrets feidernd  -o 'go-template={{index .data "dbhost"}}' | base64 --decode`
export DBNAME=`kubectl --namespace production get secrets feidernd  -o 'go-template={{index .data "dbname"}}' | base64 --decode`
export DBUSER=`kubectl --namespace production get secrets feidernd  -o 'go-template={{index .data "dbuser"}}' | base64 --decode`
export DBPASSWORD=`kubectl --namespace production get secrets feidernd  -o 'go-template={{index .data "dbpassword"}}' | base64 --decode`
export TESTING_DBHOST=`kubectl --namespace production get secrets feidernd-testing  -o 'go-template={{index .data "dbhost"}}' | base64 --decode`
export TESTING_DBNAME=`kubectl --namespace production get secrets feidernd-testing  -o 'go-template={{index .data "dbname"}}' | base64 --decode`
export TESTING_DBUSER=`kubectl --namespace production get secrets feidernd-testing  -o 'go-template={{index .data "dbuser"}}' | base64 --decode`
export TESTING_DBPASSWORD=`kubectl --namespace production get secrets feidernd-testing  -o 'go-template={{index .data "dbpassword"}}' | base64 --decode`

echo "mysqldump -u ${DBUSER} --password="xxx" -h ${DBHOST} ${DBNAME}"
echo "mysql -u ${TESTING_DBUSER} --password="xxx" -h ${TESTING_DBHOST} ${TESTING_DBNAME}"

mysqldump -u ${DBUSER} --password="${DBPASSWORD}" -h ${DBHOST} ${DBNAME} | mysql -u ${TESTING_DBUSER} --password="${TESTING_DBPASSWORD}" -h ${TESTING_DBHOST} ${TESTING_DBNAME}

Remaining work

I’m not satisfied with the current performance. It might be related to the distant database, currently running in-house. It might be disk I/O, CPU or memory. I’ll be happy to get tips about this, and I’ll try to update this article with more details on performance after investigation.

Credits and copyright

This article and work is a joint effort between Andreas Åkre Solberg and Kasper Rynning-Tønnesen.

All text, documentation and configuration provided by us is licensed as public domain, except for themes and existing software which carry other licences.
