July 30, 2021

PHP Orchestration With ZendPHP Docker Images


In a recent webinar, we detailed various ways to perform orchestration, which we define as:

The process of describing the resources that make up a computing system, and how they interact.

Orchestration helps you deploy both simple and complex application environments, ensuring that all systems required to allow the application to function fully are present. In this post, we'll do a deep dive into container orchestration with Docker and related technologies, and provide you with resources on how you can start orchestrating your applications using ZendPHP.

Try ZendPHP Free 


Containers and Docker

Containers are a long-time feature of most operating systems. Generally speaking, however, we think of containers in terms of the Linux kernel, and most approaches to containerization build on Linux kernels. The Linux kernel provides a feature that allows processes to be organized into hierarchical groups whose access to and usage of resources can be monitored and controlled; these are called "control groups" (cgroups).
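
If you are curious, you can see control groups in action on any Linux system with cgroups enabled; every process already belongs to one or more (a quick illustration, independent of Docker, and output will vary by distribution):

$ cat /proc/self/cgroup    # lists the control groups the current shell belongs to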

On Linux, a container is a combination of a filesystem; CPU, memory, and other system limits; and a control group. Users can actually create them using a variety of system tools and libraries, but the process is arcane and requires a lot of expertise.

Docker exists to simplify container creation and management. With Docker, you specify a service you wish to create, expose, or consume, via a Dockerfile, build the image, and run it. Each directive in the Dockerfile creates a new "layer", which the build process can cache, providing optimizations during both build and run stages when the layers have already been built and cached on the system.

An example Dockerfile:


FROM php:7.4-apache

RUN set -e; \
  apt-get update; \
  apt-get install -y \
    git \
    libonig-dev \
    libzip-dev \
    zlib1g-dev \
    zip unzip \
    build-essential \
    locales \
    curl; \
  docker-php-ext-install pdo_mysql mbstring zip pcntl

RUN set -e; \
  a2enmod rewrite; \
  sed -i 's!/var/www/html!/var/www/public!g' /etc/apache2/sites-available/000-default.conf; \
  curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer; \
  mkdir -p /var/log/php; \
  chmod a+rwX /var/log/php

COPY . /var/www
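
To turn a Dockerfile like this into an image and start a container from it, the commands look something like the following (the my-php-app image name is just a placeholder):

$ docker build -t my-php-app .           # builds the image, layer by layer, caching each layer
$ docker run -d -p 8080:80 my-php-app    # starts Apache, mapping host port 8080 to container port 80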

An image is the final state of overlaying all layers on one another, and represents a service that can be run. The image is idempotent: running it with the same arguments will always result in the same state. This makes images predictable, which is a huge boon to DevOps, as the state is known at launch.

Docker experts recommend that any given image expose exactly one service: a web server, a language runtime, a database server, etc. In the above example, a web server is exposed (Apache, with mod_php). Another approach to having a web server with PHP would be to have two images: one running php-fpm (FastCGI Process Manager), and another running Apache or nginx and consuming the FastCGI service exposed by the first.

And with this, we arrive at the most basic example of Docker orchestration, and the need for it: a php-fpm service and a web server. Most PHP applications will require even more than this, as they will require a relational database, perhaps a caching service such as Redis, potentially a search/log service such as Elasticsearch, potentially a document database such as MongoDB, and much, much more.


Orchestration For Docker

Since each Docker container represents a single service, we need to orchestrate many containers. This will involve spinning up each of them, but also such things as networking them, providing DNS for each service in the application, persisting and/or sharing filesystem volumes, and more.

Sound difficult?

It can be. But there are a number of different systems you can use, with varying amounts of complexity:

  • Compose, which runs on a single physical machine.
  • Stack/Swarm, which can orchestrate across multiple machines.
  • Kubernetes (k8s), which can describe entire systems in granular detail, and includes features such as monitoring, logging, and autoscaling.

Let's take a brief tour through each.

Compose

Docker Compose allows you to describe a group of services in YAML, using a file named docker-compose.yml. Each service specifies either an image, or information on how to build an image, and then optionally provides additional context such as:

  • Runtime arguments
  • Environment variables
  • Ports to expose
  • Volumes to map and/or use
  • Healthcheck configuration

In addition to services, you can describe networks and volumes. Network configuration in Compose is rare, and generally unnecessary. Volume configuration is useful for providing persistence between invocations, or for mapping local directories and files into the image.

YAML, while sometimes a tricky and difficult-to-debug format, provides several huge benefits:

  • It can describe trees and hierarchical data.
  • It can be easily versioned in version control systems.

This latter point is key for orchestration: if something breaks, roll back to a previous version and re-deploy. Because descriptions are idempotent, going back to a known-good configuration means going back to a known good application state.
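
For example, assuming the Compose file is tracked in git, a rollback can be as simple as restoring the last known-good revision and re-deploying (the tag name here is a placeholder):

$ git checkout v1.4.2 -- docker-compose.yml    # restore the known-good definition
$ docker-compose up -d                         # re-deploy it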

Let's take a look at a Compose file. The following describes an nginx web server communicating with a php-fpm process that in turn uses Redis and MariaDB.

version: '3.7'

services:

  web-server:
    container_name: ws
    depends_on:
      - php-fpm
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx/:/etc/nginx/conf.d/

  php-fpm:
    build:
      context: .
      dockerfile: Dockerfile.fpm
      args:
        TIMEZONE: 'Europe/London'
        ZENDPHP_VERSION: 8.0
        ZEND_EXTENSIONS_LIST: 'mysql redis mbstring'
    depends_on:
      - db
      - redis
    expose:
      - 9000
    volumes:
      - .:/var/www/html/
    healthcheck:
      test: ["CMD-SHELL", "/usr/local/bin/fpm-healthcheck.sh"]
      interval: 10s
      timeout: 2s
      retries: 2

  db:
    image: mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: rootpw
    volumes:
      - db-data:/var/lib/mysql

  redis:
    image: redis:alpine
    volumes:
      - redis-data:/data

volumes:
  db-data:
  redis-data:

The above definition will spin up four containers, one each for the web server, php-fpm pool, relational database, and Redis. Three of the services use established images, while one builds from a provided Dockerfile. The web server and php-fpm services map in local filesystem entries to directories in the containers, while the database and redis services map virtual volumes that persist data for us.

Internally, each of the services can reach the others using the service name as the host name. The ports each image exposes are available within the private network. The web-server service goes further, and maps its port 80 to the host machine's port 80, effectively exposing it to the outside world.
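
Once the application is running (we'll start it next), you can confirm this name-based discovery from inside the php-fpm container, assuming its image includes the getent utility:

$ docker-compose exec php-fpm getent hosts db    # resolves the db service's address on the private network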

To run a Compose application:

$ docker-compose up -d

This starts the application; the -d switch starts it as a daemonized, or detached, process, meaning it will run in the background. You can stop the services with:

$ docker-compose down

Or restart it with:

$ docker-compose restart

Each command also allows you to manipulate the status of any single service in the application as well.
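
For example, to restart just the php-fpm service, or to follow only its logs, you name the service explicitly:

$ docker-compose restart php-fpm
$ docker-compose logs -f php-fpm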

The benefit of such orchestration is that:

  • Details such as virtual private networks on which the services can communicate are implicit, and establish a de facto demilitarized zone for the application.
  • We can make changes on the local host filesystem, and see them affect the running PHP application, due to the fact that the local filesystem is mapped into the container.
  • We can ensure that services required for another service to run are fired up first (see the depends_on configuration above).

Compose is an excellent choice for developers. Containers are generally very fast to launch (milliseconds!), the specification is easily versioned in a version control system, and it helps ensure that what is run in production is the same as in development.

The drawback to Compose is that it works only on the host system, and cannot be deployed across multiple machines to provide redundancy.

Swarm and Stack

To expand on what Compose already provides, Docker added two other sets of services: Swarm and Stack. Swarm, or swarm mode, allows deploying and managing applications across multiple Docker Engines; in other words, multiple machines. A swarm is decentralized and consists of one or more nodes, each running as a manager, a worker, or both. Applications run in a swarm can scale up or down, and can recover from scenarios where one or more workers or managers go down. The swarm will handle service discovery, load balancing, and more.

Generally speaking, you will create a manager node for a swarm using the following:

$ docker swarm init --advertise-addr {manager IP addr}

This command will provide a token you can use to allow worker nodes to join the swarm (you can run docker swarm join-token worker to retrieve the token later if you do not write it down).

From there, you will add other machines to the swarm:

$ docker swarm join --token {token} {manager IP addr}:2377

(The above command is provided to you when you create the manager node.)

If you want to add more managers, you can use docker swarm join-token manager to get instructions and manager-specific tokens; the new managers use the same join syntax as workers, however.
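
Once nodes have joined, you can verify swarm membership, and each node's role and availability, from any manager node:

$ docker node ls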

A stack is an application you deploy to a swarm. It is roughly equivalent to what we saw in the previous section for Compose, and, in fact, uses basically identical syntax; the main difference is that the build option of a Compose definition is ignored by Stack. When creating a Stack definition, you will thus also specify the image, which is the name of the built image, including where it will live:


  php-fpm:
    image: cr.example.com/webapplication/fpm
    build:
      context: .
      dockerfile: Dockerfile.fpm
      args:
        TIMEZONE: 'Europe/London'
        ZENDPHP_VERSION: 8.0
        ZEND_EXTENSIONS_LIST: 'mysql redis mbstring'

The main differentiator of Stack is that you deploy a stack to a swarm.

The workflow for Swarm and Stack will look something like this:

  • Build a Compose file and test it locally. This will generally involve one or more docker-compose build and/or docker-compose up operations.
  • Push your generated images, using docker-compose push.
  • Deploy the stack to the swarm, using docker stack deploy --compose-file docker-compose.yml {stack name}. This step must be done from a manager node.
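
Once deployed, you can inspect the stack from a manager node (the stack name is whatever you passed to docker stack deploy):

$ docker stack services {stack name}    # one line per service, with replica counts
$ docker stack ps {stack name}          # individual tasks and the nodes running them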

Swarm and Stack are elegant ways to manage the problem of "What happens if my machine goes down?" By having multiple nodes in play, you can manage redundancy, ensuring your application stays up.
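
Because services deployed as part of a stack are named {stack name}_{service name}, you can also scale any of them up or down from a manager node at any time; for instance, to run three php-fpm replicas:

$ docker service scale {stack name}_php-fpm=3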

Kubernetes

What if your application needs to scale based on amount of traffic? Or you need multiple segregated networks? Or need to orchestrate things like shared storage volumes in a cloud environment?

Kubernetes (k8s) exists to answer these questions.

I won't mince words here: Kubernetes is difficult. Because it can configure things at a granular level, including low-level details of networking and volume sharing, it takes a lot of time and expertise to understand it and be proficient at managing it. However, once you have something working, it can manage a ton of details for you that Compose and Swarm/Stack simply do not expose.

Compose and Stack/Swarm define applications which consist of one or more services (containers), as well as shared storage (volumes) and networking. The k8s equivalent is a "pod". Each pod runs on a node, or worker machine, which may be either a physical or virtual machine. Nodes can handle multiple pods, and k8s will handle distribution of pods across nodes.
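
Once an application is deployed (we'll get there shortly), you can see this distribution directly; the -o wide flag adds a NODE column to the pod listing:

$ kubectl get pods -o wide    # the NODE column shows which node each pod is running on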

Like Compose, k8s uses YAML for configuration. Unlike Compose, k8s uses separate files to describe each service and how to deploy each service. Additionally, k8s exposes a ton more features you can use around deployment than Compose or Stack; you can indicate how many instances to bring up, how to scale those instances, and more.

A service describes network behavior: generally, how ports are exposed, and what name or names other containers can use to reach the container.

apiVersion: v1
kind: Service
metadata:
  namespace: esc
  name: php-fpm
  labels:
    io.esc.service: php-fpm
spec:
  type: NodePort
  ports:
    - name: "9000"
      port: 9000
      targetPort: 9000
  selector:
    io.esc.service: php-fpm

A deployment describes how to orchestrate service instances: how many replicas to deploy, how to balance between them, how to check their health, and more.

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: esc
  annotations:
    phpversion: "8.0" # annotation values must be strings
  labels:
    io.esc.service: php-fpm
  name: php-fpm
spec:
  replicas: 3
  selector:
    matchLabels:
      io.esc.service: php-fpm
  strategy: {}
  template:
    metadata:
      labels:
        io.esc.service: php-fpm
    spec:
      containers:
        - image: eu.gcr.io/zendphp-313619/esc-fpm
          livenessProbe:
            exec:
              command:
                - /usr/local/bin/fpm-healthcheck.sh
            failureThreshold: 2
            periodSeconds: 10
            timeoutSeconds: 2
          name: php-fpm
          resources: {}
      restartPolicy: Always
status: {}

When you run k8s, you pass it a directory with configuration files, and it then parses them to create a graph detailing the entire application. From there, it determines the order in which services should be deployed, and provides all network wiring and volume mapping per the configuration.
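
In practice this is typically done with kubectl; a minimal sketch, assuming the manifests above live in a k8s/ directory:

$ kubectl apply -f k8s/                                   # create or update everything described there
$ kubectl -n esc get pods                                 # watch the php-fpm replicas come up
$ kubectl -n esc scale deployment php-fpm --replicas=5    # adjust the replica count on the fly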

The beauty of a system like this is that it can be deployed on a single node or multiple nodes, and k8s manages all the networking and discovery details for you. It can be deployed to a single bare-metal machine, or a cluster of machines in a public cloud, and managed in exactly the same way in either case.


ZendPHP and Docker

So, what does this have to do with Zend?

We recently launched a number of tools around Docker orchestration.

First, we launched a ZendPHP Docker Container Registry. It provides ready-to-use Docker images for both community-supported PHP versions (at the time of writing, PHP 7.3, 7.4, and 8.0), as well as LTS versions for customers with ZendPHP or Zend Server licenses. Our documentation also details a customizable Dockerfile that allows you to easily specify additional system packages, PHP extensions, and installation/preparation scripts for your images in a way that will work across the variety of operating systems our images support. Learn more in our blog PHP Docker Images Tips and Tricks.

Second, we now provide sample Compose, Swarm/Stack, and Kubernetes templates that demonstrate setting up Redis-backed session clusters using ZendPHP containers.

These templates help you get up and running with container-based orchestration in minutes, and can be further customized for your application-specific needs.

If you have need for LTS versions of PHP or would like support or services around orchestration, we invite you to contact us via the following links to start a discussion around how we can help your business. You can also start a free trial of ZendPHP. 

Try ZendPHP Free

 

 
