Magento 2 on Kubernetes – how do I do that?

In this article, we’ll take a closer look at what it takes to run Magento 2 on Kubernetes. Let’s dive in!

Prerequisites

This article assumes fundamental knowledge of operating Magento 2, working with containers (Docker), and basic Kubernetes concepts.

You’ll need a running Kubernetes cluster with at least 4 CPUs and 8 GB of RAM available. For trying things out locally, Minikube is a great option.

We’ll also need an ingress controller (NGINX) so that we’re able to access Magento once installed – see the documentation for deployment instructions.

If you decide to install Minikube, you can start a cluster with the required capabilities by running make minikube in the companion repository.
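
If you’d rather start the cluster by hand, a rough equivalent looks like this (a sketch – flags and defaults vary between Minikube versions):

minikube start --cpus=4 --memory=8192
minikube addons enable ingress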

Additionally, we’ll be using the following tools:

  • kubectl with the correct context configured
  • A standalone version of kustomize
  • (optional but recommended) make

Once the cluster and the tools are in place, we can start deploying Magento. You'll find all necessary files in our Magento 2 on Kubernetes GitHub repository.

Let’s go through the deployment process step by step.

Step 1: Create a minimal Magento 2 deployment

Magento

Naturally, we’ll need a container running Magento itself, so we might just as well start with it.

But first, let’s go through some of the aspects to consider when running any PHP web application on Kubernetes, to shed light on the architectural choices made in this article.

PHP web application pod patterns

There are different patterns for deploying a PHP web application on Kubernetes – from a single-process, single-container setup, through multi-process containers, to multiple single-process ones.

All-in-one
Apache with mod_php in a single container

The most straightforward arrangement is to have a single container running Apache 2 with mod_php in a single process – an arrangement quite commonly used in tutorials. While an all-in-one container is the easiest to configure and manage, you might still want to consider NGINX for serving static content – either in a dedicated pod or as a caching reverse proxy.

NGINX + PHP-FPM in a single container
NGINX and PHP-FPM in a single container

If you decide to run NGINX, you’ll need PHP-FPM alongside it. You’ll also need either a custom script or a process manager (e.g., supervisord) to run them both in a single container. While that’s fine in some cases, as a rule of thumb, having more than one process per container should be avoided. On the other hand, this arrangement offers better performance and keeps all of the code in a single container, so it might be worth considering.

As the Docker documentation puts it: it’s OK to have multiple processes, but to get the most benefit out of Docker, avoid having one container responsible for multiple aspects of your overall application – you can connect multiple containers using user-defined networks and shared volumes.

Single Pod running two containers
NGINX and PHP-FPM in separate containers, but in a single pod

In this configuration, NGINX and PHP-FPM run in separate containers, communicating over the network instead of a socket. This way, we don’t need supervisord anymore and can assign dedicated readiness and liveness probes to each container, as well as gain more control over resource allocation.

There is one caveat, though: we need to make sure NGINX can access static and media files. It can be achieved in two ways: by either creating a custom NGINX image with project files inside or by sharing project files between NGINX and PHP containers via a volume. The latter requires creating a volume shared between containers in the Pod (emptyDir would be just fine here) and copying the files from the PHP container to the volume upon pod initialization (i.e., in an init container).

In this article, we’ll use the second method, since it avoids having to version two images and ensures the deployed versions are always in sync. Implementation details are described in one of the sections below.
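
As a preview, here’s a minimal sketch of the emptyDir variant (the container names and the copy command are illustrative):

spec:
  volumes:
  - name: static
    emptyDir: {}
  initContainers:
  - name: unpack-static
    image: kiweeteam/magento2:vanilla-2.3.2-php7.2-fpm
    # copy the assets baked into the image onto the shared volume
    command: ["sh", "-c", "cp -a /var/www/html/pub/static/. /shared/static/"]
    volumeMounts:
    - name: static
      mountPath: /shared/static
  containers:
  - name: web
    image: nginx:mainline
    volumeMounts:
    - name: static
      mountPath: /var/www/html/pub/static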

Web server and PHP in separate Pods
NGINX and PHP-FPM in separate pods

This pattern is quite similar to the previous one, except it allows us to scale the PHP and NGINX Pods independently. We cannot use an emptyDir volume to share files between separate pods, though, so we need to configure proper persistent volumes for static assets and media files.

Which is best?

On the one hand, single-process Apache+PHP containers are easier to manage; on the other, NGINX has a reputation for superior performance when serving static content, and putting it and PHP-FPM in separate pods lets you scale them independently. This scalability comes at the cost of higher complexity, though, so it’s always best to run benchmarks yourself, taking factors such as expected traffic patterns, CDN use, and caching into consideration.

Unfortunately, there’s no silver bullet, and the decision on which pattern to use on a specific project is up to you.

Magento Docker image

As discussed above, we’ll use a Docker image based on php:7.2-fpm together with a plain nginx:mainline image.

Configuration from environment

When deploying Kubernetes applications, it’s usually best to configure each program by setting environment variables on its container.

While it’s possible to mount ConfigMaps to containers as regular configuration files, this isn’t ideal. Different applications use different configuration formats, and frequently the values must match across multiple applications. Managing configuration this way quickly becomes unnecessarily complicated.

Conversely, with environment variables, you only need to define everything once, as key-value pairs. You can then pass those values around by referring to their keys (variable names). This way, you have a single source of truth for each configuration value.
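
On Kubernetes, this typically means defining the values once in a ConfigMap and injecting them into every container that needs them via envFrom – the pattern used throughout this article. A minimal sketch (the values shown are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  DB_NAME: magento
  DB_USER: magento

# any container spec that needs the values then references the ConfigMap:
#
#   envFrom:
#   - configMapRef:
#       name: config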

Environment variables in Magento 2

One of the features of Magento 2 is the ability to pick up settings from the environment – setting an environment variable like CONFIG__&lt;SCOPE&gt;__&lt;SYSTEM__VARIABLE__NAME&gt; has the same effect as writing the setting in app/etc/config.php.

For example, if one wants to configure Elasticsearch as the search engine, setting an environment variable CONFIG__DEFAULT__CATALOG__SEARCH__ENGINE=elasticsearch6 instructs Magento to set the catalog search engine option to “Elasticsearch 6” for the Default Scope. It also locks this setting in the admin panel to prevent accidental changes.

Unfortunately, this feature cannot be used to control environment-specific settings like database credentials. There are a few ways to work around it, though:

  • Mount app/etc/env.php from a ConfigMap – not an ideal solution, since Magento checks for write access to the file at various points. It could be copied from a ConfigMap during Pod initialization to allow write access. (https://github.com/magento/magento2/issues/4955)
  • Run bin/magento config:set during Pod initialization, feeding it values picked up from environment variables. It’s essentially the same as configuring a Magento instance via the CLI, but automated. Saving the configuration takes quite some time, though, which considerably prolongs the startup of each Magento Pod.
  • Modify app/etc/env.php and include it in the container image. Since env.php is a regular PHP file that must return an array with the configuration, PHP’s built-in getenv() function is perfect for reading values from the environment at runtime, e.g., 'dbname' => getenv('DB_NAME'). This method keeps env.php writable for Magento while adding no extra time to Pod initialization – see the sketch below.
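
To illustrate the third approach, here’s a minimal env.php fragment (DB_HOST is illustrative; DB_NAME, DB_USER, and DB_PASS match the ConfigMap keys used later in this article):

<?php
// app/etc/env.php (fragment) – values are resolved at runtime,
// so the same image works unchanged in any environment
return [
    'db' => [
        'connection' => [
            'default' => [
                'host' => getenv('DB_HOST'),
                'dbname' => getenv('DB_NAME'),
                'username' => getenv('DB_USER'),
                'password' => getenv('DB_PASS'),
            ],
        ],
    ],
    // ... the rest of the configuration
];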

Managing logs

One more thing to consider when deploying Magento 2 on Kubernetes is making sure all relevant logs survive container restarts and remain easily accessible.

The simplest solution would be to use a PersistentVolume for var/log and var/reports directories. A volume solves the issue of log persistence but may cause performance issues with many Magento instances writing to the same files. Moreover, the logs themselves quickly become too long to navigate efficiently.

To satisfy both requirements, we’ll use the sidecar pattern – a separate container responsible for reading log files as they grow and writing their contents to stdout for a separate tool (e.g., the Elastic Stack or Fluentd) to store and process.

Tip: In this example, we’ll use one container per log file, running tail -f to stream the logs to stdout. While this works quite well for a vanilla Magento deployment, it doesn’t scale very well with more files to process.

A better solution would be to leverage Magento’s PSR-3 compatibility and configure all relevant log handlers to log directly to stdout/stderr. Doing so would satisfy Factor XI of the Twelve-factor App methodology and make sidecar containers obsolete.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: magento-php
  labels:
    app: magento-php
    k8s-app: magento
spec:
  selector:
    matchLabels:
      app: magento-php
  template:
    metadata:
      labels:
        app: magento-php
        k8s-app: magento
    spec:
      containers:
      - image: kiweeteam/magento2:vanilla-2.3.2-php7.2-fpm
        name: magento-php
        volumeMounts:
        - name: logs
          mountPath: /var/www/html/var/log
      - image: busybox
        name: system-log
        command: ["/bin/sh"]
        args:
        - -c
        - |
          touch /var/www/html/var/log/system.log
          chown 33:33 /var/www/html/var/log/system.log
          tail -n+1 -f /var/www/html/var/log/system.log
        resources:
          limits:
            cpu: 5m
            memory: 64Mi
          requests:
            cpu: 5m
            memory: 64Mi
        volumeMounts:
        - name: logs
          mountPath: /var/www/html/var/log
      volumes:
      - name: logs
        emptyDir: {}
Part of the Magento Deployment manifest where sidecars are defined

Cron

Magento relies on cron jobs for several of its features.

In a typical deployment scenario, you’d assign one of the hosts to run cron jobs and configure them directly via crontab. However, such a setup would not work on Kubernetes, since there is no single Magento instance (container) guaranteed to always be running. That’s why we’ll use a Kubernetes CronJob running bin/magento cron:run every minute.

This way, we delegate the responsibility of running cron to Kubernetes, which starts a new temporary container and runs the given command to completion – no need to worry about keeping one of the Magento instances running at all times.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: magento-cron
  namespace: default
spec:
  schedule: '* * * * *'
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          labels:
            app: magento-cron
            k8s-app: magento
        spec:
          containers:
          - name: magento-cron
            image: kiweeteam/magento2:vanilla-2.3.2-php7.2-fpm
            command: ["/bin/sh"]
            args:
            - -c
            - |
              php bin/magento cron:run
            envFrom:
            - configMapRef:
                name: config
            - configMapRef:
                name: aux
            resources:
              limits:
                cpu: 500m
                memory: 4Gi
              requests:
                cpu: 50m
                memory: 1Gi
          restartPolicy: Never
  concurrencyPolicy: Forbid
  startingDeadlineSeconds: 600
  failedJobsHistoryLimit: 20
  successfulJobsHistoryLimit: 5
CronJob manifest to run Magento 2 cron jobs on Kubernetes

Tip: When running Magento cron as Kubernetes CronJobs, make sure each of them is configured to run as a single process. This can easily be done by setting the following environment variables:

CONFIG__DEFAULT__SYSTEM__CRON__INDEX__USE_SEPARATE_PROCESS=0
CONFIG__DEFAULT__SYSTEM__CRON__DEFAULT__USE_SEPARATE_PROCESS=0
CONFIG__DEFAULT__SYSTEM__CRON__CONSUMERS__USE_SEPARATE_PROCESS=0
CONFIG__DEFAULT__SYSTEM__CRON__DDG_AUTOMATION__USE_SEPARATE_PROCESS=0

Otherwise, the cron container may terminate before all scheduled jobs have finished running.

Additional Jobs

Another Kubernetes object type we’ll make use of in this project is the Job.

A Job runs one or more Pods responsible for completing a task. We’ll be using Jobs to run tasks that are required when deploying Magento, but wouldn’t be suitable to put in the init containers for Magento Pods themselves:

  • magento-unpack is responsible for unpacking all static assets (baked into the container image during build) into a volume shared between PHP and NGINX. Since the assets are precisely the same for each instance of a given version, this needs to happen only once per deployed version.
  • magento-install automates application installation: it installs the database schema, generates performance fixtures that we use as sample data for demonstration, and ensures all indexes are in the “Update by Schedule” mode. In a real-life scenario, you’d probably run bin/magento setup:upgrade here instead to update the schema on each new deployment.

apiVersion: batch/v1
kind: Job
metadata:
  name: magento-unpack
spec:
  template:
    metadata:
      name: unpack
      labels:
        app: magento-unpack
        k8s-app: magento
    spec:
      containers:
      - name: magento-unpack
        image: kiweeteam/magento2:vanilla-2.3.2-php7.2-fpm
        command: ["/bin/sh"]
        args:
        - -c
        - |
          /bin/bash <<'EOF'
          rsync -avc /var/www/html/pub/static/frontend/ /tmp/static/frontend/ --delete
          rsync -avc /var/www/html/pub/static/adminhtml/ /tmp/static/adminhtml/ --delete
          rsync -avc /var/www/html/pub/static/deployed_version.txt /tmp/static/deployed_version.txt --delete
          EOF
        volumeMounts:
        - name: static
          mountPath: /tmp/static
      restartPolicy: OnFailure
      volumes:
      - name: static
        persistentVolumeClaim:
          claimName: static
magento-unpack Job

apiVersion: batch/v1
kind: Job
metadata:
  name: magento-install
spec:
  template:
    metadata:
      name: install
      labels:
        app: magento-install
        k8s-app: magento
    spec:
      containers:
      - name: magento-setup
        image: kiweeteam/magento2:vanilla-2.3.2-php7.2-fpm
        command: ["/bin/sh"]
        args:
        - -c
        - |
          /bin/bash <<'EOF'
          bin/install.sh
          php bin/magento setup:perf:generate-fixtures setup/performance-toolkit/profiles/ce/small.xml
          magerun index:list | awk '{print $2}' | tail -n+4 | xargs -I{} magerun index:set-mode schedule {}
          magerun cache:flush
          EOF
        envFrom:
        - configMapRef:
            name: config
        volumeMounts:
        - mountPath: /var/www/html/pub/media
          name: media
      volumes:
      - name: media
        persistentVolumeClaim:
          claimName: media
      restartPolicy: OnFailure
magento-install Job

Tip: Bear in mind that since a Job’s pod template field is immutable, it’s impossible to update the Jobs with each new release. Instead, make sure to delete the old ones and create new ones for each revision deployed – as sketched below.
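
A minimal sketch of what that could look like when rolling out a new revision (the manifest path is illustrative):

# remove the previous revision's Jobs, then apply the new manifests
kubectl delete job magento-unpack magento-install --ignore-not-found
kubectl apply -f jobs/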

Database

For the database, we’ll simply use a StatefulSet running Percona 5.7 with the data stored in a PersistentVolume.

A plain StatefulSet works well for a small local/development cluster, but you might consider setting up an XtraDB cluster (e.g., using the Percona Kubernetes Operator) for larger deployments. Such a solution requires more resources and adds complexity, so make sure to run appropriate benchmarks to ensure the benefits are worth the investment.

apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    app: db
    k8s-app: magento
spec:
  selector:
    app: db
  ports:
  - name: db
    port: 3306

---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  selector:
    matchLabels:
      app: db
  serviceName: db
  template:
    metadata:
      labels:
        app: db
        k8s-app: magento
    spec:
      containers:
      - args:
        - --max_allowed_packet=134217728
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: data
        env:
        - name: MYSQL_DATABASE
          valueFrom:
            configMapKeyRef:
              name: config
              key: DB_NAME
        - name: MYSQL_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: config
              key: DB_PASS
        - name: MYSQL_USER
          valueFrom:
            configMapKeyRef:
              name: config
              key: DB_USER
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: config
              key: DB_ROOT_PASS
        image: percona:5.7
        name: db
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
      restartPolicy: Always
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
Database StatefulSet and Service

Ingress

At this point, all that we’re missing to have a working Magento 2 instance on Kubernetes is a way to access the frontend.

We could simply expose the magento-web Service by setting its type to either NodePort (exposing it on a specific port of every node) or LoadBalancer (exposing it via an external load balancer). In this case, however, we’ll use an Ingress Controller – this way, we get TLS termination out-of-the-box, along with the possibility to manage TLS certificates in a declarative manner (e.g., using cert-manager). We could even expose additional services with routing based on paths or (sub-)domains, should we decide to do so.
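
For comparison, exposing the Service directly would be a one-liner – shown here only for illustration, as we’ll stick with the Ingress:

kubectl patch service magento-web -p '{"spec": {"type": "NodePort"}}'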

Assuming the NGINX Ingress Controller is already installed, all we need to do here is create an Ingress definition that proxies all traffic to the magento-web Service on its HTTP port.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "false"
  name: main
spec:
  backend:
    serviceName: magento-web
    servicePort: http
Ingress manifest

Putting it all together

To deploy the stack discussed so far, run make step-1 in the companion repository. Make will automatically download any dependencies needed to run this step and deploy everything to your Kubernetes cluster.
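
If you prefer not to use make, the step boils down to building the manifests with kustomize and applying them – roughly like this (the exact directory layout is defined in the repository):

kustomize build step-1 | kubectl apply -f -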

apiVersion: v1
kind: Service
metadata:
  name: magento-web
  labels:
    app: magento-web
    k8s-app: magento
spec:
  ports:
  - name: "http"
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: magento-web

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: magento-web
  labels:
    app: magento-web
    k8s-app: magento
spec:
  selector:
    matchLabels:
      app: magento-web
  strategy:
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 30%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: magento-web
        k8s-app: magento
    spec:
      containers:
      - image: nginx:mainline
        imagePullPolicy: Always
        name: magento-web
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          limits:
            cpu: 10m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 128Mi
        volumeMounts:
        - mountPath: /etc/nginx/conf.d/default.conf
          name: nginx-config
          subPath: default.conf
        - mountPath: /var/www/html/magento2.conf
          name: nginx-config
          subPath: magento2.conf
        - name: media
          mountPath: /var/www/html/pub/media
        - mountPath: /var/www/html/pub/static
          name: static
      volumes:
      - configMap:
          defaultMode: 420
          name: nginx
        name: nginx-config
      - name: media
        persistentVolumeClaim:
          claimName: media
      - name: static
        persistentVolumeClaim:
          claimName: static
magento-web Deployment and Service manifests

apiVersion: v1
kind: Service
metadata:
  name: magento-php
  labels:
    app: magento-php
    k8s-app: magento
spec:
  ports:
  - name: "fpm"
    port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: magento-php

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: magento-php
  labels:
    app: magento-php
    k8s-app: magento
spec:
  replicas: 1
  selector:
    matchLabels:
      app: magento-php
  strategy:
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 30%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: magento-php
        k8s-app: magento
    spec:
      containers:
      - image: kiweeteam/magento2:vanilla-2.3.2-php7.2-fpm
        imagePullPolicy: IfNotPresent
        name: magento-php
        ports:
        - containerPort: 9000
          protocol: TCP
        readinessProbe:
          failureThreshold: 5
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 9000
          timeoutSeconds: 1
        livenessProbe:
          failureThreshold: 5
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 9000
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 250m
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 1Gi
        envFrom:
        - configMapRef:
            name: config
        - configMapRef:
            name: aux
        volumeMounts:
        - name: logs
          mountPath: /var/www/html/var/log
        - name: media
          mountPath: /var/www/html/pub/media
        - name: static
          mountPath: /var/www/html/pub/static
      - image: busybox
        imagePullPolicy: IfNotPresent
        name: system-log
        command: ["/bin/sh"]
        args:
        - -c
        - |
          touch /var/www/html/var/log/system.log
          chown 33:33 /var/www/html/var/log/system.log
          tail -n+1 -f /var/www/html/var/log/system.log
        resources:
          limits:
            cpu: 5m
            memory: 64Mi
          requests:
            cpu: 5m
            memory: 64Mi
        volumeMounts:
        - name: logs
          mountPath: /var/www/html/var/log
      - image: busybox
        imagePullPolicy: IfNotPresent
        name: exception-log
        command: ["/bin/sh"]
        args:
        - -c
        - |
          touch /var/www/html/var/log/exception.log
          chown 33:33 /var/www/html/var/log/exception.log
          tail -n+1 -f /var/www/html/var/log/exception.log
        resources:
          limits:
            cpu: 5m
            memory: 64Mi
          requests:
            cpu: 5m
            memory: 64Mi
        volumeMounts:
        - name: logs
          mountPath: /var/www/html/var/log
      - image: busybox
        imagePullPolicy: IfNotPresent
        name: debug-log
        command: ["/bin/sh"]
        args:
        - -c
        - |
          touch /var/www/html/var/log/debug.log
          chown 33:33 /var/www/html/var/log/debug.log
          tail -n+1 -f /var/www/html/var/log/debug.log
        resources:
          limits:
            cpu: 5m
            memory: 64Mi
          requests:
            cpu: 5m
            memory: 64Mi
        volumeMounts:
        - name: logs
          mountPath: /var/www/html/var/log
      volumes:
      - name: logs
        emptyDir: {}
      - name: media
        persistentVolumeClaim:
          claimName: media
      - name: static
        persistentVolumeClaim:
          claimName: static
magento-php Deployment and Service manifests

By now, we should have a working, although bare-bones, Magento deployment on Kubernetes. We’re still missing some essential parts, namely a decent search engine and cache. Let’s carry on and add them!

Step 2: Elasticsearch

What is an online store without a way to search it? With Magento 2 supporting Elasticsearch out-of-the-box, all we need is to deploy Elasticsearch itself to ensure the customers can easily find what they’re looking for. Let’s get to it.

While it would be perfectly fine to create a custom Deployment or StatefulSet, we’ll leverage the Elastic Cloud on Kubernetes (ECK) operator to simplify the process.

Elasticsearch Cluster

After installing the Operator, we need to tell it about the desired Elasticsearch cluster architecture and provide Magento with the information on how to connect to that cluster.

Due to constrained resources (Minikube on a laptop), we’ll go for a simple, single-node Elasticsearch 6.x cluster. Since it is not exposed to any public networks, we can disable authentication and TLS for simplicity.

This setup is enough for our purposes, but for a larger project with higher traffic, you might consider setting up an Elasticsearch cluster with more nodes and more resources per node. And again, it’s always a good idea to run benchmarks specific to your project to make sure you have a configuration that works for you.

apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: default
spec:
  version: 6.8.5
  nodeSets:
  - name: elasticsearch
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
      xpack.security.authc:
        anonymous:
          username: anonymous
          roles: superuser
          authz_exception: false
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms512m -Xmx512m
          resources:
            requests:
              memory: 1Gi
              cpu: 0.5
            limits:
              memory: 1Gi
              cpu: 1
  http:
    tls:
      selfSignedCertificate:
        disabled: true
Elasticsearch manifest using Elastic Cloud on Kubernetes

To deploy and configure Elastic Stack, run make step-2.

Tip: Similarly, the Elastic Cloud on Kubernetes operator can be used to deploy another Elasticsearch cluster, along with Kibana, for managing logs.

Magento configuration

All that’s left is to point Magento to the newly created Elasticsearch instance. We can easily do this by extending the aux.env configuration with

CONFIG__DEFAULT__CATALOG__SEARCH__ELASTICSEARCH6_SERVER_HOSTNAME=elasticsearch-es-http
CONFIG__DEFAULT__CATALOG__SEARCH__ELASTICSEARCH6_SERVER_PORT=9200
CONFIG__DEFAULT__CATALOG__SEARCH__ENGINE=elasticsearch6

and let Kustomize handle merging the configuration files and passing the values to Magento as environment variables.
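
A sketch of how the generator might be declared in the step’s kustomization.yaml (the exact layout depends on the repository):

configMapGenerator:
- name: aux
  behavior: merge
  envs:
  - aux.env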

Step 3: Redis and auto-scaling

Having configured Elasticsearch in the previous section, we now have all the functionality-related pieces in place. You might have noticed, however, that Magento’s performance is less than stellar in this configuration. Worry not – we’ve barely enabled any caching yet!

In other words: we made it work, now let’s make it work fast with Redis and auto-scaling.

Redis

Redis plays two essential roles in any performant Magento 2 deployment:

  • Fast session storage to allow multiple application instances to keep track of session information between requests
  • Cache storage for internal Magento cache (e.g., configuration, layout, HTML fragments)

Here again, we’ll use a simple StatefulSet to run a single Redis instance with separate databases for sessions and cache. We don’t need to attach any PersistentVolumes, so we won’t.

As with Elasticsearch, the last thing we need to do is instruct Magento to use the newly deployed Redis instance. Just like before, we’ll add a few keys to aux.env and let Kustomize handle merging the pieces:

REDIS_CACHE_HOST=redis
REDIS_CACHE_PORT=6379
REDIS_CACHE_DB=0
REDIS_SESSION_HOST=redis
REDIS_SESSION_PORT=6379
REDIS_SESSION_DB=2
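
For reference, here’s how the image’s env.php might consume these values, following the getenv() approach from Step 1 (a sketch – the actual file in the image may differ):

// app/etc/env.php (fragment)
'session' => [
    'save' => 'redis',
    'redis' => [
        'host' => getenv('REDIS_SESSION_HOST'),
        'port' => getenv('REDIS_SESSION_PORT'),
        'database' => getenv('REDIS_SESSION_DB'),
    ],
],
'cache' => [
    'frontend' => [
        'default' => [
            'backend' => 'Cm_Cache_Backend_Redis',
            'backend_options' => [
                'server' => getenv('REDIS_CACHE_HOST'),
                'port' => getenv('REDIS_CACHE_PORT'),
                'database' => getenv('REDIS_CACHE_DB'),
            ],
        ],
    ],
],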

apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
    k8s-app: magento
spec:
  selector:
    app: redis
  ports:
  - name: redis
    port: 6379
    protocol: TCP
    targetPort: 6379

---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  serviceName: redis
  template:
    metadata:
      labels:
        app: redis
        k8s-app: magento
    spec:
      containers:
      - name: redis
        image: redis
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 500m
            memory: 4Gi
          requests:
            cpu: 50m
            memory: 1Gi
Redis StatefulSet and Service manifests

Horizontal Pod Autoscalers

With Redis in place, we can now run multiple instances of Magento sharing session information and cache. While we could simply increase the replica count in Magento’s Deployment manifest whenever necessary, why not make use of the full potential that running Magento 2 on Kubernetes gives us? Let’s create Horizontal Pod Autoscalers instead and let Kubernetes figure out the optimal number of replicas at any given time.

To do so, we’ll create a HorizontalPodAutoscaler for each Deployment. It monitors the resource usage of the Pods targeted by scaleTargetRef and starts new ones whenever average CPU utilization exceeds targetCPUUtilizationPercentage, up to maxReplicas (scaling back down to minReplicas when the load subsides).

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: magento-php
spec:
  maxReplicas: 5
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: magento-php
  targetCPUUtilizationPercentage: 75

---

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: magento-web
spec:
  maxReplicas: 3
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: magento-web
  targetCPUUtilizationPercentage: 75
HorizontalPodAutoscaler manifests

Tip: In this article, we’ve purposely assigned limited resources to Magento Pods to make it easier to show auto-scaling in action. When deploying Magento 2 on Kubernetes in a real-life scenario, you should make sure to tune both PHP settings and Pod resource constraints, as well as scaling rules in the HorizontalPodAutoscaler configuration.

Like before, to deploy Redis and auto-scalers, simply run make step-3.

Step 4: Varnish

The last piece of the puzzle is adding a caching reverse-proxy to take some of the load off Magento. Naturally, we’ll use Varnish, as it’s supported out-of-the-box.

As in the previous steps, we’ll start by creating a Varnish Deployment. Two things are notable here: we expose not one but two ports, and we run a custom command to start the container – first starting varnishd in daemon mode, then running varnishncsa in the foreground.

Exposing two ports allows us to configure simple access rules in Varnish VCL, letting Magento clear cache using one, while the other can be safely exposed to the outside world.

Next, we need to tell Magento how to connect to Varnish by extending the aux.env configuration file as before:

VARNISH_HOST=varnish
VARNISH_PORT=80

apiVersion: v1
kind: Service
metadata:
  name: varnish
  labels:
    app: varnish
    k8s-app: magento
spec:
  selector:
    app: varnish
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: proxy
    port: 6091
    protocol: TCP
    targetPort: 6091

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: varnish
spec:
  selector:
    matchLabels:
      app: varnish
  replicas: 1
  template:
    metadata:
      labels:
        app: varnish
        k8s-app: magento
    spec:
      containers:
      - image: varnish:6.2
        name: varnish
        command: ["/bin/sh"]
        args:
          - -c
          - |
            varnishd -a :80 -a :6091 -f /etc/varnish/default.vcl -s default,512M;
            varnishncsa -F '%h %l %u %t "%r" %s %b "%{Referer}i" "%{User-agent}i" %{Varnish:handling}x'
        ports:
        - containerPort: 80
        - containerPort: 6091
        resources:
          requests:
            cpu: 50m
            memory: 512Mi
        env:
        - name: VARNISH_BACKEND_HOST
          value: magento-web
        - name: VARNISH_BACKEND_PORT
          value: "80"
        volumeMounts:
        - name: config
          mountPath: /etc/varnish/default.vcl
          subPath: default.vcl
      restartPolicy: Always
      volumes:
      - name: config
        configMap:
          name: varnish
Varnish Deployment and Service manifests

Lastly, we want the Ingress to route all incoming requests to Varnish. Doing so requires changing the destination Service specified in the Ingress definition from before. A straightforward way to update the Ingress definition is to use Kustomize’s patchesJson6902.

- op: replace
  path: /spec/backend/serviceName
  value: varnish
- op: replace
  path: /spec/backend/servicePort
  value: 6091
step-4/patch/ingress.yaml
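
For reference, the patch could be wired up in the step’s kustomization.yaml along these lines (a sketch):

patchesJson6902:
- target:
    group: networking.k8s.io
    version: v1beta1
    kind: Ingress
    name: main
  path: patch/ingress.yaml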

Tip: While Varnish excels at taking load off the other components and improving the performance of rarely changing pages, it does not speed up interactive elements such as the shopping cart, checkout, or customer area.

To deploy and configure Varnish, run make step-4.

Summary

So there you have it: an overview of all the essentials needed to run Magento 2 on Kubernetes – a Magento deployment configured via environment variables, cron jobs, Elasticsearch, Redis, autoscaling, and Varnish.

All manifests and configuration files are managed by Kustomize so that they can be conveniently adjusted to the needs of any particular project.

While we wouldn’t recommend running it in production as-is, it should give you a good starting point for creating a production-ready configuration specific to your project.


Maciej Lewkowicz

Senior Full-stack Developer & DevOps Engineer

I believe there's love in new things and in old things there is wisdom. I love exploring all sorts of systems, especially putting enterprise-grade open source tools to use in small-scale projects, all the while being on a personal mission to make the Internet a little better for everyone.