
Shopware on Kubernetes: Build, Test and Debug
Tomasz Gajewski · Updated on 6 May 2025
Reading Time: 15 minutes
When running at scale, a Shopware infrastructure configuration becomes considerably more complex. It is therefore advantageous to launch and test the complete software stack (that is, Shopware with all backing services) locally as well as in the continuous integration pipeline. In particular, integration and end-to-end tests need a clean, isolated environment to be reliable and repeatable.
In this article I explain how to build and deploy a Shopware 6 platform to a local Kubernetes cluster, and how to debug PHP code directly in it.
All the code snippets and examples in this article belong to a proof-of-concept project named Shopware-Kube, whose main capabilities are:
- Building container images with Shopware application server.
- Using Minikube to create a local Kubernetes cluster.
- Installing all components Shopware requires to function in the cluster.
- Deploying a Shopware container to the cluster.
- Running integration tests and end-to-end tests.
I shortened most of the code snippets for clarity; however, I highly encourage you to analyze the full source code on GitHub yourself.
Prerequisites to start with local Shopware on Kubernetes
I assume that you have a basic understanding of Kubernetes, its components, and its architecture.
In order to get started with the project you need to have the following tools installed on your computer:
- Docker: Docker Engine on Linux with the BuildKit add-on, or Docker Desktop. Docker Desktop has BuildKit pre-installed and also supports other operating systems like macOS and Windows.
- Kubernetes distribution optimized for local development. My choice is Minikube. It’s lightweight and versatile enough for this project. It works on all major operating systems - Linux, macOS, and Windows - on both x86 and ARM CPU architectures (Windows ARM64 isn’t yet supported).
- Skaffold: to streamline the process of building, testing and deploying the Shopware application. Skaffold requires that kubectl, kustomize and helm are installed upfront.
- Ktunnel: to establish a reverse tunnel allowing Xdebug connections with your code editor.
Transform your Shopware implementation to cloud-native eCommerce
The term cloud-native refers to a set of practices in architecting, building, and deploying applications to cloud computing infrastructure. One of the key objectives of this project is to enable a Shopware application to scale horizontally, that is, to distribute it across multiple machines. Horizontal scaling is an important step towards high availability, which reduces the risk of downtime during deployments or application failures.
Here comes the Twelve-Factor App: the principles that can be used for designing cloud-native applications. To align with these, the application should be stateless (keeping temporary files only in tmpfs). Additionally, it must be containerized and optimized for quick startup and graceful shutdown.
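As an illustration of the tmpfs hint above, in Kubernetes the temporary files can live in a memory-backed emptyDir volume; a minimal sketch (volume name, mount path, and size limit are illustrative, not taken from the project):

```yaml
# Hypothetical fragment of a Deployment Pod spec: mount a tmpfs-backed
# volume for temporary files so the container itself stays stateless.
spec:
  containers:
    - name: shopware
      volumeMounts:
        - name: tmp
          mountPath: /tmp
  volumes:
    - name: tmp
      emptyDir:
        medium: Memory    # backed by tmpfs instead of node disk
        sizeLimit: 256Mi  # illustrative limit
```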
Prepare cloud-native Shopware configuration
To align with the 11th factor of The Twelve-factor App (logs as streams) and the 4th factor (attached backing services), and to ensure optimal functionality within a cluster environment, Shopware needs specific configuration of:
- Cache storage
- User session storage
- Media storage
- Logs
Optimal Shopware cache configuration
The default cache configuration writes all cached data to the local filesystem, specifically into the var/cache directory. This may be sufficient for simple, single-server setups. At scale, however, it would cause massive data redundancy and performance degradation, because each container would write a cache into its own local filesystem. The solution is a key-value database like Redis, shared by all Shopware containers, to store the application and HTTP caches. A typical configuration looks like the example below:
# config/packages/framework.yaml
framework:
  cache:
    default_redis_provider: "%env(string:REDIS_CACHE_OBJECT_URL)%"
    app: cache.adapter.redis_tag_aware
    system: cache.adapter.system
    pools:
      cache.tags:
        adapter: cache.app
      cache.object:
        default_lifetime: '3600'
        adapter: cache.app
        tags: cache.tags
      cache.http:
        default_lifetime: '7200'
        adapter: cache.adapter.redis_tag_aware
        provider: "%env(string:REDIS_CACHE_HTTP_URL)%"
        tags: cache.tags
Note that separate variables are used for different types of cache. This gives you the flexibility to use multiple Redis instances, or multiple databases within a single Redis instance.
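For illustration, both variables could point at the same Redis instance but different logical databases; a hypothetical ConfigMap fragment (the hostname and database numbers are made up for this example):

```yaml
# Hypothetical ConfigMap fragment: one Redis URL per cache type.
apiVersion: v1
kind: ConfigMap
metadata:
  name: shopware-app-config
data:
  REDIS_CACHE_OBJECT_URL: "redis://redis-cache:6379/0"  # application/object cache
  REDIS_CACHE_HTTP_URL: "redis://redis-cache:6379/1"    # HTTP cache
```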
Incorporating Redis as session storage
As with the cache, the default user session storage (filesystem) shouldn’t be used at scale. This is because when a user makes several HTTP requests, each request can be processed by a different Shopware container. The user would then end up starting a new session on almost every page request, and therefore would be unable to log in or add a product to the cart. To overcome this issue, you can use a central database such as Redis. As an in-memory database, Redis offers better performance than relational databases like MySQL or shared network filesystems like NFS or Ceph. To use Redis for sessions, include the following addition:
# config/packages/framework.yaml
framework:
  ...
  session:
    handler_id: "%env(string:REDIS_SESSION_URL)%"
  ...
Configure Shopware to use S3-compatible object storage for media
Object storage with an S3-compatible API has become the standard cloud-native method of storing media files. Fortunately, many service providers have included it in their offerings, and you're not limited to the major cloud providers only. There are also open-source alternatives for self-hosting, such as MinIO or Rook. For the Shopware-Kube project, I decided to adopt MinIO because it is more straightforward to configure in a single-node cluster.
Here is an example of S3 storage configuration for public and private files in Shopware.
# config/packages/shopware.yaml
shopware:
  ...
  filesystem:
    public: &public-filesystem
      type: "amazon-s3"
      url: "%env(string:BUCKET_URL_PUBLIC)%"
      config:
        endpoint: "%env(string:BUCKET_ENDPOINT)%"
        region: "%env(string:BUCKET_REGION_PUBLIC)%"
        bucket: "%env(string:BUCKET_NAME_PUBLIC)%"
        credentials:
          key: "%env(string:AWS_ACCESS_KEY_ID)%"
          secret: "%env(string:AWS_SECRET_ACCESS_KEY)%"
        use_path_style_endpoint: true
        options:
          visibility: "public"
    private: &private-filesystem
      type: "amazon-s3"
      config:
        endpoint: "%env(string:BUCKET_ENDPOINT)%"
        region: "%env(string:BUCKET_REGION_PRIVATE)%"
        bucket: "%env(string:BUCKET_NAME_PRIVATE)%"
        credentials:
          key: "%env(string:AWS_ACCESS_KEY_ID)%"
          secret: "%env(string:AWS_SECRET_ACCESS_KEY)%"
        use_path_style_endpoint: true
        options:
          visibility: "private"
    theme: *public-filesystem
    sitemap: *public-filesystem
    asset: *public-filesystem
  ...
The example above assumes theme and asset files are stored in the public bucket (Shopware's default). Therefore, you need to take care of compiling the storefront theme and copying new assets every time a new version is deployed to the cluster. This can be done with a Kubernetes Job resource, which is created and executed on every deployment and deleted afterwards (if it succeeded). The job has to run the Shopware install command, as in this example:
# deploy/bases/app/shopware-init.yaml
...
if bin/console system:is-installed; then
echo "Running Shopware updates."
bin/console system:install
else
echo "Running Shopware first time install."
bin/console system:install --basic-setup
fi
...
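As a sketch of how that script could be wrapped in a Job, here is a simplified, hypothetical manifest (the image name comes from the build configuration shown later; backoffLimit, ttlSecondsAfterFinished, and other details are illustrative, not the project's actual shopware-init.yaml):

```yaml
# Simplified, hypothetical sketch of the initialization Job; the real
# manifest in deploy/bases/app/shopware-init.yaml contains the full script.
apiVersion: batch/v1
kind: Job
metadata:
  name: shopware-init
spec:
  backoffLimit: 3
  ttlSecondsAfterFinished: 300  # clean the Job up after it succeeds
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: shopware-init
          image: kiweeteam/franken-shopware
          command: ["/bin/sh", "-c"]
          args:
            - |
              if bin/console system:is-installed; then
                bin/console system:install
              else
                bin/console system:install --basic-setup
              fi
```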
Configuring Shopware logs
Consider dozens or hundreds of containers running. Each one generates logs, and it becomes nearly impossible to search through them all the way you would with a single-server application. In theory, you could solve this with a shared volume for the logs. In practice, it would lead to performance issues due to network filesystem latency, and to file locking caused by multiple concurrent writes to a single file. The cloud-native way of storing logs is a central log index with a dashboard, from which you can find any log entry. Such a tool is particularly useful for aggregating logs, detecting anomalies, and sending out alerts.
In order to get the logs indexed, they have to be routed to the standard output and standard error streams (stdout and stderr) instead of files. This allows a log indexer to intercept them and satisfies the 11th factor of the Twelve-Factor App. Additionally, I recommend using the JSON formatter, which simplifies parsing the logs and analyzing them further. An example Monolog configuration for Shopware is as follows:
# config/packages/monolog.yaml
monolog:
  handlers:
    main:
      type: fingers_crossed
      excluded_http_codes: [404, 405]
      action_level: error
      handler: nested
    nested:
      type: stream
      path: "php://stderr"
      level: error
      formatter: 'monolog.formatter.json'
    console:
      type: console
      process_psr_3_messages: false
      channels: ["!event", "!doctrine"]
      formatter: 'monolog.formatter.json'
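With this configuration, each log entry arrives on stderr as a single parseable JSON line, roughly of this shape (the values are illustrative):

```json
{"message": "Uncaught PHP Exception ...", "context": {}, "level": 400, "level_name": "ERROR", "channel": "request", "datetime": "2025-05-06T10:15:42+00:00", "extra": {}}
```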
Create local Kubernetes cluster with Minikube
Before deploying Shopware, you need to enable a couple of Minikube addons:
- storage-provisioner: Enables automatic persistent volume provisioning.
- default-storageclass: Enables the default storage class (hostpath, i.e. the local filesystem).
- ingress: Minikube-optimized NGINX Ingress Controller.
- ingress-dns: A dedicated DNS server for ingresses.
Run the create_cluster.sh script to create a Minikube Kubernetes cluster with all the mentioned addons enabled.
./create_cluster.sh
Register local test domains
The Shopware installation needs the following local domains:
- shopware.test (for the storefront)
- media.test (for the public media)
The easiest way is to add them to your system hosts file. Depending on your host operating system:
- Linux (with Docker Engine):
  echo $(minikube ip)' media.test shopware.test' | sudo tee -a /etc/hosts
- macOS:
  echo '127.0.0.1 media.test shopware.test' | sudo tee -a /etc/hosts
- Windows (run cmd as an administrator):
  echo 127.0.0.1 media.test shopware.test >> C:\Windows\system32\drivers\etc\hosts
Make local test domains available inside cluster
This ensures that the media and storefront hosts (shopware.test and media.test) can be resolved from inside the cluster (the theme:compile command, for example, requires it).
Minikube's recommended way is to edit the CoreDNS configuration and forward the .test domains. First, run minikube ip to print the Minikube node IP address.
Next, open the CoreDNS ConfigMap for editing:
kubectl edit configmap coredns -n kube-system
Insert the DNS forwarding rule for .test domains.
test:53 {
    errors
    cache 30
    # replace the IP below with the output of `minikube ip`
    forward . 192.168.49.2
}
Where the forward IP address is the result of minikube ip.
So the configuration would look like this one:
apiVersion: v1
kind: ConfigMap
data:
  Corefile: |
    .:53 {
        log
        errors
        health {
            lameduck 5s
        }
        ready
        ...
    }
    test:53 {
        errors
        cache 30
        forward . 192.168.49.2
    }
...
Prepare Shopware Container (Docker) Image
When it comes to container size and the number of libraries, the slimmer the better. With just a few basic packages installed, a small container loads faster and is more secure. Following this principle, I decided to try out FrankenPHP as an application server for Shopware. It has the Caddy HTTP server bundled in.
The FrankenPHP builder can create a single executable binary file that contains your Shopware application, its dependencies, PHP, and the Caddy server. A single-binary approach results in a smaller and less complex application container compared to PHP-FPM with NGINX or Apache, which is particularly beneficial from a security standpoint. It is achieved by leveraging the static-php-cli project to build application executables. With the entire application in a single binary, it can be installed on top of a minimal container image like debian-slim, alpine, or wolfi-base. The FrankenPHP contributors recommend Debian, though, so I chose it.
Shopware Docker image build
The Dockerfile is broken down into the following stages:
- Prepare a Shopware builder.
- Build a production Shopware application by the application builder.
- Build a dev Shopware application by the application builder.
- Compile PHP and make the Shopware application binary executable.
- Make the target production image by copying the application binary onto it.
- Copy the dev Shopware application to the base development image.
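The stages above can be sketched as a multi-stage Dockerfile. The outline below is a simplified, hypothetical skeleton (the base images and stage internals are assumptions; only the app-prod and app-dev target names come from the build configuration):

```dockerfile
# Hypothetical outline of the multi-stage build; see the Dockerfile in the
# Shopware-Kube repository for the complete, working version.
FROM php:8.3-cli AS builder            # 1. Shopware builder (composer, extensions)
# ... install composer and the required PHP extensions ...

FROM builder AS build-prod             # 2. production application
# ... composer install --no-dev, build storefront and administration ...

FROM builder AS build-dev              # 3. dev application (dev dependencies, Xdebug)
# ... composer install with dev dependencies ...

FROM dunglas/frankenphp:static-builder AS compiler  # 4. compile the app binary
# ... embed the production application and produce the single binary ...

FROM debian:bookworm-slim AS app-prod  # 5. minimal production image
COPY --from=compiler /shopware-bin /shopware-bin
ENTRYPOINT ["/shopware-bin"]

FROM builder AS app-dev                # 6. development image with the dev app
# ... copy the dev application and enable Xdebug ...
```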
Shopware Docker container image build stages
Declare the build configuration in the skaffold.yaml file:
# ./skaffold.yaml
...
build:
  artifacts:
    - image: kiweeteam/franken-shopware
      docker:
        dockerfile: Dockerfile
        target: app-prod
        secrets:
          - id: composer_auth
            src: auth.json
    - image: kiweeteam/franken-shopware-dev
      docker:
        dockerfile: Dockerfile
        target: app-dev
        secrets:
          - id: composer_auth
            src: auth.json
...
The above configuration defines two images to build: production and dev. Skaffold is smart enough, however, to detect whether each of them is referenced in the Deployment objects being deployed. To control which variant gets built and deployed, Skaffold offers a great feature: profiles. Let's define two profiles, dev and production, with production as the default:
# default profile - the production
manifests:
  kustomize:
    paths:
      - deploy/overlays/production
# Profiles definition
profiles:
  - name: dev
    manifests:
      kustomize:
        paths:
          - deploy/overlays/dev
  - name: production
    manifests:
      kustomize:
        paths:
          - deploy/overlays/production
With the default profile active, Kustomize doesn’t load the dev Shopware manifest, so Skaffold won’t build the container image for it.
To build all the defined container images without deploying them to the cluster, run this command:
skaffold build --file-output=.build-artifacts.json
Skaffold will save references to the built images in .build-artifacts.json. This file will be needed by the deploy and verify commands.
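For reference, the artifacts file is a small JSON document mapping each image name to the exact tag that was built, roughly of this shape (the tags are illustrative):

```json
{
  "builds": [
    {"imageName": "kiweeteam/franken-shopware", "tag": "kiweeteam/franken-shopware:v1.0-abc1234"},
    {"imageName": "kiweeteam/franken-shopware-dev", "tag": "kiweeteam/franken-shopware-dev:v1.0-abc1234"}
  ]
}
```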
Deploy Shopware to your local Kubernetes Cluster
Now that you know how to build a Docker image with your Shopware application, let’s deploy it to the local cluster with all its dependencies:
- Operators and controllers that create and control workloads such as: HTTP reverse proxy (Ingress Controller), TLS certificates manager, random passwords generator and object storage manager (MinIO).
- Shopware backing services (MariaDB server, Redis, OpenSearch)
- Shopware application server
- Shopware message scheduler and consumer to execute back-end async tasks.
Shopware Kubernetes Cluster Overview
Shopware deployment project structure
The structure implements the bases and overlays concept, the recommended structure for projects using Kustomize to manage multiple environments (i.e., dev, test, production). As an argument, you provide the path to an overlay (for example, dev). The overlay includes the base manifests and applies amendments, like adding new environment-specific resources, or patching or deleting resources from the bases.
For example, in a dev environment we want to deploy a dev Shopware container, but we don’t want it deployed in production. The dev overlay therefore contains an additional manifest to deploy the Shopware dev container.
Here is how the project is organized:
deploy
├── bases
│ ├── app
│ ├── database
│ ├── opensearch
│ ├── redis
│ └── storage
└── overlays
├── dev
├── production
└── test
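As a sketch of how an overlay ties into the bases, a hypothetical deploy/overlays/dev/kustomization.yaml could look like this (the actual file in the repository may differ):

```yaml
# Hypothetical dev overlay: include the shared bases and add the
# dev-only Shopware deployment on top of them.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../bases/app
  - ../../bases/database
  - ../../bases/opensearch
  - ../../bases/redis
  - ../../bases/storage
  - app/app-server-dev.yaml  # dev-only manifest with Xdebug enabled
```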
- deploy/bases/app/app-server.yaml: Deployment of the Shopware application server (production mode).
- deploy/bases/app/ingress.yaml: Creates an ingress object that exposes the application service and the media service with the local test domains http://shopware.test and http://media.test.
- deploy/bases/app/task-scheduler.yaml: Deployment of the task-scheduler, responsible for sending tasks to the message queue.
- deploy/bases/app/message-consumer.yaml: Deployment of the message-consumer, responsible for executing tasks pulled from the message queue.
- deploy/bases/app/shopware-init.yaml: Initialization job that must be run on every new rollout, ideally before deploying the app-server, task-scheduler and message-consumer. It performs the following operations:
  - Plugin and app installation or updates
  - Migrations
  - Theme compilation
  - Assets installation
  - Shopware basic (fresh) installation if the database is empty in a newly created Kubernetes cluster.
- deploy/bases/database: Manifests to deploy a MariaDB database server.
- deploy/bases/opensearch: Manifests to deploy a basic single-node OpenSearch setup.
- deploy/bases/redis: Manifests to deploy a basic single-node Redis setup. It creates one instance for the cache and a separate one for customer sessions.
- deploy/bases/storage/minio-tenant.yaml: Deploys a MinIO storage instance (a tenant). The MinIO operator must already be installed in the cluster.
- deploy/overlays/dev/app/app-server-dev.yaml: Deployment of the Shopware container (dev mode with Xdebug enabled).
- deploy/overlays/dev: Source directory to process the dev application deployment.
- deploy/overlays/production: Source directory to process deployments for a production cluster.
- test/e2e/jobs/e2e.yaml: A Job manifest to run end-to-end tests based on the Shopware Playwright Acceptance Tests framework.
- test/e2e/jobs/step-summary.yaml: A Job manifest to print the end-to-end test summary for GitHub Actions.
- test/integration/jobs/integration-tests.yaml: A Job manifest to run integration tests.
To verify that the manifest files can be processed correctly, execute the following two commands:
kustomize build deploy/overlays/dev
kustomize build deploy/overlays/production
These commands should either output all Kubernetes objects that will be applied to the cluster, or print an error message if there are any issues.
How to configure Shopware deployments with Skaffold
The deployment configuration in the skaffold.yaml consists of three parts:
- manifests: location of the manifest files (deploy/overlays/dev for development, deploy/overlays/production for production).
- deploy: deployment settings.
- deploy.helm: the list of Helm charts to be installed.
Skaffold, on deployment, first processes all the Helm charts and then applies the manifests.
...
manifests:
  kustomize:
    paths:
      - deploy/overlays/dev
deploy:
  statusCheck: true
  # Fail deployment if it doesn't stabilize within 20 minutes.
  # A cold start may be long in case many plugins need to be activated.
  statusCheckDeadlineSeconds: 1200
  # Pods may initially start and crash but eventually self-heal.
  # This parameter allows for failures until
  # the deadline, set by the statusCheckDeadlineSeconds above.
  tolerateFailuresUntilDeadline: true
  kubectl:
    defaultNamespace: shopware
  helm:
    # Install all required operators
    releases:
      - name: kubernetes-secret-generator
        repo: https://helm.mittwald.de
        remoteChart: kubernetes-secret-generator
        namespace: secret-generator
        createNamespace: true
        wait: true
        version: 3.4.0
        setValues:
          image: # overrides default values
            registry: ghcr.io
            # This image isn't official but supports both amd64 and arm64.
            repository: belodetek/kubernetes-secret-generator
            tag: 0.0.4
...
Starting Shopware deployment
skaffold deploy --build-artifacts=.build-artifacts.json
The input file is the result of the previously built container images. Alternatively, you can build and deploy everything with the single command skaffold run.
How to access Shopware on local Kubernetes via web browser
On a Linux host, it should all work out of the box. The storefront should be accessible via http://shopware.test.
On macOS and Windows, you additionally need to open a tunnel that exposes the LoadBalancer services:
minikube tunnel
Demostore homepage after starting the project and opening the tunnel
Running Integration tests inside cluster
Integration testing in CI on a lightweight Kubernetes cluster is generally a good practice when your production environment is also Kubernetes-based. This approach increases the reliability of the tests and includes Kubernetes-specific dependencies such as ConfigMaps, Secrets or ServiceAccounts.
To run integration tests with the simple command skaffold verify, you need a custom job that provides references to the Shopware configuration stored in the ConfigMaps and Secrets. It’s not possible to set this up in the skaffold.yaml file directly. Here’s an example of such a job manifest:
# ./test/integration/jobs/integration-tests.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: integration-tests
spec:
  backoffLimit: 4
  template:
    spec:
      containers:
        - name: integration-tests
          # Use dev image that has phpunit preinstalled.
          image: kiweeteam/franken-shopware-dev
          envFrom:
            - configMapRef:
                name: shopware-app-config
            - secretRef:
                name: shopware-app-config
            - secretRef:
                name: database-credentials
          env:
            - name: APP_ENV
              value: "test"
            - name: SHOPWARE_ADMINISTRATION_PATH_NAME
              value: "admin_$(SHOPWARE_ADMINISTRATION_PATH_SUFFIX)"
            - name: DATABASE_URL
              value: "mysql://$(MYSQL_USER):$(MYSQL_PASSWORD)@$(MYSQL_HOST):$(MYSQL_PORT)/$(MYSQL_DATABASE)"
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  key: CONSOLE_ACCESS_KEY
                  name: shopware-s3
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  key: CONSOLE_SECRET_KEY
                  name: shopware-s3
      restartPolicy: Never
Next, add a new verify job in the skaffold.yaml. The following example runs the demo plugin’s integration tests:
# ./skaffold.yaml
...
verify:
  ...
  - name: integration-tests
    container:
      name: integration-tests
      image: kiweeteam/franken-shopware-dev
      command:
        - /bin/sh
      args:
        - -c
        - |
          set -e
          echo "Running integration tests of shopware-demo-plugin..."
          /shopware-bin php-cli vendor/bin/phpunit -c vendor/kiwee/shopware-demo-plugin/phpunit.xml
          echo "Integration tests finished."
    executionMode:
      kubernetesCluster:
        jobManifestPath: test/integration/jobs/integration-tests.yaml
...
Use the .build-artifacts.json file, generated by the skaffold build command, which references the recently built container images:
skaffold verify -a .build-artifacts.json
Alternatively, you can run the integration tests with kubectl. However, the dev Shopware application Pod (app-server-dev) must already be running before starting the tests.
kubectl exec service/app-server-dev -n shopware -- /shopware-bin php-cli vendor/bin/phpunit -c vendor/kiwee/shopware-demo-plugin/phpunit.xml
If there is more than one app-server-dev Pod, it doesn’t matter which one runs the tests; kubectl exec service/... picks one Pod associated with the service.
Running Playwright end-to-end tests inside cluster
The same principle as with integration testing applies to end-to-end testing. Because end-to-end tests involve not only all services but the user interface as well, they benefit even more from a production-like environment.
Here is how you can use the Shopware Acceptance Test Suite to launch tests in the cluster by simply running the skaffold verify command.
Let’s first create a simple test scenario that validates adding a product to the cart:
// ./test/e2e/tests/CartTest.spec.ts
import { test, expect } from '../BaseTest';

test('Product detail test scenario', async ({
    ShopCustomer,
    StorefrontProductDetail,
    ProductData,
    AddProductToCart,
}) => {
    await ShopCustomer.goesTo(StorefrontProductDetail.url(ProductData));
    await ShopCustomer.attemptsTo(AddProductToCart(ProductData));
    await ShopCustomer.expects(StorefrontProductDetail.offCanvasSummaryTotalPrice).toHaveText('€10.00*');
});
The Playwright configuration needs to include the storefront URL, the administration URL, and the admin credentials, as follows:
// ./test/e2e/playwright.config.ts
import { defineConfig, devices } from '@playwright/test';

process.env['SHOPWARE_ADMIN_USERNAME'] = process.env['SHOPWARE_ADMIN_USERNAME'] || 'admin';
process.env['SHOPWARE_ADMIN_PASSWORD'] = process.env['SHOPWARE_ADMIN_PASSWORD'] || 'shopware';

const defaultAppUrl = 'http://shopware.test/';
process.env['APP_URL'] = process.env['APP_URL'] ?? defaultAppUrl;
// make sure APP_URL ends with a slash
process.env['APP_URL'] = (process.env['APP_URL'] ?? '').replace(/\/+$/, '') + '/';

if (process.env['ADMIN_URL']) {
    process.env['ADMIN_URL'] = process.env['ADMIN_URL'].replace(/\/+$/, '') + '/';
} else {
    process.env['ADMIN_URL'] = process.env['APP_URL'] + 'admin_' + process.env['SHOPWARE_ADMINISTRATION_PATH_SUFFIX'] + '/';
}

export default defineConfig({
    testDir: './tests',
    fullyParallel: true,
    forbidOnly: !!process.env.CI,
    timeout: 60000,
    expect: {
        timeout: 10_000,
    },
    retries: 0,
    workers: process.env.CI ? 2 : 1,
    reporter: process.env.CI ? [
        ['list'],
        ['@estruyf/github-actions-reporter', <GitHubActionOptions>{
            title: 'E2E Test Results',
            useDetails: true,
            showError: true,
            debug: true
        }]
    ] : 'html',
    use: {
        baseURL: process.env['APP_URL'],
        trace: 'retain-on-failure',
        video: 'off',
    },
    projects: [
        {
            name: 'chromium',
            use: { ...devices['Desktop Chrome'] },
        }
    ],
});
Finally, the BaseTest.ts file is needed to re-export everything from the acceptance-test-suite framework, and the package.json to install all dependencies.
// ./test/e2e/BaseTest.ts
export * from '@shopware-ag/acceptance-test-suite';
// ./test/e2e/package.json
{
    "dependencies": {
        "@shopware-ag/acceptance-test-suite": "^11.6.0",
        "playwright": "^1.10.0",
        "@estruyf/github-actions-reporter": "^1.10.0"
    }
}
The Dockerfile to build the test Docker image needs to include the following elements:
- Installation of required libraries and dependencies.
- Installation of Playwright.
- Copying the test code into the container image.
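Those elements could be assembled along these lines; a hedged sketch of the test-image Dockerfile (the Playwright base image tag and the file layout are assumptions, not the project's actual Dockerfile):

```dockerfile
# Hypothetical sketch of the e2e test image.
FROM mcr.microsoft.com/playwright:v1.49.0-noble
WORKDIR /e2e
# Install the test dependencies, including the acceptance test suite.
COPY package.json ./
RUN npm install
# Copy the test code and configuration into the image.
COPY playwright.config.ts BaseTest.ts ./
COPY tests ./tests
CMD ["npx", "playwright", "test"]
```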
Then, run the build.
skaffold build --file-output=.build-artifacts.json
The --file-output option generates a file that lists the recently built container images (their names and tags), needed for future reference.
The default test job Skaffold creates doesn’t have references to the Shopware secrets in the cluster. Skaffold allows you, however, to define a custom job manifest where you can add the references to the secrets required by the tests.
# ./test/e2e/jobs/e2e.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e
spec:
  template:
    spec:
      ...
      containers:
        - name: e2e
          image: kiweeteam/shopware-e2e
          # include shopware config and secrets for the e2e tests
          envFrom:
            - configMapRef:
                name: shopware-app-config
            - secretRef:
                name: shopware-app-config
      ...
# ./skaffold.yaml
verify:
  ...
  - name: e2e
    container:
      name: e2e
      image: kiweeteam/shopware-e2e
    executionMode:
      kubernetesCluster:
        jobManifestPath: test/e2e/jobs/e2e.yaml
  ...
Once everything is set up and Shopware has been deployed into the cluster, run the verify command, passing the file with the recently built test Docker image reference as the -a parameter:
skaffold verify -a .build-artifacts.json
Skaffold will execute the end-to-end test job and display test results.
How to debug Shopware in Kubernetes cluster?
A comprehensive guide on how to debug an application on Kubernetes is available in the Kubernetes documentation. Let’s focus, however, on solutions specific to Shopware.
How to inspect Shopware workloads?
kubectl describe deployment/app-server -n shopware
This command displays complete information about the current status of the Shopware deployment, which includes:
- Deployment strategy.
- Number of Shopware Pod replicas (desired, available, unavailable, total).
- Init-containers list and the commands these containers execute.
- Shopware application container details such as image version, container ports exposed to the cluster, command and arguments, resource requirements and limits (CPU, memory), config maps and secrets used, and persistent volumes mounted.
- A history of statuses and error messages.
This command will give you valuable insights into the cause when a deployment fails.
The describe command can be used on any resource to provide detailed information about its current status, including Pods, Services, Ingresses, and others:
# inspect shopware service
kubectl describe service/app-server -n shopware
# inspect all shopware Pods selected by label app=shopware
kubectl describe pods -n shopware -l app=shopware
# inspect all ingresses in the shopware namespace
kubectl describe ingress -n shopware
Code debugging with Xdebug in Kubernetes cluster
To be able to follow a complete stack trace, you need to copy the vendor dependencies and the index.php script from a dev Pod to your local filesystem.
# Get Shopware application dev Pod name.
APP_DEV=$(kubectl get Pod -n shopware --no-headers -l app=shopware-dev -o=custom-columns=NAME:.metadata.name) && \
# Copy vendor/ and public/ from the app-server-dev container to your local project directory.
kubectl cp shopware/${APP_DEV}:/app/vendor ./vendor -c app-server-dev && \
kubectl cp shopware/${APP_DEV}:/app/public ./public -c app-server-dev
The next step is to enable Xdebug to establish a DBGP connection with your IDE. Use the ktunnel tool to open a reverse tunnel between the Shopware app-server-dev Pod running in the Kubernetes cluster and your IDE.
ktunnel inject deployment -n shopware app-server-dev 9003
This command injects a sidecar container that opens a tunnel on port 9003, the port used by Xdebug.
Xdebug connection with code editor.
Finally, expose the app-server-dev Pod with the port-forward command, which forwards the local port 8000. The dev server is then accessible via http://localhost:8000. Ensure this URL is also added to the sales channel you want to debug.
kubectl port-forward deploy/app-server-dev -n shopware 8000:8000
Add the http://localhost:8000 URL to the sales channel for debugging.
The homepage in dev mode once port 8000 is forwarded to the app-server-dev
Configure PhpStorm / IntelliJ IDE for debug on Kubernetes
Open Settings > Languages & Frameworks > PHP > Servers and add a new localhost server if it doesn’t already exist. As the host, type localhost, and as the port, 8000. Activate the checkbox Use path mappings (...). Finally, map the custom directory to /app/custom, public to /app/public, and vendor to /app/vendor.
After saving, you’re ready to debug! Click Start Listening for PHP Connections and open the page you want to debug in the browser.
PHP server configuration screen in PhpStorm and IntelliJ
Button to start listening for PHP debug connections
Example debug screen on IntelliJ / PhpStorm
Configure Visual Studio Code for debug on Kubernetes
First off, ensure that you have already installed the PHP Debug extension from Xdebug.
Next, open or create the launch.json file, located in the .vscode folder in your project directory. Update the Listen for Xdebug settings by adding or modifying the path mappings between the local workspace and the container. It should look as follows:
{
    "name": "Listen for Xdebug",
    "type": "php",
    "request": "launch",
    "port": 9003,
    "pathMappings": {
        "/app/custom": "${workspaceRoot}/custom",
        "/app/public": "${workspaceRoot}/public",
        "/app/vendor": "${workspaceRoot}/vendor"
    }
},
Voilà! You’re all set for debugging. Just click the Debug and Listen for Xdebug buttons, and open the page for debugging in the browser.
How to start debugging with Xdebug in Visual Studio Code
Example debug screen on Visual Studio Code
Summary
A cloud-native Shopware application needs to be stateless to become horizontally scalable. This can be achieved by:
- Configuring shared cache storage such as Redis instead of the default local filesystem-based cache.
- Configuring shared storage for media and private files, ideally an S3-compatible object storage.
- Configuring logs to be streamed to stdout and stderr so they can be aggregated and stored in a central index like Loki or Elasticsearch, or in SaaS solutions like New Relic or Datadog.
To deploy, test, and debug your Shopware into your local Kubernetes cluster:
- Containerize your Shopware application: Consider using FrankenPHP as an alternative to PHP-FPM with Apache or NGINX. FrankenPHP allows you to build an entire application into a standalone binary, enabling you to use minimal, and therefore more secure, base container images.
- Build and test: You can leverage Skaffold, a handy tool to build, deploy, and test Shopware on Kubernetes with simple commands: skaffold build to build container images, skaffold run to start the whole Shopware stack locally, and skaffold verify to run the tests.
- Debug: To debug Shopware in the cluster, deploy Shopware with the Xdebug PHP extension enabled. Then, use the ktunnel tool to allow Xdebug connections with your code editor.
I encourage you to contribute to the Shopware-Kube project with your ideas, suggestions, or issues you discovered. We at Kiwee will continue to adapt and test FrankenPHP as an application server for Shopware.
If you need consultancy for your cloud-based Shopware infrastructure or would like to build your own stack, don't hesitate to contact us! We'll be happy to support you.