Is my application suitable for Kubernetes?

Answering the question in the title requires understanding your application's specific needs and the strengths and potential challenges Kubernetes presents.

In this article, I will dive into Kubernetes fundamentals, how to assess an application's compatibility with Kubernetes, the architectural prerequisites, and the scenarios where another solution might be better suited. Whether you're a startup making foundational infrastructure decisions or a mature business considering a shift, this guide highlights the architectural and infrastructure requirements an application must meet to benefit from all that Kubernetes offers.

What Kubernetes is and what it isn’t

Kubernetes is an open-source platform that standardizes and automates containerized applications' deployment, scaling, and management. Its main features include:

  • Rollouts and Rollbacks: it provides features to gradually roll out changes to a service or roll back to a previous version in case issues arise.
  • Service Discovery and Load Balancing: it automatically discovers and routes traffic to the appropriate containers based on the demand or configuration.
  • Scaling: it allows for automatic or manual scaling of services, letting applications handle increased loads.
  • Health Monitoring and Self-healing: it has built-in mechanisms to check the health of services and restart failed containers, ensuring high availability.
  • Configuration and Secret Management: it manages configuration data and secrets without exposing them to unauthorized people or applications.
  • Resource Management and Scheduling: it determines which node should run a given container based on resource availability, constraints, or affinities.
  • Network Configuration: it creates and manages a private network for containers, ensuring they can communicate securely.

Kubernetes is NOT:

  • A virtualization platform: Kubernetes orchestrates containers, not virtual machines.
  • A Platform-as-a-Service (PaaS) per se; however, it serves as the engine for many PaaS offerings from various cloud providers.
  • Cloud-only: Kubernetes can be used on-premises, in the cloud, or in hybrid scenarios.

Advantages of Kubernetes

  • Open source.
  • Platform agnostic.
  • Many distributions are available, crafted for different use-case scenarios, e.g., cloud, on-premises, and edge computing.
  • Many providers offer managed Kubernetes as PaaS.
  • High-Availability capable.
  • Autoscalable and self-healing capable.
  • The Kubernetes ecosystem includes hundreds of tools and extensions, and new ones are actively developed. Many of them are open source and listed on the CNCF landscape page.

Application requirements to run on Kubernetes

It is often said that an application must follow a microservices architecture to run on Kubernetes. That's not true. While microservices tend to be easier to adapt for cluster deployment, a monolithic application can work efficiently in a Kubernetes cluster if it is containerized and follows the guidelines described in this article.

What does containerizing an application mean?

Containerization is a technology for packaging applications with all the tools and libraries they need to function correctly and running them in isolation from other containers and processes. Container images can be uploaded to a container registry, and then deployed to a cluster or downloaded to a local development environment.


Twelve-factor App methodology ideal for Kubernetes

Although the Twelve-Factor (also known as 12-Factor) methodology was created several years before containers became a standard, it is an excellent guide to the best practices to follow to deploy and run an application on Kubernetes efficiently. Here is what it means to adopt the 12 factors in relation to containers and Kubernetes:

  1. Codebase: There should be a 1:1 relation between the code base and the application. In other words, the application needs to be built and packed into a container image with all its dependencies and system libraries, pushed into a container images registry, and deployed to a cluster.


  2. Dependencies: Most programming languages or application frameworks have a package manager that should be used to download dependencies during the container build. This also applies to specific system libraries or tools, which can be provided either by a base container image that already contains them or installed during the build.
  3. Configuration: Application settings that vary between stages or environments must not be hardcoded (e.g., a database hostname). For that purpose, Kubernetes uses ConfigMaps to store text or key-value pairs. ConfigMaps can be provided to an application either via system environment variables or files.
  4. Backing services: The application must consume backing services in such a way that replacing any of them doesn't require changes to the application code. For example, the database connection URL must not be embedded in the application code but kept, e.g., in a configuration file. That way, replacing a self-hosted MySQL server with one managed in the cloud does not require any code changes. The same applies to any external APIs the application talks to. Such configuration can then be managed per environment using ConfigMaps, which can be injected into containers as environment variables or mounted as volumes. All passwords and secret keys, though, must be stored as Kubernetes Secrets and never kept in a code repository in plain text. In the Kubernetes ecosystem, many tools support secret management.
  5. Build, release, run: Build, release, and run should be separate processes. Different tools can even be used for build and deployment. The first factor already describes a build example. For managing deployments, I recommend the GitOps approach, where the cluster maintains the application's state as declared in the cluster state code repository.
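As a minimal sketch of the configuration and backing-services factors, an application can assemble its database URL entirely from environment variables. The variable names below are hypothetical conventions, not anything Kubernetes mandates; in a cluster, the non-secret values would typically be injected from a ConfigMap and the password from a Secret:

```python
import os

def database_url():
    """Build the backing-service connection URL from environment
    variables instead of hardcoding it in the application code."""
    host = os.environ.get("DB_HOST", "localhost")
    port = os.environ.get("DB_PORT", "3306")
    name = os.environ.get("DB_NAME", "app")
    user = os.environ.get("DB_USER", "app")
    password = os.environ.get("DB_PASSWORD", "")  # injected from a Secret, never committed
    return f"mysql://{user}:{password}@{host}:{port}/{name}"
```

Swapping a self-hosted database for a managed one then only requires changing the injected configuration, not the code.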


  6. Processes: An application should run as one or multiple stateless processes. All persistent data must be stored in backing services: persistent data in a dedicated database, shared cache in fast storage (e.g., Redis or Memcached), and media files in file storage (e.g., S3-compatible object storage, NFS, or Ceph).
  7. Port binding: Depending on the use case, an application should run in an app server that is exposed either to the public internet or internally within the cluster.
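The port-binding factor can be sketched with Python's standard library. Reading the listen port from a `PORT` environment variable is an assumed convention here, not something Kubernetes requires; the point is that the app binds a port itself rather than relying on an external web server:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_port(default=8080):
    """Port to bind, taken from the PORT environment variable if set."""
    return int(os.environ.get("PORT", default))

class HealthHandler(BaseHTTPRequestHandler):
    """Minimal handler answering HTTP GET (e.g., readiness checks) with 200 OK."""
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve():
    # Bind on all interfaces so a Kubernetes Service can route
    # traffic to the container's port; serve_forever() blocks.
    HTTPServer(("0.0.0.0", get_port()), HealthHandler).serve_forever()
```

A Kubernetes Service (or Ingress) then decides whether that port is reachable publicly or only inside the cluster.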


  8. Concurrency: An application should work in a multi-process model so Kubernetes can scale it automatically depending on available resources and actual demand. It is essential that the application starts up quickly to ensure efficient autoscaling.
  9. Disposability: An expected or unexpected application shutdown shouldn't cause any damage or side effects. Graceful shutdowns allow for uninterrupted availability even when the application is being scaled down.
  10. Dev/Prod Parity: A development version of an application, including its backing services, should be as close to production as possible. Thanks to containers, this is much less complex to set up than installing the whole software stack on a local computer.
  11. Logs: Storing logs in their default locations when an application runs at scale makes analysis or debugging almost impossible. Therefore, all logs should be routed to the standard output streams (stdout and stderr). This allows tools like Fluentd or Promtail to collect them for further processing.
  12. Admin processes: Administrative processes that need to be run occasionally to keep the application in the desired state must not be handled by the application server itself. Doing so may lead to duplicated operations and race conditions. Instead, such processes should be run as a Kubernetes CronJob or Job using the same version of the application container image.
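As one way to implement the disposability factor, a worker process can trap SIGTERM, the signal Kubernetes sends to a container before terminating its Pod, and let the main loop finish in-flight work before exiting. This is a minimal sketch, not a full production setup:

```python
import signal

class GracefulShutdown:
    """Record SIGTERM/SIGINT so the main loop can stop cleanly
    instead of being killed mid-task."""
    def __init__(self):
        self.stop = False
        # Handlers must be installed in the main thread.
        signal.signal(signal.SIGTERM, self._handle)
        signal.signal(signal.SIGINT, self._handle)

    def _handle(self, signum, frame):
        self.stop = True

def main_loop(shutdown, do_work):
    """Process work items until a shutdown signal arrives."""
    while not shutdown.stop:
        do_work()
    # Finish up here: flush buffers, close DB connections, etc.
```

Kubernetes waits a configurable grace period (30 seconds by default) after SIGTERM before force-killing the container, so cleanup must fit within that window.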

Continuous Integration and Continuous Delivery (CI/CD)

CI/CD processes are not strictly required to deploy an application to Kubernetes. Still, they offer significant advantages for streamlining and automating the delivery process and preventing untested versions of an application from being deployed.

Backward compatibility of your application

Backward-incompatible changes should be introduced gradually and broken down into two or more consecutive versions. For example, when releasing a feature update that depends on a new data source (e.g., a different table in the database), the old data source must not be removed in the same release. This allows zero-downtime rolling or canary deployments, in which some users receive the new version while others continue to use the previous one for some time.
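The two-step pattern above can be sketched as a read-side fallback. The field names here are hypothetical: the new release writes `contact_email`, while rows written by the previous release only carry the legacy `email` field, so both application versions keep working during a rolling deployment:

```python
def read_contact_email(row):
    """Read a user's email during a two-phase schema migration.

    Prefer the new 'contact_email' field, but fall back to the legacy
    'email' field for rows written by the previous release. Only after
    all data is backfilled does a later release drop the old field.
    """
    return row.get("contact_email") or row.get("email")
```

A follow-up release can then backfill old rows and remove the fallback once no previous-version Pods remain.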

Scenarios when Kubernetes isn’t a good fit

Small scale and infrastructure cost

For a simple web application or a backend service that doesn't need any scalability, Kubernetes will be overly complex. The Kubernetes control plane may use more resources than the application itself. For such a use case, simple managed hosting or a serverless platform will be much more affordable and quicker to set up.

No DevOps-qualified person or team

Expert knowledge is mandatory to configure a Kubernetes environment securely and effectively and to set up application deployments. Therefore, a budget needs to be allocated to hire such a person or to delegate the job to an external agency like Kiwee.

Managed or unmanaged Kubernetes variant

Managed Kubernetes means that the cloud platform fully manages the so-called Control Plane. The Control Plane is a term for the software components in charge of keeping a Kubernetes cluster and all workloads running in the cluster operational. Additionally, some providers offer automatic updates of the cluster nodes. Although it is an additional cost, it saves a lot of DevOps engineers' time. Thus, it is largely a cost-optimal solution for the majority of use cases.

Summary

Kubernetes is definitely worth considering when your application follows microservices architecture or if you have a scalable, containerized monolithic application that has implemented the twelve-factor methodology.

If your application is small-scale and uses few resources, or scalability isn't a priority, Kubernetes is unlikely to be a good fit.

Migrating a legacy application to Kubernetes can be considered, but it is often a complex process that requires thorough application modernization. At Kiwee, we begin this process with a series of workshops during which we explore every aspect of the application and outline its architecture. This allows us to plan work to deliver modernized functionalities in an iterative manner.

In the case of eCommerce projects based on the Magento or Shopware platforms, we already have a pre-configured solution to deploy these platforms to the cloud on Kubernetes.

Modernization ultimately results in lower maintenance costs thanks to modern technologies and automation. Developers are also more excited to work on new things with new technologies!
