
K8s Design Pattern Series - Fundamental Patterns - Blog #2

Updated: Apr 12, 2023






Introduction


Cloud native application development is a modern approach to building and deploying software that is optimized for the cloud. Cloud native applications are designed to be scalable, elastic, reliable, and predictable. A set of principles can guide administrators and architects toward these characteristics when running applications on Kubernetes.


Architects try to internalize best practices while designing cloud-native applications. Of all the core elements, containerization is one of the most important fundamental pillars. The foundational patterns identify the key characteristics a containerized application needs in order to fit into the cloud native landscape. Let’s dive into these patterns:


  • Health Probe

  • Predictable Demands

  • Automated Placement

  • Declarative Deployment

  • Managed Lifecycle


Health Probe Pattern


The Health Probe pattern says that each container should expose specific APIs to help the platform observe and manage the application in the healthiest way possible. To be highly automated, a cloud-native application must be highly observable, allowing Kubernetes to detect whether the application is up and running and ready to serve requests. When the liveness probe fails, the kubelet restarts the container; when the readiness probe fails, the Pod is removed from the Service endpoints and stops receiving traffic.
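
As a minimal sketch (the Pod name, image, and the /healthz and /ready endpoint paths are placeholders, not taken from any specific repository), a Pod could wire up both probes like this:

apiVersion: v1
kind: Pod
metadata:
  name: web-app                      # hypothetical name
spec:
  containers:
    - name: web
      image: example/web-app:1.0     # placeholder image
      ports:
        - containerPort: 8080
      livenessProbe:                 # failing this probe restarts the container
        httpGet:
          path: /healthz             # assumed health endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:                # failing this probe removes the Pod from Service endpoints
        httpGet:
          path: /ready               # assumed readiness endpoint
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10

Probe failures show up as events, so kubectl describe pod web-app is usually the first place to look when a container keeps restarting.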


Check the GitHub repo for more details on the implementation of this design pattern.




Predictable Demands Pattern


The Predictable Demands pattern explains why every container should declare its resource profile and stay within the resource requirements it specifies. An application's resource requirements and runtime parameters and dependencies are vital to how it is deployed. Declaring these requirements is what allows Kubernetes to find the best place for your application within the cluster.
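
For illustration, here is a minimal sketch of a Pod that declares its resource profile; the name, image, and the specific CPU and memory values are assumptions chosen for the example:

apiVersion: v1
kind: Pod
metadata:
  name: batch-worker                 # hypothetical name
spec:
  containers:
    - name: worker
      image: example/worker:1.0      # placeholder image
      resources:
        requests:                    # minimum guaranteed resources, used by the scheduler
          cpu: "250m"
          memory: "256Mi"
        limits:                      # hard ceilings enforced at runtime
          cpu: "500m"
          memory: "512Mi"

The scheduler uses the requests to pick a node with enough free capacity, while the limits cap what the container can consume once it is running.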






Automated Placement Pattern


The Automated Placement pattern describes in detail how workloads are distributed across a multi-node cluster. The Kubernetes scheduler's primary role is to assign new Pods to nodes in a way that satisfies container resource demands and honors scheduling policies. This pattern outlines the basic concepts of the Kubernetes scheduling mechanism and explains how external factors can influence its placement decisions.
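
As an illustrative sketch, the manifest below uses node affinity to influence where the scheduler places a Pod; the disktype label, image, and resource values are assumptions for the example:

apiVersion: v1
kind: Pod
metadata:
  name: analytics                    # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype        # assumed node label
                operator: In
                values: ["ssd"]
  containers:
    - name: analytics
      image: example/analytics:1.0   # placeholder image
      resources:
        requests:                    # the scheduler also needs these to find a fitting node
          cpu: "500m"
          memory: "1Gi"

With a required affinity rule, kube-scheduler only considers nodes carrying the matching label; the pattern also covers softer influences such as preferred affinity, taints, and tolerations.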







Declarative Deployment Pattern


Automating deployments is an essential aspect of operating a cloud native application on Kubernetes. The Declarative Deployment pattern relies on the Deployment workload resource, which provides declarative updates for Pods and ReplicaSets. In the same vein, containerized applications must behave like model cloud native citizens for a deployment to succeed.
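
A minimal sketch of such a declarative update is shown below; the names, image tag, and rollout parameters are assumptions chosen for illustration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                      # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate              # replace Pods gradually instead of all at once
    rollingUpdate:
      maxSurge: 1                    # at most one extra Pod during the rollout
      maxUnavailable: 1              # at most one Pod below the desired count
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: example/web-app:1.1 # placeholder image; changing this triggers a rollout
          ports:
            - containerPort: 8080

Changing the Pod template (for example, the image tag) and re-applying the manifest is all that is needed; Kubernetes converges the running state toward the new declaration.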


The ability to reliably start and stop a group of Pods is fundamental to any deployment. As a rule, containers should listen for and respond to SIGTERM and other lifecycle events, and they should expose health-check endpoints so the platform knows whether they booted up properly.
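
The sketch below illustrates those two expectations on the container side: a termination grace period and preStop hook for a clean shutdown on SIGTERM, and a readiness probe that reports whether the container booted up properly. The name, image, endpoint path, and timings are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: graceful-web                 # hypothetical name
spec:
  terminationGracePeriodSeconds: 30  # time allowed between SIGTERM and SIGKILL
  containers:
    - name: web
      image: example/web-app:1.0     # placeholder image
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 5"]   # brief drain window before SIGTERM is sent
      readinessProbe:                # signals whether the container started successfully
        httpGet:
          path: /ready               # assumed readiness endpoint
          port: 8080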







Continuing the Blog Series:



