In the previous section we discussed why you should not store configuration inside a container, and why a container should not know anything about the cluster it runs on. We can take this a step further: each container should be unaware that it is running inside Kubernetes at all. Unless you are developing a cluster-management application, your application should not touch the Kubernetes API or depend on external services that are assumed to live inside the cluster. Failing to isolate the application from the cluster is a common mistake when teams adopt Kubernetes. Classic examples include applications that:
Expect a specific volume configuration in order to share data with other pods
Expect specific service/DNS names to be set up by Kubernetes networking, or assume the presence of specific open ports
Collect data from Kubernetes labels and annotations
Query their own pod for information (e.g. to discover their own IP address)
Require an init or sidecar container in order to function properly, even on a developer's workstation
Directly contact other Kubernetes services (e.g. using the Vault API to fetch secrets from a Vault installation that is assumed to be present on the cluster)
Read data from a local Kubernetes configuration
Use the Kubernetes API directly from within the application
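The contrast between a cluster-coupled and a cluster-agnostic application can be sketched in a few lines. The `reports` service name and the `REPORTS_URL` environment variable below are hypothetical placeholders for illustration, not from any real deployment:

```python
import os

def backend_url_coupled() -> str:
    # Anti-pattern: hard-codes the Kubernetes service DNS convention,
    # so the code only works inside a cluster with this exact layout.
    return "http://reports.default.svc.cluster.local:8080"

def backend_url_agnostic() -> str:
    # Cluster-agnostic: whoever runs the app (Kubernetes, Docker Compose,
    # or a laptop) injects the address; the app never asks "where am I?".
    return os.environ.get("REPORTS_URL", "http://localhost:8080")
```

In the second version, Kubernetes injects `REPORTS_URL` via the pod spec or a ConfigMap, while a local run simply falls back to localhost; the application code is identical in both cases.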
Of course, if your application is Kubernetes-specific (for example, an autoscaler or operator), it will need to directly access Kubernetes services. However, for the other 99% of standard web applications out there, your application should be completely unaware that it is running within Kubernetes.
The litmus test for whether your application is tied to Kubernetes is whether you can run it with Docker Compose. If creating a Docker Compose file for your app is trivial, you are following the 12-factor app principles, and the app can be installed on any cluster without special settings.
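If the application passes this litmus test, the Compose file stays small. The sketch below is illustrative only; the service names, image tag, and `REPORTS_URL` variable are assumptions, not a real deployment:

```yaml
# Minimal sketch of the "litmus test" Compose file.
services:
  web:
    build: .
    ports:
      - "8080:8080"
    environment:
      # The same variable a Kubernetes manifest would inject via env/ConfigMap
      REPORTS_URL: http://reports:8080
  reports:
    image: example/reports:1.0
```

If writing such a file requires replicating init containers, DNS tricks, or Kubernetes API access, that is a strong signal the application is coupled to the cluster.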
It is also important to understand the role of local Kubernetes testing. Several local Kubernetes distributions are available today (minikube, MicroK8s, kind, etc.). Looking at these solutions, you might conclude that a developer working on an application deployed to Kubernetes must also run Kubernetes locally.
If your application is well-structured, you can run integration tests locally without Kubernetes: simply run them against a standalone instance of the application (via Docker or Docker Compose).
Some of your dependencies may be hosted on a separate Kubernetes cluster, and that is fine. However, Kubernetes should not be required to host the application itself during testing. If you prefer, you can use one of the many specialised tools designed for local Kubernetes development, such as Okteto, garden.io, or tilt.dev.
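A test suite for such an application only ever needs a base URL, never a cluster. The `APP_URL` variable and the `/healthz` endpoint below are assumptions for illustration:

```python
import os

# The suite targets whatever instance the environment points at:
# a Compose stack locally, or a deployed pod in CI.
# Nothing Kubernetes-specific is ever imported.
BASE_URL = os.environ.get("APP_URL", "http://localhost:8080")

def health_endpoint() -> str:
    # Hypothetical endpoint; a real test would call it with an HTTP client
    # and assert on the response.
    return f"{BASE_URL}/healthz"
```

Because the tests know nothing about pods or services, the same suite runs unchanged against Docker, Docker Compose, or a full cluster.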
Blog Series:
Blog #1: Kubernetes Design Pattern Series - An Overview
Blog #2: K8s Design Pattern Series - Fundamental Patterns
Blog #3: K8s Design Pattern Series - Structural Patterns
Blog #4: K8s Design Pattern Series - Behavioral Patterns
Blog #5: K8s Design Pattern Series - Higher Level Patterns
Blog #6: K8s Design Pattern Series - Summary
Blog #7: K8s Anti-Design Pattern Series
Blog #8: K8s Anti-Design Pattern Series - Putting the Configuration into the Images of the Containers
Blog #9: K8s Anti-Design Pattern Series - Connecting Applications to Kubernetes Features/Services without Justification
Blog #10: K8s Anti-Design Pattern Series - Mixing Infrastructure and Application Deployment
Blog #11: K8s Anti-Design Pattern Series - Deploying without Memory and CPU Limits
Blog #12: K8s Anti-Design Pattern Series - Understanding Health Probes in Kubernetes
Blog #13: K8s Anti-Design Pattern Series - The Pitfall of Ignoring Helm in Kubernetes Package Management
Blog #14: K8s Anti-Design Pattern Series - Why Deployment Metrics Matter in Kubernetes
Blog #15: K8s Anti-Design Pattern Series - To Kubernetes or Not to Kubernetes: Weighing Pros and Cons
Blog #16: K8s Anti-Design Pattern Series - Connecting Applications to Kubernetes Features/Services
Blog #17: K8s Anti-Design Pattern Series - Manual Kubectl Edit/Patch Deployments
Blog #18: K8s Anti-Design Pattern Series - Kubernetes Deployments with Latest-Tagged Containers
Blog #19: K8s Anti-Design Pattern Series - Kubectl Debugging
Blog #20: K8s Anti-Design Pattern Series - Misunderstanding Kubernetes Network Concepts
Blog #21: K8s Anti-Design Pattern Series - Dynamic Environments in Kubernetes: Why Fixed Staging is an Anti-Design
Blog #22: K8s Anti-Design Pattern Series - Combining Clusters of Production and Non-Production