
K8s Anti-Design Pattern Series - Kubernetes deployments with latest-tagged containers - Blog #18

Updated: Apr 12





Summary


This should not be shocking to anyone who has worked with containers. Tagging Docker images with "latest" is a poor practice in and of itself, because "latest" is simply the literal name of the tag and says nothing about when the image was built. If no tag is specified when referencing a container image, the default is latest. Not knowing exactly what has been deployed to your Kubernetes cluster is bad enough; relying on the "latest" tag makes it even worse.
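
As a quick illustration (the image name here is the same hypothetical one used in the manifest below), the following two container entries reference exactly the same image, because Kubernetes falls back to the latest tag when none is given:

containers:
- name: implicit-latest
  image: docker.io/myusername/my-app          # no tag specified, so this resolves to :latest
- name: explicit-latest
  image: docker.io/myusername/my-app:latest   # exactly the same image reference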


Deploying a manifest like the one below means you lose all knowledge of which container version is actually running. The "latest" tag does not necessarily mean anything, because container tags are mutable: the image behind it might have been built 3 minutes ago or 3 months ago. To find out which version the image actually contains, you have to either pull it and inspect it locally or dig through your CI system's logs. Combined with an Always image pull policy, the "latest" tag becomes extremely perilous. Assume that Kubernetes has decided your node has become unresponsive and must be restarted to restore its health (remember, that is why you are using Kubernetes in the first place).



apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-bad-deployment
spec:
  selector:
    matchLabels:
      app: my-badly-deployed-app
  template:
    metadata:
      labels:
        app: my-badly-deployed-app
    spec:
      containers:
      - name: dont-do-this
        image: docker.io/myusername/my-app:latest
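
To make the risk concrete, here is a minimal sketch (reusing the same hypothetical names and image as above) of what the paragraph above describes: the same deployment with an explicit Always pull policy, so every pod restart re-pulls whatever "latest" happens to point to at that moment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-bad-deployment
spec:
  selector:
    matchLabels:
      app: my-badly-deployed-app
  template:
    metadata:
      labels:
        app: my-badly-deployed-app
    spec:
      containers:
      - name: dont-do-this
        image: docker.io/myusername/my-app:latest
        # Every restart or reschedule of this pod re-pulls the "latest" tag,
        # so pods can silently end up running different builds of the image.
        imagePullPolicy: Always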



DEPLOYMENT STRATEGY


If your pull policy permits it, Kubernetes will reschedule the pod and re-pull the "latest" image from your Docker registry. If the "latest" tag has been updated in the meantime, that pod now runs a different version than the other pods, which is usually not what you want. It also goes without saying that relying on this style of "deployment", manually killing pods and waiting for them to pull your "latest" image again, is a disaster waiting to happen. Proper Kubernetes deployments should follow a well-defined tagging strategy. As long as you have a strategy, its specific details matter less.
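
For contrast, here is a minimal sketch of a deployment that follows such a strategy (the deployment name, labels, and version tag are illustrative); it pins an exact, immutable tag instead of "latest":

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-good-deployment
spec:
  selector:
    matchLabels:
      app: my-properly-deployed-app
  template:
    metadata:
      labels:
        app: my-properly-deployed-app
    spec:
      containers:
      - name: pin-your-tags
        image: docker.io/myusername/my-app:v1.0.1   # immutable, human-readable version
        # IfNotPresent is already the Kubernetes default for non-latest tags; with an
        # immutable tag, re-pulling would yield the same artifact anyway.
        imagePullPolicy: IfNotPresent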



IMPROVEMENT IDEAS


  • Using the Git commit hash as the tag. This is simple to implement, but a Git hash is not something a layperson can easily read, so it may be more than you need.

  • Implementing semantic versioning with application identifiers (e.g., docker.io/my-username/my-app:v1.0.1). We suggest this approach because of the many benefits it offers to everyone involved, not just programmers.

  • Using identifiers that represent a sequence, such as a build number or a build date/time. This style is harder to work with, but legacy applications can adopt it easily. (See the sketch after this list for what each of these tag formats might look like.)
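
For illustration only (the repository name, hash, and build number below are made up), the three approaches produce image references along these lines:

containers:
- name: git-hash-tag
  # 1. Git commit hash: trivially traceable to a commit, but not human-readable
  image: docker.io/my-username/my-app:7e20b4a
- name: semver-tag
  # 2. Semantic version: readable and meaningful for everyone involved
  image: docker.io/my-username/my-app:v1.0.1
- name: build-number-tag
  # 3. Sequential build number or build date/time: easy for legacy pipelines to adopt
  image: docker.io/my-username/my-app:1457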





All you really need to agree on is that container tags are immutable. A Docker image with the version number 2.0.5 should be built exactly once and then promoted, unchanged, to the different environments.
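
As a minimal sketch of what that looks like in practice (the namespaces and names below are hypothetical), both the staging and the production manifest reference the exact same immutable tag instead of rebuilding or re-tagging the image:

# Hypothetical staging manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: staging
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: docker.io/my-username/my-app:v2.0.5   # built once in CI
---
# Hypothetical production manifest: the same image, promoted unchanged
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: docker.io/my-username/my-app:v2.0.5   # identical artifact, never rebuilt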


If a release is using the image tagged v2.0.5, you should be able to tell at a glance. You can pull this image locally and rest assured that it is identical to the one running in the cluster, and you can also quickly trace it back to its original Git hash. If your deployment processes rely on "latest" tags, you are sitting on a ticking time bomb.








Continuous Blog Series:

Blog #1 : Kubernetes Design Pattern Series - An Overview

Blog #2 : K8s Design Pattern Series - Fundamental Pattern

Blog #3 : K8s Design Pattern Series - Structural Pattern

Blog #4: K8s Design Pattern Series: Behavioral Patterns

Blog #5: K8s Design Pattern Series: Higher Level Patterns

Blog #6: K8s Design Pattern Series: Summary

Blog #7: K8s Anti-Design Pattern Series

Blog #8: K8s Anti-Design Pattern Series - Putting the Configuration into the Images of the Containers

Blog #9: K8s Anti-Design Pattern Series - Connecting Applications to Kubernetes Features/Services without Justification

Blog #10: K8s Anti-Design Pattern Series - Mixing Infrastructure and Application Deployment

Blog #11: K8s Anti-Design Pattern Series - Deploying without Memory and CPU Limits

Blog #12: K8s Anti-Design Pattern Series - Understanding Health Probes In Kubernetes

Blog #13: K8s Anti-Design Pattern Series - The Pitfall of ignoring Helm in Kubernetes Package Management

Blog #14: K8s Anti-Design Pattern Series - Why Deployment Metrics matter in Kubernetes

Blog #15: K8s Anti-Design Pattern Series - To Kubernetes or not to Kubernetes weighing Pros and Cons

Blog #16: K8s Anti-Design Pattern Series - Connecting Applications to Kubernetes Features/Services

Blog #17: K8s Anti-Design Pattern Series - Manual Kubectl Edit/Patch Deployments

Blog #18: K8s Anti-Design Pattern Series - Kubernetes Deployments with Latest-Tagged Containers

Blog #19: K8s Anti-Design Pattern Series - Kubectl Debugging

Blog #20: K8s Anti-Design Pattern Series - Misunderstanding Kubernetes Network Concepts

Blog #21: K8s Anti-Design Pattern Series - Dynamic Environments in Kubernetes why Fixed Staging is an Anti-Design

Blog #22: K8s Anti-Design Pattern Series - Combining Clusters of Production and Non-Production


