
K8s Anti-Design Pattern Series - Kubernetes deployments with latest-tagged containers - Blog #18

Updated: Apr 12, 2023





Summary


This should not be shocking to anyone who has worked with containers. Tagging Docker images with the "latest" tag is a poor practice in and of itself, since "latest" is simply the literal name of the tag and not an indication of when the image was built. If no tag is specified when referring to a container image, the default is "latest". Not knowing what has been deployed in your Kubernetes cluster is bad enough; using the "latest" tag makes it even worse.


Deploying a manifest like this means losing all knowledge of which container version is actually running. Because container tags are mutable, the "latest" label does not necessarily mean anything: the image could have been built 3 minutes ago or 3 months ago. If you want to know which version is in the image, you have to either download it and examine it locally or dig through your CI system's logs. When combined with an Always pull policy, the "latest" tag becomes truly perilous. Suppose a node becomes unresponsive and Kubernetes decides it must reschedule its pods to restore health (remember, that is why you are using Kubernetes in the first place).



apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-bad-deployment
spec:
  selector:
    matchLabels:
      app: my-badly-deployed-app
  template:
    metadata:
      labels:
        app: my-badly-deployed-app
    spec:
      containers:
      - name: dont-do-this
        image: docker.io/myusername/my-app:latest



DEPLOYMENT STRATEGY


If your pull policy permits it, Kubernetes will reschedule the pod and re-pull the "latest" image from your Docker registry. If the "latest" tag has been updated in the meantime, the version in this pod will differ from the versions in the other pods. This is usually not what you want. It also goes without saying that "deploying" by manually killing pods and waiting for them to pull your "latest" image again is a recipe for trouble. Proper Kubernetes deployments should follow a well-defined tagging strategy. As long as you have a strategy, the specifics of that strategy matter less.
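For contrast, here is a sketch of the same deployment pinned to an explicit, immutable tag with an explicit pull policy. The tag v1.0.1 and the name my-good-deployment are illustrative, not taken from any real project:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-good-deployment            # illustrative name
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: docker.io/myusername/my-app:v1.0.1   # pinned, immutable tag
        imagePullPolicy: IfNotPresent               # no surprise re-pulls on reschedule
```

With a pinned tag, a rescheduled pod always runs exactly the same image as its siblings, and the running version is visible right in the manifest.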



IMPROVEMENT IDEAS


  • Using the source's Git hash as the tag (e.g., docker.io/my-username/my-app:1a2b3c4). This is simple to implement, but a Git hash is not something a layperson can read, so it may be more information than most people need.

  • Using semantic versioning for application tags (e.g., docker.io/my-username/my-app:v1.0.1). We suggest this approach because of the many benefits it offers to everyone involved, not just programmers.

  • Using a sequential identifier, such as a build number or a build date/time. This style is harder to work with, but legacy systems can adopt it easily.
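Any of the schemes above can be wired into CI in a few lines. The following is a minimal sketch, assuming a hypothetical image name (docker.io/my-username/my-app) and version (v1.0.1); the Git hash is hard-coded here where CI would compute it:

```shell
# Sketch: derive an immutable image tag in CI. All names are illustrative.
GIT_SHA="1a2b3c4"                    # in CI: GIT_SHA=$(git rev-parse --short HEAD)
VERSION="v1.0.1"                     # bumped by your release process
IMAGE="docker.io/my-username/my-app"

TAG="$IMAGE:$VERSION"
# Build once, push both the semver tag and the Git-hash tag:
# docker build -t "$TAG" -t "$IMAGE:$GIT_SHA" .
# docker push "$TAG" && docker push "$IMAGE:$GIT_SHA"
echo "$TAG"
```

Tagging the same build with both a semantic version and a Git hash gives humans a readable label while preserving an exact link back to the source commit.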





The essential point is to agree that container tags are immutable. An image tagged 2.0.5 should be built exactly once and then promoted, unchanged, from one environment to the next.


If a release uses the image tagged v2.0.5, you should be able to tell at a glance: you can fetch that image locally and rest assured it is identical to the one running in the cluster, and you can quickly discover the Git hash it was built from. If your deployment processes rely on "latest" tags, you are sitting on a ticking time bomb.
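To see which image a running Deployment actually references, you can query the cluster directly. The deployment name my-deployment below is a placeholder:

```shell
kubectl get deployment my-deployment \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```

With immutable tags, the string this prints tells you exactly which build is live; with "latest", it tells you almost nothing.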







