
K8s Anti-Design Pattern Series - Misunderstanding Kubernetes network concepts - Blog #20

Updated: Apr 12, 2023







Summary


The days when a single load balancer could handle all of your application's traffic are long gone. It is your responsibility to study and understand the fundamentals of the Kubernetes networking model. Learn at the very least what load balancers, ClusterIPs, NodePorts, and Ingresses are, and how they differ. We've seen companies go all out with an ingress controller when a basic load balancer would do, and we've seen companies create multiple load balancers when a single one would suffice, wasting money with their cloud provider.
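To make that last point concrete, here is a minimal sketch using the official Python Kubernetes client: a single Ingress that routes to two backend Services, so one cloud load balancer (the ingress controller's) can front both instead of each Service provisioning its own. The namespace, hostname, paths, and Service names (`api`, `web`) are illustrative assumptions, not values from this post.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig with access to the cluster
networking = client.NetworkingV1Api()

# One Ingress fans out to several Services, so a single external entry point
# serves both backends instead of one load balancer per Service.
ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="apps"),
    spec=client.V1IngressSpec(
        rules=[
            client.V1IngressRule(
                host="example.com",  # illustrative hostname
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/api",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="api",
                                    port=client.V1ServiceBackendPort(number=80),
                                )
                            ),
                        ),
                        client.V1HTTPIngressPath(
                            path="/",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="web",
                                    port=client.V1ServiceBackendPort(number=80),
                                )
                            ),
                        ),
                    ]
                ),
            )
        ]
    ),
)
networking.create_namespaced_ingress(namespace="default", body=ingress)
```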





Service Types

One of the most daunting challenges for newcomers to Kubernetes networking is making sense of the available Service types. You should understand the implications of each: ClusterIP Services are reachable only from inside the cluster, NodePort Services expose a port on every node (which may be reachable internally or externally, depending on your network), and LoadBalancer Services provision an external load balancer from your cloud provider.
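As a rough sketch of the difference, here is how the three Service types might be created with the official Python Kubernetes client. The namespace, Service names, label selector, and ports are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

def make_service(name: str, svc_type: str) -> client.V1Service:
    """Build a Service of the given type for pods labelled app=web."""
    return client.V1Service(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1ServiceSpec(
            selector={"app": "web"},   # pods the Service routes to
            type=svc_type,             # ClusterIP, NodePort, or LoadBalancer
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    )

# ClusterIP: reachable only from inside the cluster.
v1.create_namespaced_service("default", make_service("web-internal", "ClusterIP"))
# NodePort: opens a high port (30000-32767 by default) on every node.
v1.create_namespaced_service("default", make_service("web-nodeport", "NodePort"))
# LoadBalancer: asks the cloud provider for an external load balancer,
# which is typically billed per Service.
v1.create_namespaced_service("default", make_service("web-external", "LoadBalancer"))
```

Of the three, only the LoadBalancer type creates a billable cloud resource per Service, which is why consolidating external exposure behind a single load balancer or Ingress is often the cheaper choice.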


Of course, this only covers how traffic reaches your workloads. You should also pay attention to how communication works inside the cluster: an operational Kubernetes cluster depends on DNS, TLS certificates, and virtual services being managed correctly. You should also learn about service meshes and the problems they solve. Our position is not that every cluster needs a service mesh, but it is important to know how one operates and why you might need it.
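For example, in-cluster service discovery goes through the cluster DNS. A minimal sketch, assuming it runs inside a pod and that the illustrative `web-internal` Service from the previous sketch exists in the `default` namespace:

```python
import socket

# Inside a pod, the cluster DNS (typically CoreDNS) resolves Service names of
# the form <service>.<namespace>.svc.cluster.local to the Service's ClusterIP.
cluster_ip = socket.gethostbyname("web-internal.default.svc.cluster.local")
print(f"web-internal resolves to {cluster_ip}")

# Pods in the same namespace can also use the short name:
print(socket.gethostbyname("web-internal"))
```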


Awareness

One could reasonably argue that a developer does not need to understand these networking concepts in order to deploy an application. But there is currently no abstraction layer on top of Kubernetes that hides these details from developers, so even if you are not a network administrator, you should still be aware of how users reach your application. If a request must make 5 hops between pods, nodes, and services, and each hop adds up to 100 milliseconds of latency, users may wait up to 500 milliseconds for a page to load. Being conscious of this lets you optimize response times by focusing your changes in the right places. As a developer, you should also be familiar with how kubectl proxy works and when to use it.
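A minimal sketch of both points, assuming `kubectl proxy` is running locally on its default port (8001) and reusing the illustrative `web-internal` Service and `default` namespace from earlier; it times a single request routed through the API server proxy:

```python
import time
import requests

# `kubectl proxy` exposes the API server on 127.0.0.1:8001, and any Service is
# then reachable without external exposure at:
#   /api/v1/namespaces/<namespace>/services/<name>:<port>/proxy/
url = "http://127.0.0.1:8001/api/v1/namespaces/default/services/web-internal:80/proxy/"

start = time.perf_counter()
resp = requests.get(url, timeout=5)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"HTTP {resp.status_code} in {elapsed_ms:.0f} ms")
```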








