Using Cluster API to Create Kubernetes Clusters on Azure
In this post, let’s look at using Cluster API (CAPI) to deploy a Kubernetes cluster on Azure. The end goal is a cluster with three control plane nodes and three worker nodes.
Cluster API brings declarative, Kubernetes-style APIs to cluster creation, configuration, and management.
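To give a rough sense of what “declarative, Kubernetes-style” means here, the cluster topology itself is expressed as Kubernetes resources. The abbreviated sketch below is illustrative only: the names (capi-azure-demo, capi-azure-demo-md-0) and the Kubernetes version are made up, and the many Azure-specific resources and fields that a full manifest needs are omitted. It simply shows how three control plane nodes and three worker nodes map onto a KubeadmControlPlane and a MachineDeployment.

```yaml
# Abbreviated sketch only -- not a complete, apply-able manifest.
# Names and version are hypothetical; Azure-specific templates are omitted.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: capi-azure-demo
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: capi-azure-demo-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AzureCluster
    name: capi-azure-demo
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: capi-azure-demo-control-plane
spec:
  replicas: 3            # three control plane nodes
  version: v1.28.0       # assumed Kubernetes version
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: capi-azure-demo-md-0
spec:
  clusterName: capi-azure-demo
  replicas: 3            # three worker nodes
```

In practice the full set of resources (AzureCluster, AzureMachineTemplate, KubeadmConfigTemplate, and so on) is typically generated with clusterctl generate cluster and applied with kubectl; the key idea is that you describe the desired cluster declaratively and the CAPI controllers reconcile Azure to match it.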
I have been part of a couple of build-outs where we built Kubernetes clusters to run our cloud workloads. These builds involved deploying AKS clusters using Terraform and Azure DevOps (AzDO). Designing the AKS infrastructure is key to ensuring that the cloud workloads running on it can be deployed, secured, and hosted effectively. In this post I am documenting the general steps involved in building out a Kubernetes infrastructure on Azure Kubernetes Service (AKS) using Terraform and deploying workloads using AzDO and Helm charts. ...
Imagine a green, sustainable city, meticulously designed for environmental harmony and efficiency. The city has many distinct localities, such as neighborhoods, districts, and even villages, each with its own identity and culture. It boasts an intricate public transport system, with buses, trams, and subways efficiently carrying citizens from its various localities to their destinations. Multiple such cities are connected into a thriving, fast-paced ecosystem, linked with the same efficient and sustainable design. Each locality in a city represents a microservice, and each city in this system represents a domain operating within the larger ecosystem, the application. In an ideal world, this system not only ensures smooth transit but also maintains each city’s eco-friendly ethos, balancing efficiency with sustainability. But how does this ecosystem manage to keep its vast and varied transport network running so smoothly and eco-consciously, avoiding traffic jams, pollution, and inefficiencies? ...
A single container provides application isolation and mobility. However, a container by itself doesn’t improve the quality of your service, for example in terms of load balancing or failover. This is where multi-container solutions come into play. But managing a handful of containers is completely different from managing production-scale containers, which may number in the hundreds or thousands. To support container management, we need an easy way of deploying and handling these containers at scale. This is where container orchestration comes into play. ...
Controllers create and manage pods, and they respond to pod state and health. Kubernetes lets you assert that resources such as pods should be in a certain desired state, with specific versions; controllers track those resources and attempt to run your software as described. There are a variety of controllers in Kubernetes, primarily ReplicaSets and Deployments. A ReplicaSet is responsible for reconciling the desired state at all times. It is used to define and manage a collection of identical pods running on different cluster nodes. A ReplicaSet defines the image used by the containers in the pod and the number of instances of the pod that will run in the cluster. These properties, and many others, are called the desired state. If some pods in the ReplicaSet crash and terminate, the system automatically recreates pods with the original configuration on healthy nodes and keeps the specified number of instances running. For example, if you specify three pods in a ReplicaSet and one fails, Kubernetes automatically schedules and runs another pod for you. As long as basic conditions are met (for example, enough memory and CPU), pods associated with a ReplicaSet are guaranteed to run. They provide fault tolerance, high availability, and self-healing capabilities. ...
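To make the desired-state idea concrete, here is a minimal illustrative ReplicaSet manifest (the names, labels, and image are hypothetical, not taken from the post). It asks for three replicas of a pod running nginx; the ReplicaSet controller keeps three matching pods running and recreates any that fail.

```yaml
# Minimal illustrative ReplicaSet -- names, labels, and image are hypothetical.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3                      # desired state: three identical pods
  selector:
    matchLabels:
      app: web
  template:                        # pod template used to (re)create pods
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25          # image used by the container in each pod
        resources:
          requests:
            cpu: 100m              # a pod is scheduled only where these fit
            memory: 128Mi
```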
Pods are an important feature of Kubernetes. A pod is the smallest unit of work that Kubernetes manages and the fundamental unit that the rest of the system is built on. Each pod contains one or more containers. Instead of deploying containers individually, you always deploy and operate on a pod of containers. The containers in a pod are always scheduled together and run on the same machine; a pod is an atomic unit. Even if a pod contains multiple containers, they all run on a single worker node; a pod never spans multiple worker nodes. All the containers in a pod share the same IP address and port space, and they communicate using localhost or standard inter-process communication. Containers in a pod can also share volumes backed by local storage on the node hosting the pod, mounted into each container that needs them. Pods provide a great solution for managing groups of closely related containers that depend on each other and need to cooperate on the same host to accomplish their purpose. Pods are considered ephemeral, throwaway entities that can be discarded and replaced at will, and any pod-local storage is destroyed with its pod. Each pod gets a unique ID (UID), so you can still distinguish between pods if necessary. ...
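As an illustration of those properties (the names, images, and paths below are made up), this sketch defines a pod with two containers that share the pod’s network namespace, so they can talk over localhost, and a shared emptyDir volume mounted into both. When the pod is deleted, the volume’s contents are discarded with it.

```yaml
# Illustrative two-container pod -- names, images, and paths are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                   # pod-scoped storage, deleted with the pod
  containers:
  - name: web
    image: nginx:1.25
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-generator
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data             # same volume, mounted at a different path
```

Both containers are scheduled onto the same worker node and share the pod’s IP address, which is exactly the co-location behaviour described above.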