Kubernetes Overview

Kubernetes background

Kubernetes is an open-source container orchestration system for automating the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes provides a platform-agnostic way to manage and schedule containerized workloads, making it easier to run and manage microservices-based applications in a distributed environment. Its key features include:

 Automatic scaling: Kubernetes can automatically scale the number of replicas of a containerized application based on demand, ensuring that the application can handle increased traffic without manual intervention.

 Self-healing: Kubernetes can automatically detect and replace failed containers, ensuring that the application remains available even in the face of failures.

 Service discovery and load balancing: Kubernetes can automatically discover and route traffic to the correct containerized service, making it easy to build and run microservices-based applications.

 Storage orchestration: Kubernetes can automatically provision, scale, and manage storage for containerized applications.

 Rolling updates: Kubernetes can automatically roll out updates to containerized applications with zero downtime, making it easy to deploy new versions of an application without impacting its availability.

In summary, Kubernetes provides a high-level abstraction for running and managing containerized applications across multiple hosts, making it easier to build, test, and deploy microservices-based applications in a distributed environment.


Kubernetes and Docker

The first thing to say about Kubernetes and Docker is that they’re complementary technologies. For example, it’s very popular to deploy Kubernetes with Docker as the container runtime. This means Kubernetes orchestrates one or more hosts that run containers, and Docker is the technology that starts, stops, and otherwise manages the containers. In this model, Docker is a lower-level technology that is orchestrated and managed by Kubernetes.

To make this all happen, we start out with our app, package it up and give it to the cluster (Kubernetes). The cluster is made up of one or more masters, and a bunch of nodes. The masters are in charge of the cluster and make all the decisions about which nodes to schedule application services on. They also monitor the cluster, implement changes, and respond to events. For this reason, we often refer to the master as the control plane. The nodes are where our application services run. They also report back to the masters and watch for changes to the work they’ve been assigned.

With Deployments, we start out with our application code and we containerize it. Then we define it as a Deployment via a YAML or JSON manifest file. This manifest file tells Kubernetes two important things: 

  • What our app should look like – what images to use, ports to expose, networks to join, how to perform updates, etc.
  • How many replicas of each part of the app to run (scale).

Then we give the file to the Kubernetes master, which takes care of deploying it on the cluster. But it doesn’t stop there. Kubernetes is constantly monitoring the Deployment to make sure it is running exactly as requested. If something isn’t as it should be, Kubernetes tries to fix it.
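As a sketch (the names, image, and port are illustrative rather than from any particular app), a Deployment manifest covering both of those points might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy          # illustrative name
spec:
  replicas: 3                 # how many replicas to run (scale)
  selector:
    matchLabels:
      app: hello
  template:                   # what the app should look like
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25     # which image to use
        ports:
        - containerPort: 80   # port to expose

We would then POST this to the API server, for example with kubectl apply -f hello-deploy.yaml.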


Docker is a platform for developing, shipping, and running containerized applications, while Kubernetes is a platform for automating the deployment, scaling, and management of containerized applications. 

Docker provides a simple way to package and distribute applications as containers, which are lightweight, portable, and self-sufficient units of software that include all the necessary dependencies to run an application.

 Kubernetes, on the other hand, provides a powerful orchestration system for managing and scheduling containerized workloads, making it easier to run and manage microservices-based applications in a distributed environment. It provides a number of features such as automatic scaling, self-healing, service discovery, load balancing, storage orchestration, and rolling updates, which Docker on its own does not provide.

 In short, Docker provides a way to package and distribute applications as containers, while Kubernetes provides a way to manage and orchestrate those containers in a production environment. While it's possible to run containerized applications using only Docker, the use of Kubernetes will make it easier to manage and scale the application in a production environment.


Masters and nodes

A Kubernetes cluster is made up of masters and nodes.

Masters (control plane) 

A Kubernetes master is a collection of small services that make up the control plane of the cluster. The simplest (and most common) setups run all the master services on a single host. However, multi-master HA is becoming more and more popular, and is a must-have for production environments. Looking further into the future, we might see the individual services that comprise the control plane split out and distributed across the cluster - a distributed control plane.

It’s also considered a good practice not to run application workloads on the master. This allows the master to concentrate entirely on looking after the state of the cluster.

The API server 

The API Server (apiserver) is the frontend into the Kubernetes control plane. It exposes a RESTful API that preferentially consumes JSON. We POST manifest files to it; these get validated, and the work they define gets deployed to the cluster. You can think of the API server as the brains of the cluster.


The cluster store

 If the API Server is the brains of the cluster, the cluster store is its memory. The config and state of the cluster gets persistently stored in the cluster store, which is the only stateful component of the cluster and is vital to its operation - no cluster store, no cluster! 

The cluster store is based on etcd, the popular distributed, consistent and watchable key-value store. As it is the single source of truth for the cluster, you should take care to protect it and provide adequate ways to recover it if things go wrong.


The controller manager 

The controller manager (kube-controller-manager) is currently a bit of a monolith - it implements a few features and functions that’ll probably get split out and made pluggable in the future. These include things like the node controller, endpoints controller, and namespace controller. They tend to sit in loops and watch for changes – the aim of the game is to make sure the current state of the cluster matches the desired state (more on this shortly).


The scheduler

The scheduler (kube-scheduler) watches for new workloads/Pods and assigns them to nodes. Behind the scenes, it does a lot of related tasks such as evaluating affinity and anti-affinity, constraints, and resource management.





The Kubernetes control plane controls the entire cluster. A cluster must have at least one master node; there may be two or more for redundancy. Components of the master node include the API Server, etcd (a database holding the cluster state), the Controller Manager, and the Scheduler.



Nodes

A Kubernetes node is a machine that runs containerized workloads as part of a Kubernetes cluster. A node can be a physical machine or a virtual machine, and can be hosted on-premises or in the cloud.

A node serves as an abstraction over a single machine in the cluster. Instead of managing specific physical or virtual machines, you can treat each node as pooled CPU and RAM resources on which you can run containerized workloads. When an application is deployed to the cluster, Kubernetes distributes the work across the nodes. Workloads can be moved seamlessly between nodes in the cluster.


The nodes are a bit simpler than masters. The only things that we care about are the kubelet, the container runtime, and the kube-proxy.



Kubelet

 First and foremost is the kubelet. This is the main Kubernetes agent that runs on all cluster nodes. In fact, it’s fair to say that the kubelet is the node. You install the kubelet on a Linux host and it registers the host with the cluster as a node. It then watches the API server for new work assignments. Any time it sees one, it carries out the task and maintains a reporting channel back to the master.


If the kubelet can’t run a particular work task, it reports back to the master and lets the control plane decide what actions to take. For example, if a Pod fails on a node, the kubelet is not responsible for restarting it or finding another node to run it on. It simply reports back to the master. The master then decides what to do.


Container runtime

 The Kubelet needs to work with a container runtime to do all the container management stuff – things like pulling images and starting and stopping containers. More often than not, the container runtime that Kubernetes uses is Docker. In the case of Docker, Kubernetes talks natively to the Docker Remote API.


Kube-proxy 

The last piece of the puzzle is the kube-proxy. This is like the network brains of the node. It implements the cluster’s Service networking on the node, making sure that traffic addressed to a Service reaches the right Pods, and it does lightweight load-balancing on the node.

In Kubernetes, the two concepts work like this: 

1. We declare the desired state of our application (microservice) in a manifest file.
2. We POST it to the API server.
3. Kubernetes stores this in the cluster store as the application’s desired state.
4. Kubernetes deploys the application on the cluster.
5. Kubernetes implements watch loops to make sure the cluster doesn’t vary from desired state.

Manifest files are either YAML or JSON, and they tell Kubernetes how we want our application to look. We call this the desired state. It includes things like which image to use, how many replicas to have, which network to operate on, and how to perform updates.

The most common way of doing this is with the kubectl command. This sends the manifest to port 443 on the master.

Kubernetes inspects the manifest, identifies which controller to send it to (e.g. the Deployments controller) and records the config in the cluster store as part of the cluster’s overall desired state. Once this is done, the workload gets issued to nodes in the cluster. This includes the hard work of pulling images, starting containers, and building networks.

Finally, Kubernetes sets up background reconciliation loops that constantly monitor the state of the cluster. If the current state of the cluster varies from the desired state, Kubernetes will try to rectify it.

Assume we have an app with a desired state that includes 10 replicas of a web frontend Pod. If a node that was running two replicas dies, the current state will be reduced to 8 replicas, but the desired state will still be 10. This will be picked up by a reconciliation loop and Kubernetes will schedule two new replicas on other nodes in the cluster.

The same thing will happen if we intentionally scale the desired number of replicas up or down. We could even change the image we want the web frontend to use. For example, if the app is currently using the v2.00 image, and we update the desired state to use the v2.01 image, Kubernetes will go through the process of updating all replicas so that they are using the new image.


Pods


A Kubernetes pod is the smallest unit of management in a Kubernetes cluster. A pod includes one or more containers, and operators can attach additional resources to a pod, such as storage volumes. Pods are stateless by design, meaning they are dispensable and replaced by an identical unit if one fails. A pod has its own IP, allowing pods to communicate with other pods on the same node or other nodes.

It’s true that Kubernetes runs containerized apps. But those containers always run inside of Pods! You cannot run a container directly on a Kubernetes cluster. However, it’s a bit more complicated than that. The simplest model is to run a single container inside of a Pod, but there are advanced use-cases where you can run multiple containers inside of a single Pod.

To deploy a Pod to a Kubernetes cluster we define it in a manifest file and then POST that manifest file to the API server. The master examines it, records it in the cluster store, and the scheduler deploys the Pod to a healthy node with enough available resources. Whether or not the Pod has one or more containers is defined in the manifest file.
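A minimal Pod manifest, with illustrative names and image, might look like this:

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello            # labels let Services and ReplicaSets find this Pod later
spec:
  containers:
  - name: web             # a single-container Pod; more containers could be listed here
    image: nginx:1.25
    ports:
    - containerPort: 80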




Pod anatomy


A Pod is a ring-fenced environment for running containers. The Pod itself doesn’t actually run anything; it’s just a sandbox to run containers in. Keeping it high level, you ring-fence an area of the host OS, build a network stack, create a bunch of kernel namespaces, and run one or more containers in it - that’s a Pod. If you’re running multiple containers in a Pod, they all share the same environment - things like the IPC namespace, shared memory, volumes, network stack, etc. As an example, this means that all containers in the same Pod will share the same IP address (the Pod’s IP).


If those containers need to talk to each other (container-to-container within the Pod) they can use the Pod’s localhost interface.


This means that multi-container Pods are ideal when you have requirements for tightly coupled containers – maybe they need to share memory and storage etc.


Pods are also the minimum unit of scaling in Kubernetes. If you need to scale your app, you do so by adding or removing Pods. You do not scale by adding more of the same containers to an existing Pod! Multi-container Pods are for two complementary containers that need to be intimate - they are not for scaling.

Each Pod creates its own network namespace - a single IP address, a single range of ports, and a single routing table. This is true even if the Pod is a multi-container Pod - each container in a Pod shares the Pod’s IP, range of ports, and routing table.


This Pod networking model makes inter-Pod communication really simple. Every Pod in the cluster has its own IP address that’s fully routable on the Pod overlay network.




Pod lifecycle 


Pods are mortal. They’re born, they live, and they die. If they die unexpectedly, we don’t bother trying to bring them back to life! Instead, Kubernetes starts another one in its place – but it’s not the same Pod, it’s a shiny new one that just happens to look, smell, and feel exactly like the one that just died.

When a Pod fails, a totally new one (with a new ID and IP address) can pop up somewhere else in the cluster and take its place.

You define it in a YAML or JSON manifest file. Then you throw that manifest at the API server and the Pod it defines gets scheduled to a healthy node. Once it’s scheduled to a node, it goes into the pending state while the node downloads images and fires up any containers. The Pod remains in this pending state until all of its resources are up and ready. Once everything’s up and ready, the Pod enters the running state. Once it’s completed its task in life, it gets terminated and enters the succeeded state.
When a Pod can’t start, it can remain in the pending state or go to the failed state.




Deploying Pods via ReplicaSets 


A ReplicaSet is a higher-level Kubernetes object that wraps around a Pod and adds features. As the name suggests, they take a Pod template and deploy a desired number of replicas of it. They also instantiate a background reconciliation loop that checks to make sure the right number of replicas are always running – desired state vs actual state.
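As a sketch (names and image are illustrative), a ReplicaSet that keeps three copies of a Pod template running might look like this:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hello-rs
spec:
  replicas: 3                # desired state: always three Pods
  selector:
    matchLabels:
      app: hello             # the reconciliation loop counts Pods with this label
  template:                  # the Pod template to stamp replicas out from
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: web
        image: nginx:1.25

In practice you rarely create ReplicaSets directly - you create Deployments, which create and manage ReplicaSets for you.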


In Kubernetes, a pod is the smallest deployable unit and represents one or more containers that are deployed together on the same host. The containers within a pod share the same network namespace and can communicate with each other using localhost. 

There are two main types of containers that can be run within a pod:

 Single container pods: These pods run a single container. This is the most common type of pod and is used to run simple applications that do not require multiple containers. 

Multi-container pods: These pods run multiple containers that are deployed together. These pods are used to run applications that have multiple components or need to share resources such as volumes or ports. 

Additionally, there are some special types of containers that are used in specific scenarios: 

Init containers: These containers run before the other containers in a pod and are used to perform specific tasks such as configuring the pod's environment or waiting for external services to be available.

Sidecar containers: These containers run alongside the other containers in a pod and are used to provide additional functionality such as logging, monitoring, or service discovery.

Ephemeral containers: These are short-lived containers that run for a specific task and then exit. They are used to perform one-off tasks such as database migrations or running a command.

Job containers: These containers run to completion and are used for batch processing workloads.

In summary, pods in Kubernetes can contain one or more containers, and there are different types of containers that can be run within a pod, each serving a specific purpose.
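A sketch of a multi-container Pod combining some of these ideas (the images, service name, and commands are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  initContainers:
  - name: wait-for-db                 # init container: runs to completion before the main containers start
    image: busybox:1.36
    command: ['sh', '-c', 'until nc -z db-service 5432; do sleep 2; done']
  containers:
  - name: web                         # main application container
    image: nginx:1.25
  - name: log-shipper                 # sidecar container sharing the Pod's network and volumes
    image: busybox:1.36
    command: ['sh', '-c', 'tail -f /dev/null']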


In Kubernetes, pods can communicate with resources outside of the cluster using bridge and tunneling techniques. 

Bridge: A bridge is a software or hardware component that connects two separate networks together, allowing them to communicate with each other. In the context of Kubernetes, a bridge can be used to connect the cluster's virtual network to an external network, such as a physical network or a VPN. This allows pods within the cluster to communicate with resources outside of the cluster, such as databases or other services.

Tunneling: Tunneling is a technique for encapsulating one type of network protocol within another. In the context of Kubernetes, tunneling can be used to establish a secure connection between the cluster and an external network, allowing pods within the cluster to communicate with resources outside of the cluster over an encrypted connection. Common tunneling protocols used in Kubernetes include VPN and IPsec. Both bridge and tunneling techniques allow pods to access resources outside of the cluster, but they have different use cases and trade-offs. Bridges are generally simpler to set up and manage, but they may not provide the same level of security as tunneling. Tunneling can provide a secure connection to external resources, but it may be more complex to set up and manage and may have higher overhead. 

In summary, Bridge and Tunneling are two different techniques that can be used to enable pod communication with resources outside of the cluster, each with their own advantages and disadvantages.

Services

We’ve just learned that Pods are mortal and can die. If they are deployed via ReplicaSets or Deployments, when they fail, they get replaced with new Pods somewhere else in the cluster - these Pods have totally different IPs! This also happens when we scale an app - the new Pods all arrive with their own new IPs. It also happens when performing rolling updates - the process of replacing old Pods with new Pods results in a lot of IP churn.

The moral of this story is that we can’t rely on Pod IPs. But this is a problem. Assume we’ve got a microservice app with a persistent storage backend that other parts of the app use to store and retrieve data. How will this work if we can’t rely on the IP addresses of the backend Pods?

This is where Services come into play. Services provide a reliable networking endpoint for a set of Pods.





A Service is a fully-fledged object in the Kubernetes API just like Pods, ReplicaSets, and Deployments. Services provide stable DNS names and IP addresses, and support TCP and UDP (TCP by default). They also perform simple randomized load-balancing across Pods, though more advanced load balancing algorithms may be supported in the future. This adds up to a situation where Pods can come and go, and the Service automatically updates and continues to provide that stable networking endpoint.

The same applies if we scale the number of Pods - all the new Pods, with the new IPs, get seamlessly added to the Service and load-balancing keeps working.




Connecting Pods to Services 

The way that a Service knows which Pods to load-balance across is via labels. These Pods are loosely associated with the service because they share the same labels.



For a Service to match a set of Pods, and therefore provide stable networking and load-balancing, it only needs to match some of the Pods’ labels. However, for a Pod to match a Service, the Pod must match all of the values in the Service’s label selector.


Consider an example where the Service does not match any of the Pods: the Service is selecting on two labels, but the Pods only have one of them.


Now consider an example that does work. It works because the Service is selecting on two labels and the Pods have both. It doesn’t matter that the Pods have additional labels that the Service is not selecting on. The Service selector is looking for Pods with two labels; it finds them and ignores the fact that the Pods have additional labels - all that matters is that the Pods have the labels the Service is looking for.
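A sketch of this matching behaviour (labels, names, and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:            # the Service matches Pods carrying BOTH of these labels
    app: hello
    env: prod
  ports:
  - port: 8080         # port the Service listens on
    targetPort: 80     # port the matching Pods listen on
---
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
    env: prod
    version: v2        # extra label, ignored by the selector
spec:
  containers:
  - name: web
    image: nginx:1.25

If the Pod only carried app: hello, it would not be matched; with both app: hello and env: prod it is, regardless of the extra version label.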





One final thing about Services. They only send traffic to healthy Pods. This means if a Pod is failing health-checks, it will not receive traffic from the Service. Services bring stable IP addresses and DNS names to the unstable world of Pods!

Each Service that is created automatically gets an associated Endpoint object. This Endpoint object is a dynamic list of all of the Pods that match the Service’s label selector. Kubernetes is constantly evaluating the Service’s label selector against the current list of Pods in the cluster. Any new Pods that match the selector get added to the Endpoint object, and any Pods that disappear get removed. This ensures the Service is kept up-to-date as Pods come and go.

Common Service types

The four common ServiceTypes include:




ClusterIP: This is the default option, and gives the Service a stable IP address internally within the cluster. It will not make the Service available outside of the cluster; it gives you a service inside your cluster that other apps inside your cluster can access.




NodePort: This builds on top of ClusterIP and adds a cluster-wide TCP or UDP port, exposing the Service on each node’s IP at a static port and making it available outside of the cluster. A NodePort Service is the most primitive way to get external traffic directly to your Service: as the name implies, it opens a specific port on all the nodes (the VMs), and any traffic that is sent to this port is forwarded to the Service.




LoadBalancer: This builds on top of NodePort and integrates with cloud native load-balancers.




ExternalName: Maps a Service to a predefined externalName field by returning a CNAME record for it. It acts as an alias for an external service, allowing the Service to act as a proxy for something running outside the cluster.
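As a sketch (names and port numbers are illustrative), a NodePort Service looks like a ClusterIP Service with a type and node port added:

apiVersion: v1
kind: Service
metadata:
  name: hello-nodeport
spec:
  type: NodePort         # ClusterIP is used when type is omitted
  selector:
    app: hello
  ports:
  - port: 8080           # ClusterIP port inside the cluster
    targetPort: 80       # container port on the Pods
    nodePort: 30080      # static port opened on every node (default range 30000-32767)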

Deployments

 Deployments build on top of ReplicaSets, add a powerful update model, and make versioned rollbacks simple. As a result, they are considered the future of Kubernetes application management.

They build on top of Pods and ReplicaSets by adding a ton of cool stuff like versioning, rolling updates, concurrent releases, and simple rollbacks.


Docker is the container runtime, kubeadm is the tool we’ll use to build the cluster, kubelet is the Kubernetes node agent, kubectl is the standard Kubernetes client, and CNI (Container Network Interface) installs support for CNI networking.

Deployments manage ReplicaSets, and ReplicaSets manage Pods.


Pods and cgroups 


Control Groups (cgroups) are what stop individual containers from consuming all of the available CPU, RAM and IOPS on a node. I suppose we could say that they “police” resource usage. Individual containers have their own cgroup limits.

This means it’s possible for two containers in a single Pod to have their own sets of cgroup limits. This is a powerful and flexible model. For example, in a Pod running a web service container alongside a file-sync container, we could set a cgroup limit on the file-sync container so that it has access to fewer resources than the web service container and is unable to starve the web service container of CPU and memory.
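A sketch of that idea, with illustrative names, images, and resource figures:

apiVersion: v1
kind: Pod
metadata:
  name: web-and-sync
spec:
  containers:
  - name: web-service
    image: nginx:1.25
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: "1"            # the web service gets the larger share
        memory: 512Mi
  - name: file-sync
    image: busybox:1.36
    command: ['sh', '-c', 'sleep 3600']
    resources:
      limits:
        cpu: 250m           # the sync container is capped so it can't starve the web service
        memory: 128Mi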



Labels and Selectors


In Kubernetes, labels and selectors are used to organize and select resources within the cluster.

Labels: Labels are key-value pairs that can be added to Kubernetes resources such as pods, services, and replication controllers. They can be used to organize resources, for example, by environment (production, staging, etc.) or by application (frontend, backend, etc.). Labels are intended to be used to organize and group resources together; by themselves they do not change how Kubernetes handles a resource.

Selectors: Selectors are used to select a set of resources based on the labels that have been assigned to them. Selectors can be used to filter resources based on their labels. For example, a selector can be used to select all pods that have a label of "environment=production". Selectors can be used in combination with Kubernetes objects such as Services and Replication Controllers to control which pods are selected by those objects. Both labels and selectors are used in combination to organize resources and select a set of resources for a specific purpose. Labels are used to add metadata to resources, and selectors are used to filter resources based on that metadata.

 In summary, Labels and Selectors are two important concepts in Kubernetes that are used to organize and select resources. Labels are used to add metadata to resources, and selectors are used to filter resources based on that metadata.

Annotations

In Kubernetes, annotations are additional metadata that can be added to resources such as pods, services, and replication controllers. They can be used to store additional information that is not used by Kubernetes itself, but rather by external tools and systems. 

Annotations can be used for:

 Storing information that is used by external tools and systems such as monitoring, logging, and security systems. 

Storing information that is used by developers and operators, for example, to store the name of the developer who deployed a specific version of a service or to store notes about the service's configuration. 

Storing information that is used by custom controllers and operators to make decisions about how to handle specific resources.

Annotations are stored as key-value pairs, and the keys and values can be any string. The total size of all annotations on an object is limited to 256KB.

 In summary, Kubernetes annotations are a way to store additional metadata for resources. They are used to store information that is not used by Kubernetes itself but by external tools and systems, developers, and operators. They can also be used by custom controllers and operators.

Note: annotations can't be used to filter and select resources the way labels can.

In summary, Labels are used to organize and group resources together, selectors are used to filter and select resources based on their labels, while annotations are used to store additional information that can be used by external tools, systems, developers and operators.
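A small sketch showing labels and annotations side by side on the same Pod (all names and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod
  labels:                                          # selectable: used to group and filter
    app: frontend
    environment: production
  annotations:                                     # not selectable: free-form metadata for tools and humans
    deployed-by: "alice"
    example.com/build-commit: "a1b2c3d"
spec:
  containers:
  - name: web
    image: nginx:1.25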

DaemonSets


ReplicaSets are generally about creating a service (e.g., a web server) with multiple replicas for redundancy. But that is not the only reason you may want to replicate a set of Pods within a cluster. Another reason to replicate a set of Pods is to schedule a single Pod on every node within the cluster. Generally, the motivation for replicating a Pod to every node is to land some sort of agent or daemon on each node, and the Kubernetes object for achieving this is the DaemonSet.


A DaemonSet ensures a copy of a Pod is running across a set of nodes in a Kubernetes cluster. DaemonSets are used to deploy system daemons such as log collectors and monitoring agents, which typically must run on every node. DaemonSets share similar functionality with ReplicaSets; both create Pods that are expected to be long-running services and ensure that the desired state and the observed state of the cluster match.

  • When a new node is added to a Kubernetes cluster, a new pod will be added to that newly attached node. 
  • When a node is removed, the DaemonSet controller ensures that the pod associated with that node is garbage collected. Deleting a DaemonSet will clean up all the pods that DaemonSet has created.
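A sketch of a DaemonSet that lands a log-collection agent on every node (the image and paths are illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: agent
        image: fluentd:v1.16          # illustrative log-collection agent
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log              # read the node's own log files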

Jobs


So far we have focused on long-running processes such as databases and web applications. These types of workloads run until either they are upgraded or the service is no longer needed. While long-running processes make up the large majority of workloads that run on a Kubernetes cluster, there is often a need to run short-lived, one-off tasks. The Job object is made for handling these types of tasks.

A Job creates Pods that run until successful termination (i.e., exit with 0). In contrast, a regular Pod will continually restart regardless of its exit code. Jobs are useful for things you only want to do once, such as database migrations or batch jobs. If run as a regular Pod, your database migration task would run in a loop, continually repopulating the database after every exit.

The Job object is responsible for creating and managing pods defined in a template in the Job specification. These pods generally run until successful completion. The Job object coordinates running a number of pods in parallel. If the Pod fails before a successful termination, the Job controller will create a new Pod based on the Pod template in the Job specification. Given that Pods have to be scheduled, there is a chance that your Job will not execute if the required resources are not found by the scheduler. Also, due to the nature of distributed systems there is a small chance, during certain failure scenarios, that duplicate pods will be created for a specific task.

Jobs are designed to manage batch-like workloads where work items are processed by one or more Pods. By default each Job runs a single Pod once until successful termination. This Job pattern is defined by two primary attributes of a Job, namely the number of Job completions and the number of Pods to run in parallel. In the case of the “run once until completion” pattern, the completions and parallelism parameters are set to 1.





One Shot 

One-shot Jobs provide a way to run a single Pod once until successful termination. Once a Job is up and running, the Pod backing the Job must be monitored for successful termination. A Job can fail for any number of reasons including an application error, an uncaught exception during runtime, or a node failure before the Job has a chance to complete. In all cases the Job controller is responsible for recreating the Pod until a successful termination occurs.

Parallelism 

Generating keys can be slow. Let’s start a bunch of workers together to make key generation faster. We’re going to use a combination of the completions and parallelism parameters. Our goal is to generate 100 keys by having 10 runs of kuard with each run generating 10 keys. But we don’t want to swamp our cluster, so we’ll limit ourselves to only five pods at a time.
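A sketch of such a Job, assuming the kuard demo image and its key-generation flags:

apiVersion: batch/v1
kind: Job
metadata:
  name: keygen
spec:
  completions: 10          # run the Pod template to successful completion ten times in total
  parallelism: 5           # but never more than five Pods at once
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: kuard
        image: gcr.io/kuar-demo/kuard-amd64:blue
        args:
        - "--keygen-enable"
        - "--keygen-exit-on-complete"
        - "--keygen-num-to-gen=10"   # each run generates ten keys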

Work Queues

 A common use case for Jobs is to process work from a work queue. In this scenario, some task creates a number of work items and publishes them to a work queue. A worker Job can be run to process each work item until the work queue is empty.

ConfigMaps and Secrets

It is a good practice to make container images as reusable as possible. The same image should be able to be used for development, staging, and production. It is even better if the same image is general purpose enough to be used across applications and services. Testing and versioning get riskier and more complicated if images need to be recreated for each new environment. But then how do we specialize the use of that image at runtime?

This is where ConfigMaps and secrets come into play. ConfigMaps are used to provide configuration information for workloads. This can either be fine-grained information (a short string) or a composite value in the form of a file. Secrets are similar to ConfigMaps but focused on making sensitive information available to the workload. They can be used for things like credentials or TLS certificates.

ConfigMaps

ConfigMaps provide a way to store configuration information and provide it to containers. The key thing is that the ConfigMap is combined with the Pod right before it is run. This means that the container image and the pod definition itself can be reused across many apps by just changing the ConfigMap that is used.

There are three main ways to use a ConfigMap: 

Filesystem

 You can mount a ConfigMap into a Pod. A file is created for each entry based on the key name. The contents of that file are set to the value.

 Environment variable

 A ConfigMap can be used to dynamically set the value of an environment variable.

 Command-line argument 

Kubernetes supports dynamically creating the command line for a container based on ConfigMap values. 
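A sketch tying these approaches to a concrete ConfigMap (all names, keys, and values are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  log-level: "debug"                  # fine-grained value, consumed as an env var below
  app.properties: |                   # composite value, mounted as a file below
    greeting=hello
    retries=3
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ['sh', '-c', 'echo "$LOG_LEVEL"; cat /config/app.properties; sleep 3600']
    env:
    - name: LOG_LEVEL                 # environment variable set from the ConfigMap
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: log-level
    volumeMounts:
    - name: config-volume
      mountPath: /config              # each ConfigMap key becomes a file here
  volumes:
  - name: config-volume
    configMap:
      name: app-config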

Secrets

While ConfigMaps are great for most configuration data, there is certain data that is extra-sensitive. This can include passwords, security tokens, or other types of private keys. Collectively, we call this type of data “secrets.” Kubernetes has native support for storing and handling this data with care.

Secrets enable container images to be created without bundling sensitive data. This allows containers to remain portable across environments. Secrets are exposed to pods via explicit declaration in pod manifests and the Kubernetes API. In this way the Kubernetes secrets API provides an application-centric mechanism for exposing sensitive configuration information to applications in a way that’s easy to audit and leverages native OS isolation primitives.

Secrets are created using the Kubernetes API or the kubectl command-line tool. Secrets hold one or more data elements as a collection of key/value pairs.

Secret data can be exposed to pods using the secrets volume type. Secrets volumes are managed by the kubelet and are created at pod creation time. Secrets are stored on tmpfs volumes (aka RAM disks) and, as such, are not written to disk on nodes. Each data element of a secret is stored in a separate file under the target mount point specified in the volume mount.
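As a sketch (the credentials and names are made up), a Secret and a Pod consuming it via a secrets volume might look like this:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                      # stringData lets you avoid hand-encoding base64
  username: admin
  password: s3cr3t
---
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
  - name: client
    image: busybox:1.36
    command: ['sh', '-c', 'cat /etc/creds/username; sleep 3600']
    volumeMounts:
    - name: creds
      mountPath: /etc/creds      # one file per key, backed by tmpfs on the node
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-credentials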


Namespaces

In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced objects (e.g. Deployments, Services, etc.) and not for cluster-wide objects (e.g. StorageClass, Nodes, PersistentVolumes, etc.).

Namespaces are a way to organize clusters into virtual sub-clusters.

Kubernetes starts with four initial namespaces:

default
Kubernetes includes this namespace so that you can start using your new cluster without first creating a namespace.
kube-node-lease
This namespace holds Lease objects associated with each node. Node leases allow the kubelet to send heartbeats so that the control plane can detect node failure.
kube-public
This namespace is readable by all clients (including those not authenticated). This namespace is mostly reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.
kube-system
The namespace for objects created by the Kubernetes system.

Namespaces and DNS

When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>, it will resolve to the service which is local to a namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and Production. If you want to reach across namespaces, you need to use the fully qualified domain name (FQDN).

 Volumes

A Volume references a storage location. It must have a unique name. It is attached to a Pod and may or may not be tied to the Pod's lifetime (depending on the Volume type). A Volume Mount references a Volume by name and defines a mountPath. A Volume can be used to hold data and state for Pods and containers.

Volume Types

  • emptyDir - An empty directory for storing "transient" data (shares a Pod's lifetime); useful for sharing files between containers running in a Pod. Data is lost if the Pod goes down.
  • hostPath - Mounts a file or directory from the node's filesystem into the Pod. Data is lost if the node goes down.
  • nfs - An NFS (Network File System) share mounted into the Pod.
  • configMap/secret - Special types of volumes that provide a Pod with access to Kubernetes resources
  • persistentVolumeClaim - Provides Pods with a more persistent storage option that is abstracted from the details of the underlying storage.
  • Cloud - Cluster-wide storage.
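A sketch of the emptyDir case, with two containers in one Pod sharing a transient directory (names and commands are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: shared-data-pod
spec:
  containers:
  - name: writer
    image: busybox:1.36
    command: ['sh', '-c', 'while true; do date >> /data/out.txt; sleep 5; done']
    volumeMounts:
    - name: scratch
      mountPath: /data
  - name: reader
    image: busybox:1.36
    command: ['sh', '-c', 'touch /data/out.txt; tail -f /data/out.txt']
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}              # transient storage that shares the Pod's lifetime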









Persistent Volumes

Persistent volume (PV) is a piece of storage provided by an administrator in a Kubernetes cluster with a lifecycle independent from a Pod. When a developer needs persistent storage for an application in the cluster, they request that storage by creating a persistent volume claim (PVC) and then mounting the volume to a path in the pod.




Once that is done, the Pod claims any volume that matches its requirements (such as size, access mode, and so on). An administrator can create multiple PVs with different capacities and configurations. It is up to the developer to provide a PVC for storage, and then Kubernetes matches a suitable PV with the PVC. If there is no PV to match the PVC, the StorageClass dynamically creates a PV and binds it to the PVC. The value specified in the mountOptions field will be used when creating dynamic PVs.
A StorageClass (SC) is a type of storage template that can be used to dynamically provision storage.
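A sketch of the claim-and-mount flow (the StorageClass name and sizes are illustrative and depend on the cluster):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard       # if no existing PV matches, this class can provision one dynamically
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: data-pod
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: data
      mountPath: /data             # the claimed storage appears here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc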

Readiness, liveness, and startup probes

Kubernetes uses liveness, readiness, and startup probes to monitor the availability of your applications. Each probe serves a different purpose:

A liveness probe monitors the availability of an application while it is running. If a liveness probe fails, Kubernetes will restart your pod. This could be useful to catch deadlocks, infinite loops, or just a "stuck" application. 

A readiness probe monitors when your application becomes available. If a readiness probe fails, Kubernetes will not send any traffic to the unready pods. This is useful if your application has to go through some configuration before it becomes available, or if your application has become overloaded but is recovering from the additional load. By having a readiness probe fail, your application will temporarily not get any more traffic, giving it the ability to recover from the increased load.

A startup probe tells Kubernetes when a container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don't interfere with the application's startup. This can be used to adopt liveness checks on slow-starting containers, avoiding them getting killed by the kubelet before they are up and running.
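A sketch of all three probes on a single container (the paths, port, and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
    startupProbe:                  # liveness and readiness are held off until this succeeds
      httpGet:
        path: /
        port: 80
      failureThreshold: 30
      periodSeconds: 5
    livenessProbe:                 # restart the container if this starts failing
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
    readinessProbe:                # stop sending traffic while this is failing
      httpGet:
        path: /
        port: 80
      periodSeconds: 5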



What is Ingress?

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

Here is a simple example where an Ingress sends all its traffic to one Service:

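A minimal sketch of such an Ingress (the Service name and port are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  defaultBackend:              # every request handled by this Ingress goes to this one Service
    service:
      name: hello-svc
      port:
        number: 8080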

An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.

An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.

Prerequisites

You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.

You may need to deploy an Ingress controller such as ingress-nginx. You can choose from a number of Ingress controllers.

Ideally, all Ingress controllers should fit the reference specification. In reality, the various Ingress controllers operate slightly differently.



In Kubernetes, both ingress and load balancer services are used to route incoming traffic to different services within a cluster, but they serve slightly different purposes:


Load balancer service: A load balancer service is a Kubernetes object that creates a load balancer in the underlying infrastructure (e.g., cloud provider) and directs incoming traffic to a specific service within the cluster. A load balancer service is typically used to distribute traffic to a service that is running on multiple replicas, to improve the performance and availability of the service.

Ingress: An ingress is a Kubernetes object that allows you to route incoming traffic to different services within a cluster based on the URL path or hostname. It allows you to expose multiple services under a single IP address and domain name. Ingress provides more flexibility and control over how traffic is directed to different services, and can be used for advanced routing, improved security and performance.


 In summary, Load balancer service is a Kubernetes object that creates a load balancer and directs traffic to a specific service, while ingress is a Kubernetes object that allows you to route incoming traffic to different services based on the URL path or hostname, and expose multiple services under a single IP address and domain name.

Performing a Rolling Update

Users expect applications to be available all the time, and developers are expected to deploy new versions of them several times a day. In Kubernetes this is done with rolling updates. Rolling updates allow a Deployment's update to take place with zero downtime by incrementally replacing Pod instances with new ones. The new Pods are scheduled on Nodes with available resources.

Rolling updates allow the following actions:

  • Promote an application from one environment to another (via container image updates)
  • Rollback to previous versions
  • Continuous Integration and Continuous Delivery of applications with zero downtime
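As a sketch, the Deployment used in the rollback examples below might declare its rolling-update behaviour like this (the replica count and surge settings are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # at most one Pod below the desired count during the update
      maxSurge: 1              # at most one extra Pod above the desired count
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25

Changing the Pod template - for example with kubectl set image deployment/nginx-deployment nginx=nginx:1.26 - is what triggers the rolling update.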

To rollback a rolling update in Kubernetes, you can use the kubectl rollout undo command with the name of your deployment. For example, if your deployment is called nginx-deployment, you can run:

kubectl rollout undo deployment/nginx-deployment



You can also specify a revision number to rollback to a specific version of your deployment. For example, if you want to rollback to revision 2, you can run:

kubectl rollout undo deployment/nginx-deployment --to-revision=2

























