Summary
In this episode of the Kubernetes series, host Pavan Elthepu dives into the concept of DaemonSets in Kubernetes. DaemonSets are essential for performing node-specific tasks across an entire cluster, such as collecting logs and metrics. Unlike Deployments or ReplicaSets, DaemonSets guarantee that a pod will run on every node, automatically adjusting as nodes are added or removed from the cluster. The video provides a hands-on example of deploying a Prometheus Node Exporter using a DaemonSet and explains its practical applications in monitoring and maintaining a Kubernetes environment efficiently.
Highlights
DaemonSets are perfect for node-specific tasks like resource monitoring across an entire cluster 🖥️.
They automatically adapt to changes in the cluster, like node additions or deletions, ensuring consistent monitoring 🌟.
A practical example deploys the Prometheus Node Exporter for metrics collection on all cluster nodes 📈.
DaemonSets contrast significantly with Deployments, which are used for scaling stateless applications 🚀.
Hands-on demonstration of creating a DaemonSet for Prometheus Node Exporter in a Kubernetes cluster 🔧.
Key Takeaways
DaemonSets ensure that a pod runs on every node in the Kubernetes cluster 🌐.
They automatically handle node additions and removals without manual intervention 🤖.
DaemonSets are ideal for tasks like log collection and monitoring agents, ensuring coverage across all nodes 📊.
They contrast with Deployments and ReplicaSets by focusing on one pod per node rather than multiple replicas 🛠️.
Using DaemonSets simplifies complex tasks in dynamic environments, maintaining consistency and coverage throughout 🎯.
Overview
The video begins by addressing the limitations of Deployments and ReplicaSets in ensuring pods run on each node. Deployments require manual intervention when nodes are added, which is suboptimal for tasks that need consistent node-level execution. DaemonSets address this gap by ensuring a pod is always running on every node, adapting dynamically as the cluster changes size.
DaemonSets are especially useful for deploying essential monitoring, logging, and network agents across nodes. They ensure uniformity and facilitate critical operations like log collection and metric gathering, crucial for maintaining robust and transparent cluster management. The video explores practical scenarios, including using DaemonSets for deploying Prometheus node exporters and more.
Through a practical walkthrough, viewers learn how to implement a DaemonSet in a Kubernetes environment. The tutorial includes creating a YAML configuration for a DaemonSet, applying it, and verifying its operation across nodes. This hands-on section illustrates the operational efficiency and deployment simplicity DaemonSets bring to Kubernetes cluster management.
Chapters
00:00 - 00:30: Introduction to DaemonSets This chapter delves into DaemonSets, specifically highlighting their importance within Kubernetes. While Deployments and ReplicaSets ensure a specified number of pods are running, they do not guarantee universal node coverage in a cluster. DaemonSets address this gap by ensuring that a specific task, such as log or metric collection, runs on every node in the cluster.
00:30 - 01:00: Need for Node-Specific Tasks In this chapter, the need for node-specific tasks in a dynamically expanding Kubernetes cluster is discussed. It covers how to perform tasks like collecting logs and metrics from each node by utilizing DaemonSets. The chapter includes a complete hands-on section to provide practical experience. It also explores methods to monitor nodes for resource usage, such as checking if they are running out of memory or if the CPU usage is high.
01:00 - 01:30: Issues with Deployments and ReplicaSets The chapter discusses the challenge of ensuring monitoring agents run on every node in a dynamic cluster. It explores the option of using Deployment or ReplicaSet with appropriate node Affinity to manage this, but notes a limitation: deployments cannot automatically run pods on newly added nodes.
01:30 - 02:00: Benefits of DaemonSets DaemonSets are useful for automatically managing the deployment of pods across all nodes in a cluster. Unlike manual processes where replica counts have to be adjusted and changes reapplied when nodes are added or removed, DaemonSets handle this seamlessly: a pod is deployed onto every node in the cluster. When a new node is added, a new pod is automatically created there; conversely, when a node is removed, the associated pod is deleted.
02:00 - 02:30: Use Cases for DaemonSets DaemonSets ensure that a pod runs on each node in a Kubernetes cluster. Unlike Deployments or ReplicaSets, which can have multiple replicas per node, DaemonSets guarantee one pod per node. This is particularly useful for certain applications where ensuring one instance per node is crucial. While Deployments are typically used for stateless services, like front-ends and back-ends, where scaling and updates are important, DaemonSets are used where consistent, node-wide application presence is necessary.
02:30 - 03:00: Demonstration of DaemonSets The chapter 'Demonstration of DaemonSets' explains the use of DaemonSets in Kubernetes. It highlights that DaemonSets ensure that a copy of a pod runs on all or certain nodes to perform cluster-level tasks. Typical use cases include deploying logging agents like Fluentd or Logstash on every node, which ensures that logs are collected from all nodes and centralized for analysis.
03:00 - 04:00: Detailed Explanation of the YAML Manifest The chapter provides a detailed explanation of how YAML manifests are used within Kubernetes clusters. It highlights the role of monitoring agents, such as Prometheus and Datadog, deployed on each node to collect and centralize metrics. The use of DaemonSets for deploying network agents, such as Cilium or DevNet, is discussed to ensure network policies are enforced across all nodes. The chapter suggests moving beyond theoretical discussions to practical applications.
04:00 - 05:00: Creating and Verifying DaemonSets This chapter covers creating and verifying a DaemonSet by running the Prometheus Node Exporter in a minikube cluster. It recaps what a DaemonSet is and demonstrates the process hands-on, using the Node Exporter for metric collection on Linux nodes. The walkthrough uses VS Code to implement a simple DaemonSet, detailing each step, including how to look up the DaemonSet's API version and kind.
05:00 - 06:30: Adding and Removing Nodes The chapter 'Adding and Removing Nodes' begins with the DaemonSet manifest in Kubernetes. It shows how to look up the DaemonSet's API version, short name (ds), and kind with kubectl api-resources, and how to confirm in the VS Code editor that the API version and kind are correctly specified in the manifest file. It also covers setting a name for the DaemonSet and defining match labels under the spec for proper configuration.
06:30 - 09:30: Accessing Metrics and Conclusion The chapter discusses DaemonSet configuration for managing pod deployments on nodes. It explains how DaemonSets apply labels to their pods and use node selectors or node affinity to ensure that pods run only on specified nodes, which is particularly useful for monitoring a specific set of nodes. Container configuration is also briefly covered.
DaemonSets in Kubernetes Transcription
00:00 - 00:30 Hey guys, welcome back to the Kubernetes series. We have learned that Deployments and ReplicaSets are used to ensure that a certain number of pods are always running. These pods may run on different nodes based on the affinity that we give, but in some cases we need to perform a specific task on every node in the cluster, such as collecting logs or metrics from each node. In such cases, using Deployments or ReplicaSets does not guarantee that the pod runs on every single node, as nodes may get added to the cluster dynamically.
00:30 - 01:00 In this chapter we will learn how to perform node-specific tasks, such as collecting logs and metrics, by running a pod on all the nodes in the cluster using DaemonSets, with complete hands-on. So without any further delay, let's get started. Let's say we have a Kubernetes cluster with multiple nodes. How do we monitor these nodes to see if they are running out of memory or if the CPU is being used to the full?
01:00 - 01:30 We must run some agent on each node so that it can collect the metrics from each node and save them to some storage, so that we can monitor them. But how will we make sure these agents run on every node of the cluster, as nodes are added to the cluster dynamically? One option is using a Deployment or ReplicaSet with appropriate affinity, but when a new node is added, a Deployment cannot run the pod on the newly added node automatically.
01:30 - 02:00 We have to increase the replica count and reapply the changes, which involves manual intervention. That's where DaemonSets come into the picture. With DaemonSets we can run a pod on each node of the cluster: if a new node is added to the cluster, a new pod will be spun up on the newly added node, and if a node is removed from the cluster, the pod running on that node will be garbage collected.
02:00 - 02:30 Therefore, with a DaemonSet we make sure that a pod always runs on each node. The key difference between a DaemonSet and a Deployment is that a DaemonSet ensures there is one pod per node, whereas a Deployment or ReplicaSet can have multiple replicas per node. Generally, we use Deployments for stateless services like frontends and backends, where scaling the number of replicas up and down and rolling out updates are more important, and we use DaemonSets
02:30 - 03:00 when a copy of a pod must always run on all or certain nodes to perform cluster-level tasks. Let us see some typical use cases of DaemonSets. DaemonSets are commonly used for deploying logging agents like Fluentd or Logstash on every node in the Kubernetes cluster. This ensures that logs are collected from all the nodes and centralized in a single location for analysis. DaemonSets are also used for deploying
03:00 - 03:30 monitoring agents like Prometheus or Datadog on every node in the Kubernetes cluster. This ensures that metrics are collected from all the nodes and centralized in a single location for analysis. DaemonSets can also be used for deploying network agents like Cilium or devnet on every node in the Kubernetes cluster, which ensures that network policies are enforced on all the nodes. Enough of theory; let's see one of these use cases in action.
03:30 - 04:00 To understand DaemonSets practically, let's run the Prometheus Node Exporter on every node in our minikube cluster. If you don't know what the Prometheus Node Exporter is, it is a widely used daemon for collecting hardware and operating-system metrics from the Linux nodes in a Kubernetes cluster. Let's go to VS Code and create a simple DaemonSet. This is the simple DaemonSet; let's see what we are doing here.
04:00 - 04:30 This is the API version for the DaemonSet. We can get it with kubectl api-resources, grepping for daemonset: as you can see, this is the API version, this is the short name for the DaemonSet, and this is the kind. So let's go to VS Code and confirm that the API version and kind are correct.
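That lookup is a one-liner; the output shown in the comment below is approximate and can vary slightly across kubectl versions:

```bash
kubectl api-resources | grep daemonset
# NAME         SHORTNAMES   APIVERSION   NAMESPACED   KIND
# daemonsets   ds           apps/v1      true         DaemonSet
```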
04:30 - 05:00 This is the name of the DaemonSet that we are giving, and under the spec we are giving the match labels, which the DaemonSet uses to check whether its pods are already running on a node or not. These are the pod labels that the DaemonSet gives to the pods. Under the template spec we can give a node selector or node affinity to instruct the DaemonSet to run the pods only on specific nodes; this is very helpful if you want to monitor only a specific set of nodes. And here we are giving the containers, just like we did in the Deployment.
05:00 - 05:30 This is the Prometheus Node Exporter image, these are the arguments that we are passing, and this is the container port. Please note that we are mounting these two volumes: procfs and sysfs are two virtual file systems in Linux that allow users and programs to interact with the kernel and access various system information in a convenient way. Both of these virtual file systems are read-only, and the
05:30 - 06:00 information they provide is generated dynamically by the kernel when accessed. Because we need system-level metrics, we need to mount these two folders into our container, and as these folders are present on every node, we are giving them as hostPath volumes. Please refer to the Kubernetes volumes chapter of this series for a detailed explanation of how volumes work in Kubernetes.
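The video walks through the file on screen without printing it in full here, so the sketch below is a reconstruction: the name node-exporter, the app: node-exporter labels, the prom/node-exporter image tag, and the --path.procfs/--path.sysfs arguments are illustrative assumptions, while the port 9100 and the hostPath mounts of /proc and /sys follow the explanation above:

```yaml
# Hypothetical DaemonSet manifest for the Prometheus Node Exporter;
# names, labels, image tag, and args are illustrative, not the video's exact file.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter            # assumed name
spec:
  selector:
    matchLabels:
      app: node-exporter         # must match the pod template labels below
  template:
    metadata:
      labels:
        app: node-exporter       # labels the DaemonSet gives to its pods
    spec:
      containers:
        - name: node-exporter
          image: prom/node-exporter:v1.6.1   # commonly used image; version assumed
          args:
            - --path.procfs=/host/proc       # point the exporter at the mounted procfs
            - --path.sysfs=/host/sys         # point the exporter at the mounted sysfs
          ports:
            - containerPort: 9100            # the Node Exporter's default metrics port
          volumeMounts:
            - name: proc
              mountPath: /host/proc
              readOnly: true
            - name: sys
              mountPath: /host/sys
              readOnly: true
      volumes:
        # hostPath works here because /proc and /sys exist on every node
        - name: proc
          hostPath:
            path: /proc
        - name: sys
          hostPath:
            path: /sys
```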
06:00 - 06:30 So let's apply this manifest with kubectl apply -f daemonset.yaml. As you can see, the DaemonSet is created. We can verify that with kubectl get ds (ds is the short name for DaemonSets). So this is the DaemonSet that we created. Our expectation is that this DaemonSet should create the pods, so let's get the pods with kubectl get pods. As you can see, one pod is running. This is because we have only one node in the Kubernetes
06:30 - 07:00 cluster. We can verify that with kubectl get nodes; as you can see, we have only one node in our Kubernetes cluster. Now let's try to add another node to our cluster with minikube node add. We can verify that with kubectl get nodes: now we have two nodes in our minikube cluster. So now, as per the definition of the DaemonSet, a new pod should get automatically created on the newly added
07:00 - 07:30 node. We can verify that with kubectl get pods -o wide to see on which node they are running. As you can see, we have two pods now: the first pod is running on the first node and the second pod is running on the second node. Like this, whenever a new node is added to the cluster, a new pod will be created. Similarly, if we try to delete any node, the pod which the DaemonSet is running on that node will be
07:30 - 08:00 garbage collected. We can see that in action. Let's try to delete the node with kubectl delete node and the node name, minikube-m02, so I am deleting the second node. Now the node is deleted, so let's try to list the pods with kubectl get pods. As you can see, we now have only one pod running, because we have only one node in the cluster.
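Collected in one place, the command sequence from this demo looks like the following; minikube-m02 is the name minikube gives the second node, and the comments describe the expected state rather than literal output:

```bash
# Create the DaemonSet and check that it schedules one pod per node
kubectl apply -f daemonset.yaml
kubectl get ds                     # ds is the short name for daemonsets
kubectl get pods                   # one pod, because there is one node
kubectl get nodes

# Add a second node; the DaemonSet reacts on its own
minikube node add
kubectl get nodes                  # now two nodes
kubectl get pods -o wide           # two pods, one per node

# Remove the second node; its pod is garbage collected
kubectl delete node minikube-m02
kubectl get pods                   # back to a single pod
```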
08:00 - 08:30 Now that we have the monitoring pods running, let's see if they are giving metrics, by port-forwarding one of these pods with kubectl port-forward and the pod name. These pods are running on port 9100, so let's go to the browser and access localhost:9100/metrics. These are the metrics given by our Prometheus Node Exporter pod: this is the total memory available on the system, and this is the total free memory available on the system.
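To reproduce this check from a terminal, something like the sketch below works; the pod name is a placeholder for whatever kubectl get pods printed, and node_memory_MemTotal_bytes / node_memory_MemFree_bytes are the standard Node Exporter names for the two memory metrics mentioned here:

```bash
# Forward local port 9100 to the exporter pod (substitute your pod's name)
kubectl port-forward <node-exporter-pod-name> 9100:9100

# In another terminal (or open http://localhost:9100/metrics in a browser)
curl -s http://localhost:9100/metrics | grep node_memory_Mem
```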
08:30 - 09:00 This way, all the node exporter pods created by the DaemonSet pull all the telemetry that Linux gives by default from every node. If a new node is added, the DaemonSet runs the same pod on that new node and we get the metrics of that node as well. We can give these metrics to Prometheus to scrape, and they can be visualized in Grafana; stay tuned for a video on Prometheus and Grafana. When a Kubernetes cluster is created, the kube-proxy DaemonSet is created by default. We can verify that
09:00 - 09:30 by listing the DaemonSets in all namespaces: this is the kube-proxy DaemonSet running in the kube-system namespace. We can verify whether this DaemonSet is running its pods or not by listing the pods with kubectl get pods -n kube-system. As you can see, one kube-proxy pod is running in the kube-system namespace, which is created by the kube-proxy DaemonSet.
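The two listings just described:

```bash
kubectl get ds --all-namespaces    # shows the kube-proxy DaemonSet in kube-system
kubectl get pods -n kube-system    # one kube-proxy pod per node
```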
09:30 - 10:00 This kube-proxy is a component of the Kubernetes cluster responsible for implementing the Kubernetes Service abstraction. The kube-proxy pod runs on every node and maintains network rules on each node to allow network communication to the pods and Services running on that node. Whenever we delete a DaemonSet, all the pods managed by it are automatically deleted. You can verify that with kubectl get pods; as you can see, there are no pods running. However, if we don't want to delete the
10:00 - 10:30 pods that are managed by the DaemonSet, we can give --cascade=orphan (--cascade=false in kubectl versions before 1.20) while deleting the DaemonSet. This way, whenever we delete the DaemonSet, the pods managed by it will not be deleted.
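A minimal sketch of both deletion modes, assuming the DaemonSet is named node-exporter as in the manifest sketch above:

```bash
# Delete the DaemonSet but orphan (keep) the pods it manages;
# kubectl versions before 1.20 spelled this --cascade=false
kubectl delete ds node-exporter --cascade=orphan

# Default behavior: delete the DaemonSet and its pods together
kubectl delete ds node-exporter
```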
10:30 - 11:00 That's it for this video. I'm sure you learned about Kubernetes DaemonSets, their use cases, and how to create and delete a DaemonSet in Kubernetes. If you have any questions or comments, please leave them in the comment section below. My name is Pavan; thank you very much for watching this video. If you liked it, please share it with your friends, and do not forget to subscribe to my channel so you don't miss any updates.