Mastering Kubernetes Logs: ELK & Filebeat

Kubernetes Logging using ELK Stack and Filebeat | Setup ELK Stack and Filebeat for Kubernetes logs


    Summary

    This comprehensive guide by DevOps Hint walks you through setting up the ELK stack and Filebeat for logging in Kubernetes environments. ELK, comprising Elasticsearch, Logstash, and Kibana, is a renowned solution for collecting, analyzing, and visualizing log data. When used with Filebeat, it forms a robust log management system. The video covers prerequisites like having an AWS account, an Ubuntu EC2 instance, and Kubernetes tools (Minikube, kubectl, Helm). It provides step-by-step instructions to install the necessary dependencies, set up the infrastructure, and deploy ELK and Filebeat on a Kubernetes cluster. The setup lets you monitor and troubleshoot Kubernetes applications efficiently, ensuring seamless operation of your clusters.

      Highlights

      • Setting up ELK and Filebeat for Kubernetes is a game changer! 🤯
      • Make sure your AWS and Ubuntu instances are ready to go. 🖥️
      • Docker installation is crucial; don't miss this step! 🐳
      • Verify Minikube and kubectl installations before proceeding. 🔍
      • Helm is your friend for package management in Kubernetes. 🤝

      Key Takeaways

      • ELK stack and Filebeat form a powerful combo for Kubernetes logging. 🚀
      • Ensure you have the essential AWS and Kubernetes setup before starting. ☁️
      • Follow step-by-step installation to avoid hiccups. 🔧
      • Use Helm for managing Kubernetes packages efficiently. 📦
      • Access Kibana to visualize log data and enhance monitoring. 📊

      Overview

      Dive into the detailed process of setting up the ELK stack and Filebeat to enhance your Kubernetes logging capabilities. The ELK stack—comprising Elasticsearch, Logstash, and Kibana—is highly effective for log collection, analysis, and visualization. Coupled with Filebeat, it becomes an indispensable tool for any DevOps enthusiast looking to manage logs seamlessly.

        The video covers everything from setting up your AWS environment with an Ubuntu instance to installing Docker, Minikube, kubectl, and Helm. It emphasizes the importance of having a solid foundation before diving into the deployment process. This structured approach helps avoid common pitfalls and ensures a smooth installation experience.
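
        For reference, the host preparation condenses into a short series of shell commands. This is a minimal sketch assuming an Ubuntu 24.04 amd64 EC2 instance and the current upstream download URLs; adjust versions and architecture to your environment.

            # Update the package list and install basic tooling
            sudo apt update
            sudo apt install -y curl wget apt-transport-https

            # Install Docker and let the current user run it without sudo
            sudo apt install -y docker.io
            sudo usermod -aG docker $USER
            sudo chmod 666 /var/run/docker.sock   # relax the Docker socket permissions

            # Check virtualization support, then install KVM and related tools
            egrep -c '(vmx|svm)' /proc/cpuinfo
            sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils
            sudo adduser $USER libvirt && newgrp libvirt

            # Install Minikube (amd64 build)
            curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
            sudo install minikube-linux-amd64 /usr/local/bin/minikube

            # Install kubectl
            curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
            sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

            # Install Helm via its official install script
            curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
            chmod 700 get_helm.sh && ./get_helm.sh

            # Start Minikube with the resources used in the video
            minikube start --driver=docker --cpus=4 --memory=8192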

          Once the groundwork is laid, you'll proceed to deploy ELK and Filebeat on your Kubernetes cluster. The setup includes configuring Elasticsearch, Filebeat, Logstash, and Kibana, each playing a key role in log management. With this setup, you'll be able to monitor your Kubernetes applications effectively, troubleshoot issues swiftly, and ensure a robust and operationally efficient environment.
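
          The deployment itself is driven by Helm. A minimal sketch, assuming the chart names from the official Elastic Helm repository and the four values files created in the walkthrough below:

              # Add the Elastic Helm repository and fetch the latest charts
              helm repo add elastic https://helm.elastic.co
              helm repo update

              # Deploy each component with its values file
              helm install elasticsearch elastic/elasticsearch -f elasticsearch-values.yaml
              helm install filebeat elastic/filebeat -f filebeat-values.yaml
              helm install logstash elastic/logstash -f logstash-values.yaml
              helm install kibana elastic/kibana -f kibana-values.yaml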

            Chapters

            • 00:00 - 00:30: Introduction The chapter titled 'Introduction' begins with a warm welcome to the audience, setting the stage for the tutorial. It introduces the topic of setting up the ELK stack along with Filebeat specifically for Kubernetes logging. The ELK stack, consisting of Elasticsearch, Logstash, and Kibana, is highlighted as a popular solution for collecting, analyzing, and visualizing log data. The introduction also mentions Filebeat as a powerful tool that complements the ELK stack.
            • 00:30 - 01:30: Prerequisites and Initial Setup The chapter 'Prerequisites and Initial Setup' focuses on managing logs from a Kubernetes application by setting up the ELK stack and Filebeat on a Kubernetes cluster. It outlines necessary prerequisites, such as having an AWS account with a t3.xlarge Ubuntu EC2 instance up and running.
            • 01:30 - 03:30: Installing Docker and Preparing the Environment The chapter outlines the steps for setting up a Docker environment, starting with updating the package list using the sudo apt update command. It assumes that readers have Minikube, kubectl, and Helm installed, as well as a basic understanding of Kubernetes.
            • 03:30 - 06:30: Setting up Minikube and kubectl This chapter guides the reader through the initial steps of setting up Kubernetes. It begins with updating the package list and then proceeds to install essential tools such as curl, wget, and apt-transport-https. These tools are crucial for downloading and accessing resources securely while setting up Minikube and kubectl.
            • 06:30 - 10:00: Installing and Setting up Helm In this chapter titled 'Installing and Setting up Helm', the focus is on preparing the environment by first installing Docker, which Minikube later uses as its driver. This preliminary step highlights that a working container runtime must be in place before Helm can deploy anything to the cluster.
            • 10:00 - 18:30: Deploying the ELK Stack and Filebeat The chapter 'Deploying the ELK Stack and Filebeat' involves steps for configuring Docker to work seamlessly without requiring administrative rights for each command. It begins by installing Docker and adding the current user to the Docker group, allowing Docker commands to be run without 'sudo'. The chapter proceeds with adjusting permissions for Docker, preparing the environment for subsequent steps in deploying the ELK Stack and integrating Filebeat.
            • 18:30 - 25:00: Accessing the ELK Stack The chapter discusses the preliminary steps to access the ELK (Elasticsearch, Logstash, Kibana) stack, beginning with a system check for virtualization support. If the system supports it, the next step involves installing KVM (Kernel-based Virtual Machine) and its related tools.
            • 25:00 - 27:00: Conclusion The conclusion recaps the full setup: with Filebeat collecting logs, Elasticsearch storing them, and Kibana visualizing them, you have a complete logging solution for monitoring Kubernetes applications, identifying issues, and keeping the cluster running smoothly.

            Kubernetes Logging using ELK Stack and Filebeat | Setup ELK Stack and Filebeat for Kubernetes logs Transcription

            • 00:00 - 00:30 hello everyone and welcome back to our channel. Today we will learn how to set up the ELK stack and Filebeat for Kubernetes logging. The ELK stack is nothing but Elasticsearch, Logstash, and Kibana, which is a popular solution for collecting, analyzing, and visualizing log data, and when combined with Filebeat it becomes a powerful tool for
            • 00:30 - 01:00 managing logs from Kubernetes applications. So today we will see the process of setting up the ELK stack and Filebeat on a Kubernetes cluster using Helm. First, let's see the prerequisites: you should have an AWS account with an Ubuntu 24.04 EC2 instance up and running, and for this particular practical we will use the t3.xlarge instance
            • 01:00 - 01:30 type. Then you should have Minikube, kubectl, and Helm installed on your system, and also basic knowledge of Kubernetes. So our first step is to set up our Ubuntu EC2 instance. First, let's update the package list using the sudo apt update command.
            • 01:30 - 02:00 now our package list is updated, let's install the essential tools like curl, wget, and apt-transport-https. So let's install them.
            • 02:00 - 02:30 next we will install Docker.
            • 02:30 - 03:00 now our Docker is also installed, so next we will add the current user to the docker group, which will allow the user to run Docker commands without sudo, then adjust the permissions for the Docker
            • 03:00 - 03:30 socket. Check whether the system supports virtualization or not, then we will install KVM and the related tools.
            • 04:00 - 04:30 now KVM and the related tools are
            • 04:30 - 05:00 also installed, so next we will add the user to the virtualization groups. After this, let's reload the groups.
            • 05:00 - 05:30 now the next step is to install Minikube and kubectl. First, let's download the latest Minikube binary, and after downloading it
            • 05:30 - 06:00 let's install it into the /usr/local/bin directory; this will make it available system-wide. Now, to verify the installation, let's check the Minikube version. So here you can see our Mini-
            • 06:00 - 06:30 kube is installed and you can see its version, which is v1.34.0. Next we will download the latest version of kubectl,
            • 06:30 - 07:00 then make the kubectl binary executable, move it to the /usr/local/bin directory, and verify the
            • 07:00 - 07:30 installation by running the kubectl version command. So let's verify. So our kubectl is also installed now. Now the next step is to start Minikube. Here we will start Minikube with four CPUs and 8,192 MB of memory allocated to it, so let's start it.
            • 07:30 - 08:00 here we are using Docker as the driver. It will take a minute, so just wait a moment.
            • 08:00 - 08:30 now our Minikube is also
            • 08:30 - 09:00 started, so let's check whether it's running or not by checking the Minikube status.
            • 09:00 - 09:30 so here, as you can see, our Minikube is running successfully. Now our next step is installing Helm. Helm is a package manager for Kubernetes, so we also need it. So let's install it: first we will download the Helm install script,
            • 09:30 - 10:00 after this we will change its permissions, then install Helm. Our Helm is also installed
            • 10:00 - 10:30 now, so let's verify it by checking the version. So Helm is installed. Next we will add the Elastic Helm repository.
            • 10:30 - 11:00 so the repository is added, then we will update the repository; this will fetch the latest charts. So now it's updated. Our next step is to deploy the ELK stack and Filebeat.
            • 11:00 - 11:30 so first, create an elasticsearch-values.yaml file; this file configures the Elasticsearch resources and affinity. So let's copy this command and paste it; we are using the nano text editor here. Now let's add this code into our
            • 11:30 - 12:00 file. So here you can see it's defining the resource requests and limits for the Elasticsearch deployment. It's requesting 200 millicores of CPU and 200 MiB of memory,
            • 12:00 - 12:30 and the limit is set to a maximum of one core of CPU and a maximum of 2 GiB of memory. Then the anti-affinity is set to soft, which will allow the Elasticsearch
            • 12:30 - 13:00 pods to run on the same node if necessary. So let's save the file (reference sketches of this and the other values files appear after the transcript)
            • 13:00 - 13:30 and run Elasticsearch: we will deploy Elasticsearch using the elasticsearch-values.yaml file. As you can see, our Elasticsearch is now deployed. Next we will create a filebeat-values.yaml file;
            • 13:30 - 14:00 this file configures Filebeat to collect Kubernetes logs. So let's create the file and add this code into it. So here the filebeat.inputs section specifies the container logs to
            • 14:00 - 14:30 monitor; we have also given the path to the logs here. And in processors, add_kubernetes_metadata enriches the logs with Kubernetes pod details like the host node name, and output.logstash forwards the logs to Logstash. So let's save this file.
            • 14:30 - 15:00 now let's deploy Filebeat. So Filebeat is also deployed successfully. Next we will create a logstash-values.yaml
            • 15:00 - 15:30 file. Copy this code and paste it into our file. So here you can see the environment variables.
            • 15:30 - 16:00 This section references the Elasticsearch credentials stored in the Kubernetes secret named elasticsearch-master-credentials, as you can see here, and then maps the secret keys username and password (the Elasticsearch username and its password)
            • 16:00 - 16:30 to environment variables, for Logstash to access Elasticsearch securely. Then we have written the Logstash configuration: http.host 0.0.0.0 allows Logstash to listen for incoming requests on all network interfaces, and xpack.monitoring.enabled is set to false; this disables the X-Pack monitoring, but you should consider enabling it if you want
            • 16:30 - 17:00 monitoring data sent to Elasticsearch or Kibana. Then there is the Logstash pipeline. Okay, here the input configures Logstash to listen for events from Beats on port 5044, and the output sends the processed events to Elastic-
            • 17:00 - 17:30 search: it connects to the elasticsearch-master service on port 9200 over HTTPS. The ca.crt, which is the CA certificate, is the mounted certificate it uses for secure communication, and it uses the credentials provided by the
            • 17:30 - 18:00 environment variables ELASTICSEARCH_USERNAME and ELASTICSEARCH_PASSWORD. And there is the secret mount for the TLS certificate: it mounts the secret elasticsearch-master-certs into the Logstash container, and the certificates are accessible at this path, here you can see
            • 18:00 - 18:30 it. Then there is the service configuration written here: it exposes Logstash via a Kubernetes ClusterIP service. Then, in the inputs, the Beats input listens on port 5044 for events from Beats
            • 18:30 - 19:00 agents, and the HTTP interface listens on port 8080. Then there are also the resource requests and limits: it reserves 200 millicores of CPU and 200 MiB of memory, and the limit caps at one core of CPU and 1.5 GiB of
            • 19:00 - 19:30 memory. So let's save this file. Now let's deploy Logstash using the logstash-values.yaml file. So Logstash is also
            • 19:30 - 20:00 deployed. Next we will create a kibana-values.yaml file, the configuration for Kibana. So here you can see the service type is NodePort and the port is 5601. Here also, a minimum of 200 millicores of CPU is
            • 20:00 - 20:30 requested in the requests, along with 200 MiB of memory, and in the limits a maximum of one core of CPU is allowed and a maximum of 2 GiB of memory is allowed. So let's save the file. Now let's deploy Kibana as well.
            • 21:00 - 21:30 our Kibana is also deployed. Now our next step is accessing the ELK
            • 21:30 - 22:00 stack. So first, let's list the services to get the Kibana NodePort details; for that we will use the kubectl get services command. Okay, so this is what we need: our kibana-kibana service, type NodePort, and you can see the port 5601
            • 22:00 - 22:30 here. So let's forward the port 5601; the port-forwarding command is given here (it is also sketched after the transcript). Now let's access it: open your web browser. So first we will copy the public IP address of our
            • 22:30 - 23:00 EC2 instance. So let's copy it and enter the URL http://<your-public-IP>:5601
            • 23:00 - 23:30 here you can see the user interface. Now, next, let's try to retrieve the Elasticsearch credentials so that we can log in to it. So first, open a duplicate tab.
            • 23:30 - 24:00 so first we will retrieve the username: let's copy this command and paste it. Our username is elastic, as you can see. Then let's retrieve the password. Okay, so this is our password. Now let's
            • 24:00 - 24:30 log in with the password, which you can see here. Now let's log
            • 24:30 - 25:00 in. Now you can see 'Welcome to Elastic', and you can start exploring. Click on 'Explore on my own' here. So this is the homepage. So first, scroll down; here, in the
            • 25:00 - 25:30 Observability section, click on Logs. So here you can see the log data; you can even change the date range to the last one day, or you can also stream it live if you want.
            • 25:30 - 26:00 so these are the logs which are generated. So today we have seen how to deploy the ELK stack and Filebeat for Kubernetes logging. They provide robust logging and visualization, which helps in monitoring and troubleshooting the Kubernetes
            • 26:00 - 26:30 cluster, and by following these steps which we have seen today, you can successfully set up the ELK stack and Filebeat for Kubernetes logging. With Elasticsearch, which stores the logs, Kibana for visualizing the data, and Filebeat, which collects the logs, you will have a complete logging solution, and this setup will help you to monitor your
            • 26:30 - 27:00 Kubernetes applications and identify issues, which ensures the smooth operation of your cluster. So that's all for today, guys. Thank you!
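
            For reference, here is a minimal sketch of the elasticsearch-values.yaml file described above, assuming the top-level antiAffinity and resources keys of the elastic/elasticsearch Helm chart; the numbers follow the narration (requests of 200m CPU and 200Mi of memory, limits of one CPU and 2Gi).

                cat > elasticsearch-values.yaml <<'EOF'
                # Soft anti-affinity lets multiple Elasticsearch pods share a node if needed
                antiAffinity: "soft"

                # Resource requests and limits for the Elasticsearch pods
                resources:
                  requests:
                    cpu: "200m"
                    memory: "200Mi"
                  limits:
                    cpu: "1"
                    memory: "2Gi"
                EOF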
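
            Next, a sketch of the filebeat-values.yaml file. Recent elastic/filebeat charts nest the config under daemonset, and the Logstash host assumes a Helm release named logstash (the elastic/logstash chart then exposes a service called logstash-logstash), so adjust the name to your release.

                cat > filebeat-values.yaml <<'EOF'
                daemonset:
                  filebeatConfig:
                    filebeat.yml: |
                      # Tail the container log files on each node
                      filebeat.inputs:
                        - type: container
                          paths:
                            - /var/log/containers/*.log
                      # Enrich each event with Kubernetes pod and node metadata
                      processors:
                        - add_kubernetes_metadata:
                            host: ${NODE_NAME}
                            matchers:
                              - logs_path:
                                  logs_path: "/var/log/containers/"
                      # Forward the enriched logs to Logstash (service name is an assumption)
                      output.logstash:
                        hosts: ["logstash-logstash:5044"]
                EOF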
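
            A sketch of the logstash-values.yaml file matching the narration. The secret names elasticsearch-master-credentials and elasticsearch-master-certs and the certificate mount path are the defaults created by recent elastic/elasticsearch charts; verify them with kubectl get secrets before deploying.

                cat > logstash-values.yaml <<'EOF'
                # Map the chart-generated credentials secret into environment variables
                extraEnvs:
                  - name: ELASTICSEARCH_USERNAME
                    valueFrom:
                      secretKeyRef:
                        name: elasticsearch-master-credentials
                        key: username
                  - name: ELASTICSEARCH_PASSWORD
                    valueFrom:
                      secretKeyRef:
                        name: elasticsearch-master-credentials
                        key: password

                # Listen on all interfaces; X-Pack monitoring stays off
                logstashConfig:
                  logstash.yml: |
                    http.host: 0.0.0.0
                    xpack.monitoring.enabled: false

                # Pipeline: receive Beats events on 5044, ship to Elasticsearch over HTTPS
                logstashPipeline:
                  logstash.conf: |
                    input {
                      beats {
                        port => 5044
                      }
                    }
                    output {
                      elasticsearch {
                        hosts => ["https://elasticsearch-master:9200"]
                        cacert => "/usr/share/logstash/config/certs/ca.crt"
                        user => "${ELASTICSEARCH_USERNAME}"
                        password => "${ELASTICSEARCH_PASSWORD}"
                      }
                    }

                # Mount the CA certificate created by the Elasticsearch chart
                secretMounts:
                  - name: elasticsearch-master-certs
                    secretName: elasticsearch-master-certs
                    path: /usr/share/logstash/config/certs

                # ClusterIP service: Beats input on 5044, HTTP interface on 8080
                service:
                  type: ClusterIP
                  ports:
                    - name: beats
                      port: 5044
                      protocol: TCP
                      targetPort: 5044
                    - name: http
                      port: 8080
                      protocol: TCP
                      targetPort: 8080

                resources:
                  requests:
                    cpu: "200m"
                    memory: "200Mi"
                  limits:
                    cpu: "1"
                    memory: "1536Mi"
                EOF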
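
            Finally, a sketch of the kibana-values.yaml file plus the access and credential commands from the closing section. The service name kibana-kibana again assumes a Helm release named kibana, and --address 0.0.0.0 makes the forwarded port reachable via the EC2 instance's public IP.

                cat > kibana-values.yaml <<'EOF'
                # NodePort service so Kibana is reachable from outside the cluster
                service:
                  type: NodePort
                  port: 5601

                resources:
                  requests:
                    cpu: "200m"
                    memory: "200Mi"
                  limits:
                    cpu: "1"
                    memory: "2Gi"
                EOF

                # List the services, then forward Kibana's port 5601 on all interfaces
                kubectl get services
                kubectl port-forward svc/kibana-kibana 5601:5601 --address 0.0.0.0

                # Retrieve the Elasticsearch login credentials (username is "elastic")
                kubectl get secret elasticsearch-master-credentials \
                  -o jsonpath='{.data.username}' | base64 -d; echo
                kubectl get secret elasticsearch-master-credentials \
                  -o jsonpath='{.data.password}' | base64 -d; echo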