Kubernetes End to End project on EKS | EKS Install and app deploy with Ingress | #abhishekveeramalla
Estimated read time: 1:20
Summary
In this video, Abhishek Veeramalla guides us through an end-to-end Kubernetes project using Amazon's Elastic Kubernetes Service (EKS). Emphasizing the importance of Kubernetes skills for DevOps roles, he provides a detailed, step-by-step tutorial on deploying applications on EKS with Load Balancers and ingress resources. The video also covers setting up a Virtual Private Cloud (VPC) with public and private subnets, showcasing how to seamlessly manage Kubernetes clusters with AWS's managed service offerings. Abhishek offers a comprehensive and practical learning experience, reinforcing key concepts with real-world examples.
Highlights
The significance of Kubernetes skills for DevOps roles emphasized.
Comprehensive tutorial on deploying applications on AWS's EKS.
Detailed setup of VPC with public and private subnets.
Explanation of using load balancers and ingress resources on EKS.
Practical insights into managing clusters using AWS's managed services.
Key Takeaways
Understanding EKS as a managed Kubernetes service by AWS simplifies control plane management.
Deploying applications on EKS involves using services and ingress for external access.
Managed worker nodes can be run on EC2 or AWS Fargate for flexibility.
Setting up IAM roles and policies is crucial for accessing AWS services within Kubernetes pods.
Overview
Abhishek Veeramalla's latest video in the AWS DevOps Zero to Hero series delves into the intricacies of using Elastic Kubernetes Service (EKS) effectively. Starting with an understanding of why Kubernetes is essential in today's DevOps job market, Abhishek transitions into a hands-on demonstration, highlighting the practical steps needed to deploy a real-time application using Kubernetes on EKS.
EKS simplifies the complex setup of Kubernetes by managing the control plane components, leaving users to either utilize AWS Fargate or EC2 instances for their data plane. Abhishek meticulously explains the configuration process needed for deploying applications that are accessible externally, using services and ingress resources, and demonstrates the use of load balancers for traffic routing.
A particularly engaging part of the tutorial covers the utilization of AWS's IAM for integrating Kubernetes with AWS services, ensuring secure and smooth operations. By the end of the video, viewers gain a robust understanding of deploying and managing Kubernetes clusters in the AWS ecosystem, paving the way to boost their DevOps skills and enhance their resumes.
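The IAM integration mentioned above is typically wired up with eksctl. A hedged sketch of the usual two steps — the cluster name, region, account ID, and policy name below are placeholders, not details from the video:

```shell
# Hedged sketch: letting pods on an EKS cluster call AWS services via IAM.
# "demo-cluster", "us-east-1", and the policy ARN are placeholder values.

# Step 1: associate an IAM OIDC provider with the cluster (once per cluster)
eksctl utils associate-iam-oidc-provider \
  --cluster demo-cluster --region us-east-1 --approve

# Step 2: create a Kubernetes service account backed by an IAM role,
# so pods using that service account get the permissions of the attached policy
eksctl create iamserviceaccount \
  --cluster demo-cluster --region us-east-1 \
  --namespace kube-system --name aws-load-balancer-controller \
  --attach-policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve
```

The same pattern applies to any pod that needs AWS API access, not just the load balancer controller.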
Chapters
00:00 - 03:00: Introduction and Importance of Kubernetes The chapter titled 'Introduction and Importance of Kubernetes' introduces the viewer to the Elastic Kubernetes Service (EKS). The video, part of the AWS DevOps Zero to Hero series hosted by Abhishek, emphasizes the significance of understanding Kubernetes. It highlights that Kubernetes proficiency is a crucial skill for DevOps engineers, particularly those focusing on AWS, making this video essential for anyone aspiring to excel in the DevOps field.
03:00 - 13:00: Why Choose EKS Over Self-Managed Kubernetes? The chapter emphasizes the importance of learning Amazon EKS, a managed Kubernetes service, over self-managed Kubernetes. It promises to teach viewers not only the concepts of EKS but also practical skills, such as deploying a real-time application on the AWS EKS platform. By the end of the chapter, the application will be successfully deployed and accessible through a public IP or load balancer.
13:00 - 20:00: Introducing EKS and its Managed Services In this chapter, we delve into the deployment of an application on the Kubernetes platform using Amazon EKS (Elastic Kubernetes Service) and its managed services. The process involves creating a Kubernetes service, setting up an Ingress Controller for managing external access, and configuring a load balancer for traffic distribution. Additionally, the setup includes building a Virtual Private Cloud (VPC) with both public and private subnets, ensuring that the application is securely installed within a private subnet. The chapter promises a practical, step-by-step demonstration that enhances one's project work and strengthens resume credentials.
20:00 - 25:00: Comparison of Kubernetes Installation Methods This chapter covers the theory part about Kubernetes installation methods, focusing on Amazon's EKS (Elastic Kubernetes Service). It includes the discussion on why one might choose EKS over other Kubernetes distributions or on-premises solutions. The chapter also aims to address some important interview questions regarding the circumstances and benefits of using EKS.
25:00 - 35:00: Ingress and ALB Controllers Explained The chapter introduces the playlist where Kubernetes concepts are explained comprehensively, spanning about 15 hours. It assumes viewers have a basic understanding of Kubernetes and provides an overview of EKS, which is a managed Kubernetes service. Links to additional resources are mentioned for those who need more foundational knowledge.
35:00 - 45:00: Practical Setup: Tools and Installations Needed This chapter discusses the essential setup for working with Kubernetes clusters. It highlights the two primary components of a Kubernetes cluster: the control plane and the data plane. These are also referred to as Master nodes and worker nodes. The chapter acknowledges the various configurations possible, such as having a single Master and single worker node setup, or a configuration with three Master nodes and three worker nodes.
45:00 - 56:00: Creating and Configuring Your EKS Cluster The chapter discusses setting up a highly available Kubernetes cluster with EKS on AWS. It highlights the importance of having at least a three-node Kubernetes Master architecture for high availability. It further suggests steps for those not using EKS, providing guidance on signing into AWS and initiating the setup manually. The focus is on leveraging AWS capabilities for optimal cluster configuration.
56:00 - 72:00: Deploying Applications on EKS with Fargate The chapter titled 'Deploying Applications on EKS with Fargate' discusses the infrastructure setup required to deploy applications using Amazon Elastic Kubernetes Service (EKS) with AWS Fargate. It details the architectural setup of having three master nodes and three worker nodes, which requires a total of six EC2 instances. The focus is on configuring the master nodes, which are critical for managing the Kubernetes cluster. Important components installed on the master nodes include the API server, etcd, scheduler, controller manager, and cloud controller manager. These components are essential for the effective functioning of the Kubernetes control plane. Additional discussions are expected to cover how these components interact with each other and contribute to deploying scalable and efficient applications using EKS and Fargate.
72:00 - 85:00: Ingress Resource and Controller Setup This chapter discusses the components of the master node within a Kubernetes setup. These components, also referred to as control plane components, are responsible for managing the entire Kubernetes system. They act as the user interface, meaning any user wanting to access applications deployed on the worker nodes must first interact with the control plane. Subsequently, requests are forwarded from the control plane to the data plane, where applications are actually deployed.
85:00 - 95:00: Troubleshooting Ingress Controller Issues In the chapter titled 'Troubleshooting Ingress Controller Issues,' the transcript delves into the responsibilities involved in setting up the architecture and systems, particularly when handling this task independently. It outlines the step-by-step actions needed to install various components such as master nodes, worker nodes, container network interfaces (CNI), container runtime, DNS services, and kube-proxy. The process demonstrates how these installations are integral for successfully joining the cluster.
95:00 - 118:00: Final Steps and Wrap-Up The chapter discusses the process of joining worker nodes to the control plane in a Kubernetes setup. It mentions the tedious and error-prone nature of this task when done manually. The chapter then introduces modern tools like kops as alternatives, which can simplify the process significantly by allowing the use of AWS credentials and VPC settings.
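The cluster-creation flow that the chapters above walk through can be sketched in two eksctl/aws commands. The cluster name and region are placeholders, not values from the video:

```shell
# Hedged sketch of creating a Fargate-backed EKS cluster: AWS manages the
# control plane, Fargate runs the pods, so there are no EC2 worker nodes
# to patch or scale. "demo-cluster" and "us-east-1" are placeholder values.
eksctl create cluster --name demo-cluster --region us-east-1 --fargate

# Point kubectl's kubeconfig at the new cluster
aws eks update-kubeconfig --name demo-cluster --region us-east-1
kubectl get nodes
```

eksctl creates the VPC with public and private subnets for you unless you pass an existing one, which is why the video leans on it for the demo setup.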
Kubernetes End to End project on EKS | EKS Install and app deploy with Ingress | #abhishekveeramalla Transcription
00:00 - 00:30 hello everyone my name is Abhishek and welcome back to my Channel today is day 22 of AWS devops Zero to Hero series and in this video we will Deep dive into the concept of eks which is elastic kubernetes service this video is a really really important one because kubernetes is the most required skill in a devops engineer's job role these days and if you are an aspiring devops engineer or an aspiring AWS devops
00:30 - 01:00 engineer then specifically eks is the one that you need to learn for sure and in this video I make sure that you will not just learn the concept of eks but by the end of this video you will also learn how to deploy a practical real-time application on the AWS eks platform and if you see here this application is deployed on a public facing IP or it is accessed through the load balancer so we will deploy this
01:00 - 01:30 application onto the kubernetes platform we will create a service we'll create an Ingress and an Ingress controller and we will access this through a load balancer and of course we will create a VPC with public private subnet and install this inside that private subnet so it's going to be a really cool demo and this can be one single demo that can convert your project or that can convert your resume into a very impressive one so please watch it till the end because we are going to do step by step with a lot of
01:30 - 02:00 configurations so before I go through the demo as usual I'll explain the theory part what is eks and why eks and I'm going to cover some important interview questions such as why you need to go with eks when you need to go with eks compared to the other kubernetes distributions or an on-premise kubernetes solution so let's quickly start with it and understand eks so I will not take much time here because there is already a kubernetes
02:00 - 02:30 playlist on my channel where I've explained all the concepts of kubernetes and it's almost like a 15 hour playlist where you can follow that playlist and learn kubernetes end to end so I'll put the link in the description so that you can go back and watch kubernetes videos if you are not aware of kubernetes but assuming that people who are watching this video they have at least fundamental understanding on kubernetes so eks is basically a managed kubernetes offering like I told you now what does that mean so if you have a kubernetes cluster like
02:30 - 03:00 if you have created a kubernetes cluster in the past you will know that typically in kubernetes cluster there are two components right one is the control plane component and then there is another another thing called as data plane or people can also call it as Master nodes and worker nodes and there can be single Master single worker there can be three Masters three
03:00 - 03:30 workers or depending upon your organizational requirements there is no limit but the ideal thing is if you want to set up a high available kubernetes cluster then go with the three node kubernetes Master architecture right so now let's say you don't use eks and let's say you want to do it all alone but you are already on AWS then typically what you will do is uh you would sign into your AWS platform and uh let's say you want to create
03:30 - 04:00 three master nodes three worker nodes architecture you will spin up six ec2 instances three for master nodes three for worker nodes and what you will do is initially you will start with installing the configuration in these master nodes like a master node typically has some important things like uh a master node has an API server in the master node you have etcd you have a scheduler you have a cloud controller manager there is controller manager like we discussed in
04:00 - 04:30 the previous videos so these are the master node components and each of them are very much responsible for managing the entire kubernetes right so these are the control plane components that means they are the user facing or you know a user typically if you want to access any application deployed on the worker nodes they have to initially talk to the control plane and from the control plane the request goes to the data plane right or where your applications are actually deployed so now it is your
04:30 - 05:00 responsibility to create this entire architecture and to create this entire thing let's say you are doing it by yourself first you have to install all of these things if you are using kubeadm let's say you create all of these things on the master nodes then you go to the worker nodes you install the container network interface the CNI plugins then you install a container runtime then you install a DNS service you install kube-proxy once you install all of these things then your responsibility is to join
05:00 - 05:30 these worker nodes let's say these are the three worker nodes you have to join the worker nodes to the control plane so this entire thing has to be done by you this is a very tedious process and this is error prone as well now you might be saying that Abhishek but why will I use kubeadm these days there are so many modern day tools like kops okay I agree using kops you can reduce this effort significantly you can just provide your AWS credentials you can provide your VPC whatever you would
05:30 - 06:00 like to and kops can create this entire thing for you now let's assume kops has created a three master node three worker node architecture for you and you shared it with your devops team sorry development team and they started deploying the applications okay after a while what happened is one of the master nodes went down certificate expired
06:00 - 06:30 API server is down etcd crashed scheduler is not working now these are some of the problems that I remember at this point of time and even if you install the kubernetes cluster using kops kops can create the kubernetes cluster but you are responsible for handling all of these issues right and these are not small issues to debug what happened to etcd to
06:30 - 07:00 debug why your API server is slow why your API server is not responsive or why you know the scheduler is not responding certificates have expired you have to renew the certificates reattach the regenerated certificates so it will make a devops engineer's life very very hectic right and this is just about one cluster now what if you have hundreds of clusters what if you have thousands of kubernetes clusters as a devops engineer you have to take care of
07:00 - 07:30 all of these things as a cloud engineer you have to manage all of these issues and finally you know you might set up some monitoring you might set up some different configuration rules and each and every day your effort will go on solving these issues and as I told you right from day one whenever we learned AWS Whenever there is an opportunity right Whenever there is manual activity Whenever there is lot of Maintenance activity AWS will come into picture and
07:30 - 08:00 AWS will say that okay don't worry we will create a managed service for you and similarly AWS said that we will create eks for you and now this eks is a managed control plane understand this carefully eks is a managed control plane but it is not managed data plane that means what AWS is saying that okay go to my eks service
08:00 - 08:30 portal on the AWS platform and request for a kubernetes cluster so what eks is going to do is eks is going to give a highly available kubernetes cluster with respect to the control plane components that means everything related to control plane is taken care of by the AWS platform right so let's say your API server is going down etcd has crashed you can just ask AWS and AWS will always make sure
08:30 - 09:00 that those kind of issues are not happening now saying that it manages the control plane but it also allows you or it also provides you a very easy way of attaching or integrating your worker nodes with the control plane right of course it's a managed control plane but it will also help you with respect to worker nodes let's say with respect to eks you can request for a kubernetes cluster AWS will install the entire control plane for you it will create the master nodes you will not know where these master
09:00 - 09:30 nodes are right or you know you don't have to bother about these master nodes and when you attach the worker nodes AWS eks will give you two options one is you can use your ec2 instances so you can create the ec2 instances by yourself or AWS is saying that you can go with options like fargate what is fargate fargate is an AWS serverless
09:30 - 10:00 compute just like Lambda functions but Lambda functions are defined for a small amount of workloads right so if you want to perform any quick actions then you can use Lambda functions but fargate is defined for running containers so it's an AWS serverless compute that allows you to run containers so if you are using fargate in combination with eks you don't have to worry about anything like eks takes
10:00 - 10:30 care of control plane fargate takes the place of your worker nodes right anything like your worker node going down or the worker node's resources getting eaten up you don't have to bother about anything you can deploy any number of applications and fargate is a service that is highly available eks control plane is highly available that way you can build up a robust and highly stable right you can build up a robust and
10:30 - 11:00 highly stable kubernetes cluster using eks now I'm not saying you should not be using uh eks ec2 instances for worker nodes you can use ec2 instances for worker nodes but then you have to take care of the high availability of it that means you have to install or you have to configure that with the auto scaler right you have to install so you have to configure these ec2 instances with auto scaling so that you know these instances never goes down you
11:00 - 11:30 have to create some thresholds you have to create some monitoring all of that thing will be taken care by you if you are using fargate then you don't have to worry about it but talking about eks again eks is a fully managed control plane of AWS for kubernetes and using this your work will be significantly less you don't have to worry about all the issues that I explained like you don't have to worry about certification sorry certificates being expired you don't have to worry about API server slowness you don't have to worry about uh for example etcd getting crashed if
11:30 - 12:00 any of these things is happening you have AWS SLA with you and you can talk to AWS I mean typically that does not happen if once in a while even if it is happening then you can tell AWS that okay the SLA is breached now what should we do should we go with any compensation or whatever you have the agreement with AWS so that is the reason why why people are moving towards eks right typically if you are on AWS platform there are three ways of installing kubernetes one
12:00 - 12:30 is you can install sorry there are two ways of installing kubernetes on AWS platform one is you can create virtual machines by yourself and you can use tools like kops or kubeadm and you can set up kubernetes cluster without eks second thing is you can go with eks so now this eks adoption has increased significantly because people are finding managing kubernetes difficult but saying that I will also cover a lot
12:30 - 13:00 of drawbacks about eks don't worry about it by the end of this video you will also learn about the drawbacks I'm not just going to cover the pros but there are a lot of cons as well and the other way is let's say you are on on premises then on on-premises you can just uh install kubernetes cluster on your data center right on your data center servers so people are moving from on-premises to cloud and from cloud instead of doing everything by yourself you are moving towards the managed Services of course
13:00 - 13:30 there are a lot of customers on the on-premises people are also moving back which we discussed like Cloud repatriation but let's not discuss because it's a very very less number for Now understand that from on-premises previously everything used to be here then people move to the ec2 instances deployed everything by themselves now people are moving towards the managed Services which is eks in our kubernetes topic today now this is about eks and these are the advantages of using eks now let's talk
13:30 - 14:00 about what are we going to learn today are we just going to learn how to create this eks cluster definitely not because that is a very simple thing that even you can learn by yourself but today's video what I'm going to show is create an eks cluster additionally with creating the eks cluster what I'm going to do is let's say this is your kubernetes cluster okay I am going to show you how to deploy an application onto this kubernetes cluster
14:00 - 14:30 that is the eks platform and how to allow your customer or your user let's say we are going to deploy this 2048 application that we have seen right so we are going to deploy this one and people are going to use that 2048 application so this is my kubernetes cluster let's say uh there are two worker nodes here and there are three Master nodes M1 M2 M3 so here let's say my application is
14:30 - 15:00 deployed in the pod so I have created a pod.yaml and I've deployed the pod.yaml using kubectl so what happened is my application is deployed in this pod when the application is deployed as a pod using deployment or something what would happen is typically this will just have the cluster IP that means this pod can be accessed anywhere from the cluster it can be accessed from the master node this one this one this one or even this worker
15:00 - 15:30 node as well let's call it worker one let's call it worker two so it can be accessed from here as well but end user cannot access it and this is not what we want right because my application needs to be accessed from external world so what I'm going to do here is for this firstly we will create a service now again for the service I'll create a service.yaml and with kubectl I'm going to deploy the service now service is going to say that I am going to give you three options
15:30 - 16:00 one is you can use the cluster IP model that you are using already or I can allow you to expose this to the node level using the node Port mode or you can expose it to public using the load balancer level I mean the load balancer mode so service has three modes now cluster IP mode like I told you it will be only accessible within the cluster anywhere from this master nodes or worker nodes node Port what will happen is if you convert your
16:00 - 16:30 service to node Port mode then this pod or this application can be accessed from any of these IP addresses let's say all of these are ec2 instances so people who have access to these ec2 instances IP addresses they can access the server they can access the application but usually what happens is this entire thing like this kubernetes cluster usually will be within a VPC right and this VPC will have a public
16:30 - 17:00 subnet and it will have a private subnet and like we discussed in the previous videos applications are always deployed in the private subnet so again the problem is that if you are exposing this in the node Port mode people can access it but only the people who have access to this private subnet can access it that means anyone let's say there is another ec2 instance here or let's say there is another kubernetes cluster here they can access it but people outside cannot access it
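The service modes being described can be written as a manifest. A minimal sketch — the name, labels, and ports are illustrative, not taken from the video's repo:

```yaml
# Minimal sketch of a Service for the 2048 app. Change "type" to
# ClusterIP, NodePort, or LoadBalancer to switch between the three
# modes discussed above; NodePort exposes it on each node's IP only.
apiVersion: v1
kind: Service
metadata:
  name: service-2048
spec:
  type: NodePort
  selector:
    app: game-2048     # must match the pod labels of the deployment
  ports:
    - port: 80         # port the service listens on inside the cluster
      targetPort: 80   # container port the pod serves on
```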
17:00 - 17:30 for people outside to access it you need a load balancer right and load balancer should have access to it or you need a public IP address for it like let's say you need an elastic IP address so that is what the load balancer mode provides load balancer mode creates an elastic IP address using which users can access it but the problem is that load balancer mode is very very costly if you have 10 000 applications and if you create load balancer mode for all the
17:30 - 18:00 pods using Services then you will see significantly a high pricing on your AWS environment so the best approach is to go with the Ingress resource the reason why I'm explaining this again I have covered Ingress a lot but for some people Ingress is confusing so that's why I'm explaining Ingress again now let's draw this diagram again now what we are going to do is instead of using this we are going to use Ingress so for purpose of Simplicity let me just draw that node
18:00 - 18:30 let me say that this is the node okay so let me say that this is the worker node and this is the master node for example and this master node let's say has API server and this is the user okay so now what we are going to do here
18:30 - 19:00 is using Ingress like typically there will be a pod here right so the Pod is always there and you have created service also this service will either restrict to Cluster IP mode or we will restrict it to node Port mode you can put any of these things when you are using Ingress Ingress can support both of these things so what you are going to do is additionally for this the devops engineer is going to create a Ingress resource now what this Ingress
19:00 - 19:30 will do Ingress will allow this customer to access the application that is inside the eks cluster or Ingress will basically route the traffic inside the cluster so what devops engineer does is devops engineer will write this ingress.yaml file where in the ingress.yaml file you will write I allow this user to access the application on example.com/abc if someone is accessing example.com
19:30 - 20:00 /abc let's say this user is accessing example.com/abc forward the request to the service and from the service the request will go to the pod so this configuration you will write in the Ingress resource and you will deploy this Ingress resource using kubectl but just by deploying the Ingress resource nothing will happen right because you have deployed the Ingress resource there has to be someone who has to help you in taking this request from outside to here right typically this user cannot
20:00 - 20:30 access anything inside right because this everything is in the private subnet right this user can access things in the public subnet and in the public subnet if you face a load balancer like let's say you are placing a load balancer in the public subnet then this user can request the public subnet can request the load balancer and from the load balancer request can come inside so there is a concept in kubernetes called as Ingress
20:30 - 21:00 controller and typically all the load balancers support Ingress controllers they have their own Ingress controllers let's say we are talking about nginx nginx has their own Ingress controller we are talking about F5 F5 has their own Ingress controller and all of these things are available as Helm charts or plain yaml manifests you can download the helm chart and you can deploy this on your kubernetes cluster let's say I have deployed the Ingress controller for AWS ALB
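The Helm-based install just mentioned looks roughly like the following for the AWS ALB controller. This is a hedged sketch: it assumes the IAM service account already exists, and the cluster name, region, and VPC ID are placeholders:

```shell
# Hedged sketch: installing the AWS Load Balancer (ALB) controller via Helm.
# Placeholder values: demo-cluster, us-east-1, <VPC_ID>. Assumes an IAM
# service account named aws-load-balancer-controller was created beforehand.
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=demo-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=us-east-1 \
  --set vpcId=<VPC_ID>
```

Once the controller pods are running in kube-system, it watches for Ingress resources and provisions an ALB for them, as described next.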
21:00 - 21:30 so as soon as the devops engineer creates an Ingress resource this Ingress controller will watch for the Ingress resource and it will create the ALB for you this is called as ALB controller this Ingress controller is called as ALB controller so this ALB controller will keep watching for the Ingress resources and whenever the Ingress resource is created the ALB controller will create an ALB environment for you or will create an application load balancer and using this application load balancer user can talk to the application load
21:30 - 22:00 balancer and from the application load balancer the request will go to your application now this is same for every Ingress controller let's say you are using nginx instead of this now what will happen is there will be nginx Ingress controller that you will deploy inside the kubernetes cluster and whenever the nginx Ingress controller finds the Ingress resource this nginx Ingress controller will either create an nginx load balancer or if the load balancer is already there it
22:00 - 22:30 will configure the load balancer for the rules mentioned in the Ingress resource like I told you these rules should be configured in the Ingress resource so ingress.yaml has these things but there has to be a load balancer which has to perform these actions and that load balancer will be your nginx load balancer or application load balancer or F5 load balancer depending upon the Ingress controllers that you are using so in the market there are hundreds of Ingress controllers you can choose according to your requirement
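The ingress.yaml being described can be sketched like this — the host/path routing rule goes in the spec, and the class name picks which controller acts on it. Names and annotations are illustrative:

```yaml
# Sketch of the ingress.yaml discussed above: route example.com/abc to
# the service. ingressClassName "alb" tells the ALB controller to watch
# this resource; the annotations ask for an internet-facing load balancer
# placed in the public subnets.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: example.com
      http:
        paths:
          - path: /abc
            pathType: Prefix
            backend:
              service:
                name: service-2048   # the Service created earlier
                port:
                  number: 80
```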
22:30 - 23:00 right so now what will happen if you create nginx Ingress controller ALB Ingress controller and everything in the same cluster then you can Define in the Ingress itself who should access this Ingress resource there is something called as Ingress class okay let's not deep dive into it but just if someone gets a confusion here I am saying that using Ingress class you can Define who has to watch for this Ingress resource but no I think it is very very clear right devops engineer along with the Pod along
23:00 - 23:30 with the service devops engineer will create Ingress for every resource or every pod that needs access from the external World there will be one Ingress controller that will watch for all the Ingress resources and it will configure the load balancer external person will talk to the load balancer from the load balancer which is in the public subnet request will come to the Pod through service to the Pod right so this is the entire configuration of Ingress controllers I
23:30 - 24:00 am explaining this because today's class we are going to deploy an application that should be accessed from the external world right if you still have any confusion go back to my kubernetes playlist I'll put the kubernetes playlist Link in the description which has everything covered in detail for each of the concept we at least spent 40 minutes so please go there and try to understand it now let me go back to the other screen and start with the practicals right so we will start with the practical in a minute
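Before the practical section, the deployment side of the picture — the pod manifest mentioned earlier — can be sketched as follows. The image reference is a placeholder; substitute whatever 2048 image the video's repository actually uses:

```yaml
# Sketch of a Deployment for the 2048 game discussed above.
# <2048-IMAGE> is a placeholder, not a value from the video.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-2048
spec:
  replicas: 2
  selector:
    matchLabels:
      app: game-2048        # matched by the Service's selector
  template:
    metadata:
      labels:
        app: game-2048
    spec:
      containers:
        - name: game-2048
          image: <2048-IMAGE>
          ports:
            - containerPort: 80
```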
24:00 - 24:30 okay so first of all for today's video because I am going to use a lot of commands and I am going to use terminal a lot I thought I will place each and every command that I'm using in this GitHub repository so for the people who are following our course or our channel for the first time so there is this GitHub repository called AWS devops Zero to Hero and for each day of course if it is not required I haven't created folder but for most of the days we have folder
24:30 - 25:00 wise MD files folder wise commands whenever it is required it is available in this GitHub repository so all that you need to do is go to this repository called AWS devops Zero to Hero link is also available in the description and click on the day 22 folder then you have each and every step that I am going to follow for example during the video let's say you did not follow the command that I used to install eks then you can come here and this is the command you can just
25:00 - 25:30 copy paste it, and similarly, once you install the EKS cluster, then how to add the plugin: each and every command is available here, so please follow this if you fail to understand something in the video. now, I have logged into the AWS console, right, so this is my AWS console, and let me go back to the EKS screen. so just search for EKS, Elastic Kubernetes Service, click on this, and before I start creating EKS let me
25:30 - 26:00 tell you that there are some things that you have to install on your laptop to interact with EKS and to create the EKS cluster; there are a bunch of things that we are going to do today. the very first thing is you need to have kubectl installed, right, kubectl needs to be installed because using kubectl we interact with this Kubernetes cluster that we are going to create. so if you don't have kubectl, don't worry, again that is available in
26:00 - 26:30 the GitHub page: how to install kubectl, how to install eksctl. there is one document called prerequisites.md; here you have kubectl, eksctl, AWS CLI. and let's say that you want to try it now; what you can simply do is search for "kubectl download". okay, so here you have the Kubernetes page
26:30 - 27:00 where you can search for install tools, and here you have the kubectl installation for Linux, the kubectl installation for macOS and Windows, so you can choose accordingly and install kubectl. on my machine I already have kubectl, so I don't need to do anything, I don't have to install it. so after kubectl, the second thing that you are going to install is eksctl, right. so again, for installing eksctl, just search for "eksctl download"
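For reference, the three prerequisites mentioned here (kubectl, eksctl and the AWS CLI) can be installed and verified roughly like this on Linux amd64. This is a sketch following the official Kubernetes and eksctl install docs; check those docs for your own platform and the current release:

```shell
# kubectl: download the latest stable release binary (Linux amd64)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/

# eksctl: download the latest release tarball (Linux amd64)
curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin/

# verify everything is on the PATH
kubectl version --client
eksctl version
aws --version   # the AWS CLI install itself is covered next in the video
```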
27:00 - 27:30 so click on this button, "installing and updating eksctl". here there is an option called installation, right, so click on this button, installation, and I am opening it in a different tab. here you have, for whichever environment you want to install eksctl: if it is for Linux, choose this script; if it is for Windows, choose this script; if you are on Git Bash, let's say there are some people on Windows who are
27:30 - 28:00 using Git Bash, then you can use these commands; Docker, macOS, everything is available, right, so you can choose accordingly. and on my machine, again, eksctl is already installed, so if I just run eksctl, this should be the output. for Mac it is very simple, you can just say brew install, or brew cask install, and your eksctl is installed. but don't worry, you can follow the same steps as I have shown and you can install eksctl. and the final thing that you have to get ready
28:00 - 28:30 is the AWS CLI, right. so in multiple videos I have shown you how to install and how to configure the AWS CLI, but just to repeat one last time: basically you need to search for "AWS CLI", the AWS command line interface, and here go to the documentation, and then you have the user guide. once you go to the user guide, on the left side you have an
28:30 - 29:00 option for installation, right, so here: "get started, install and update". so you can follow this same document and you can proceed with the installation depending upon your flavor. and once you have installed it, the final step is you need to configure it: go to your AWS command line and run the command called "aws configure". so once you install, you can verify the installation using the "aws --version" command, and okay, this is working fine. now what
29:00 - 29:30 you need to do is type a command called "aws configure". now once you run this, it will ask you for the access key ID and the secret access key; the default region is okay and the default output is also okay, but you need to have both of those first two things, right. so how do you get the access key ID and secret access key? you can simply go here, click on the down arrow, go to security credentials, and within the security credentials, if
29:30 - 30:00 you scroll down there is a section where you can find the access keys and access key ID. let's say you don't have this; you will have this button enabled called "create access key". click on that, you will get the access key ID and secret access key, and that way you can configure your AWS CLI as well. so now we have everything ready. for the purpose of this demo I am using my root account, because I have to grant so many
30:00 - 30:30 permissions if I am not using the root account, and because many people are doing it for the first time, I don't want to confuse them by using an IAM user and granting permissions each and every time. so for the purpose of the demo, just use the root account. now let's go back to the EKS screen. so the very first thing that you will do is you will start with creating the EKS cluster, right. so to create the EKS
30:30 - 31:00 cluster, either you can come here and, you know, add a cluster, click on the create button, but you need to provide a lot of parameters, and usually in organizations you don't do this. one of the most preferred ways in organizations is to create these clusters using eksctl, right. what is eksctl? eksctl is a command line utility that is used to manage EKS clusters, and the command
31:00 - 31:30 that you can use here is basically this one. so you can say "eksctl create cluster --name". if you are wondering where I am getting this command from, I'm using the same GitHub document that I shared in the description and showed at the beginning of the demo. so change the cluster name here, after minus minus, or hyphen hyphen, name; let me call this demo-cluster. region: let's use
31:30 - 32:00 us-east-1, let's use that region. and the reason I am choosing Fargate: like I told you, for the EKS cluster you can choose, for the data plane or the worker nodes, either Fargate or EC2 instances. Fargate is better for these purposes unless your organization has a specific requirement, like if your organization requires that your worker nodes have to be
32:00 - 32:30 on a specific distribution, like your worker nodes should be on the RHEL distribution only, or your worker nodes need to have some specification; then go with the EC2 instances. if not, you can go with Fargate by default. there are other things as well; like I told you, I'll explain the differences as we go through the video, but for now I'm using Fargate. now hit enter, and this is a very good utility that will create the
32:30 - 33:00 entire cluster for you. okay, there is some error here; it says the eksctl demo-cluster already exists. okay, that's fine; the reason for that is, when I was performing a demo at my end, I might have created one with the same name. so let me change it and call it demo-cluster-1. okay, hit enter, and now, what happens under the hood, or what exactly is happening, is that this eksctl is creating
33:00 - 33:30 everything for us. now what does everything mean? you can see that by clicking on the add cluster button: if you click on create, you see, first of all it requires some service roles, right, and then, if you click on the next button, it requires some configuration related to networking, like, you know, you need a public subnet, you need a private subnet; whereas this eksctl, what it does is it creates the public and private subnets for us. that means
33:30 - 34:00 within the VPC it creates a public subnet, it creates a private subnet, and in the private subnet we will place our applications, right. so all of these things are taken care of by our eksctl utility; isn't it good? now this will take 15 to 20 minutes; in some cases it might be done in 10 minutes, in some cases it can be 5 minutes, so please be patient here. and you know, in my case, previously when
34:00 - 34:30 I did it, it took some 12 to 13 minutes, and if your network is good it will take less than that; if your network is bad it will take more than that. so please be patient here and wait for the control plane to become ready. okay, so now the cluster is created and it took some 16 minutes for me, so I have paused the recording. and yeah, in your case also, please watch it; if it
34:30 - 35:00 takes more time, then it's still fine; unless it is failing, you have to be patient. or let's say it fails for some reason; there is a good chance that it can fail as well, due to some latency issues, due to some connection issues, so don't worry, you need to patiently wait for it. now the cluster is created; let's go to the console and see if the cluster is reflected here. so search for EKS
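For reference, the cluster-creation step just performed boils down to a single eksctl invocation (name and region as in this demo; your values may differ):

```shell
# Create an EKS cluster whose data plane runs on Fargate (no EC2 worker nodes).
# eksctl provisions the VPC, the public and private subnets, and the control
# plane through CloudFormation; this typically takes 10 to 20 minutes.
eksctl create cluster --name demo-cluster-1 --region us-east-1 --fargate
```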
35:00 - 35:30 perfect. now if we look at the cluster here, go to the cluster section: yes, so this is the cluster, right, demo-cluster-1. so the cluster is created. this is the Kubernetes version; this is the default version that we got with eksctl, but you can also change this version; for now you can keep it. so, demo-cluster-1, and if you scroll down you can see everything in the console. so this is another advantage
35:30 - 36:00 of using EKS: there is a resources tab, and if you click on that resources tab you can check the resources that are available on your cluster. for example, you don't need to go to your command line and type kubectl get pods; you can just see here what pods are available, right. you can change the namespace here; let's say you want to check in the kube-system namespace, so here you can see what pods are running. or let's say you want to search for DaemonSets, or
36:00 - 36:30 you want to search for the service accounts on this cluster; you can search here. so this is another advantage of using EKS. of course, this kind of feature is available in most of the Kubernetes distributions, but with plain Kubernetes, unless you install the dashboard by yourself, you don't get this kind of feature, right. perfect, now let's take a look at the overview of the cluster; I just want to show you a few things here before we jump on to the next things, so
36:30 - 37:00 the overview is that the Kubernetes cluster is created; this is my API server endpoint, and, you know, this is my OpenID Connect provider URL. now what does this mean, what is an OpenID Connect provider? basically, with this Kubernetes cluster that we have created, you can integrate any identity provider. for example, in your organization you have an identity provider, like let's say you
37:00 - 37:30 have Okta, you have Keycloak or any other identity provider. let's say you are not aware of identity providers: an identity provider is something like, for example, LDAP, where you have created all the users for your organization. you can create all the users in an identity provider and you can attach that identity provider to multiple other things. for example, these days when you go to any applications you will see "log in with Facebook", "log in with Google", right? so what is happening
37:30 - 38:00 there: there is an identity broker where you can attach an identity provider. so Facebook, Google, they all serve as social identity providers, where you can attach that identity provider to the identity broker. in this case, what AWS does is AWS allows you to attach any identity provider, so that you can manage this EKS cluster with it. one option is you can use IAM, of course; if you don't want to use IAM, or if you
38:00 - 38:30 want to use any other identity provider, you can also integrate with that. and in this video I am going to integrate the IAM identity provider. the reason is that if you don't integrate the IAM identity provider, let's say you have created a Kubernetes pod, okay, and this pod wants to talk to an S3 bucket, or this pod wants to talk to the EKS control plane, or this pod wants to talk to any other AWS service, like CloudWatch; if you are not integrating the IAM
38:30 - 39:00 identity provider, how will you give access to this pod, right? usually in AWS, what we do is, whenever an AWS service wants to talk to another AWS service, or whenever there is a resource in AWS, say you have created an EC2 instance, you have created an application on EC2; if this application wants to talk to an S3 bucket, we need an IAM role, right? similarly, whenever you create Kubernetes pods, you can attach, or you can integrate,
39:00 - 39:30 those IAM roles with your Kubernetes service accounts, so that you can talk to any other AWS services. and then if you scroll down you have other options here. if you scroll up, the resources I've just explained; then there is compute: in our case we are using Fargate, if you see here, so the Fargate instances are available here. one thing is we are not using EC2 instances or anything; if we were using EC2 instances, you could see the EC2
39:30 - 40:00 instances, or you can create node groups as well; right now, if I create node groups, I can attach some EC2 instances. then there is a Fargate profile; this is very, very important. what this Fargate profile says is: right now the Fargate profile is attached to the default and kube-system namespaces; that means you can only deploy pods onto these two namespaces. if you want to deploy pods to any other namespaces, you have to add an additional Fargate profile. I am going to
40:00 - 40:30 show you, because in the video I am going to use a different namespace and I am going to deploy into a different namespace as well; there you can see how to create a Fargate profile and how to attach that to this cluster, or to these Fargate instances. perfect. then networking, we will not change anything; and authentication, like I told you, you can attach any identity providers here; logging, if you want to enable the control plane logging, let's say you
40:30 - 41:00 want to log all the API server requests, then you can click on manage logging, enable the API server logging and click on save changes. but for now I don't need anything, so let me keep it as is. the next thing that I will do is I want to download the kubeconfig file, so that instead of always going through this resources tab and verifying from here (because we are all DevOps engineers and we are very used to the kubectl command line for Kubernetes), let me get that kubectl command line. right, so to get
41:00 - 41:30 that kubectl command line, you remember we installed the AWS CLI, right, so let me run this command called update-kubeconfig. okay, so this is the command, and what does this command do: "aws eks update-kubeconfig", with the name of the cluster; in this case the name of the cluster is demo-cluster-1, and the region is us-east-1.
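The kubeconfig step described above can be sketched as:

```shell
# Merge the new cluster's credentials into ~/.kube/config so kubectl works
aws eks update-kubeconfig --name demo-cluster-1 --region us-east-1

# Sanity check: kubectl should now point at the new cluster
kubectl config current-context
```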
41:30 - 42:00 hit enter; it will again take some time. okay, perfect, so let's proceed with the deployment of the actual application. let's go step by step: first of all I'll deploy the 2048 application's pod using a deployment, okay. so for that, what I am going to do is let me go to my GitHub. perfect, so this is the first command that I am going to run. so this
42:00 - 42:30 command says, let me show you from where I am copying it: I am using the same GitHub repository, I'm not changing anything; I just copied this command from the GitHub repository. and the first thing that I'm doing here is I am creating a Fargate profile, like I mentioned. the reason why I'm creating the Fargate profile is, if you see here, I am attaching the namespace called game-2048. I could deploy it in the default namespace
42:30 - 43:00 as well, but I just want to cover one additional topic; that's the reason why I am creating a Fargate profile, there is no other reason; you can create it in your default namespace as well. so: demo-cluster-1, us-east-1, alb-sample-app (I am providing that as the name of the sample application), and this is the namespace where I am deploying game-2048. so hit enter; now this will create a Fargate profile, and I will show you on the console that the
43:00 - 43:30 Fargate profile is getting created. see what it is saying: creating Fargate profile alb-sample-app on the EKS cluster (this is the name of the cluster). so once it is created, if you go to the compute section and scroll down, you should see another Fargate profile; along with the default and kube-system namespaces, you should also see the game-2048 namespace. let's wait for a second and see if that is getting created.
43:30 - 44:00 perfect, now the Fargate profile is also created for us. let's go to the console and see; let me refresh this page. see here, there is another profile, and the namespace here is game-2048, right. so now what's happening is I can create pods in both of those namespaces and also this namespace. now you might be thinking: Abhishek, why do I have to do this for Fargate? so this is a simple concept:
44:00 - 44:30 if you are using bare-metal Kubernetes, or Kubernetes that you have created yourself, there are concepts like pod affinity, node affinity and anti-affinity, right; whereas with Fargate you have this concept of Fargate profiles, which is very unique to Fargate, and every time you want to deploy to any new namespace on Fargate you have to do this, right; whereas if you are using EC2 instances you can avoid this step, but that's a different process. cool, now I have the Fargate profile as
44:30 - 45:00 well, so let me take my application from the GitHub. right, now what I'm doing again is I am going to this 2048-app-deploy-ingress.md, and I am copying this command from here. okay, so now what will happen is this specific command, "kubectl apply -f" followed by this file, right. now what is in this file? this file has all the configuration related to the
45:00 - 45:30 deployment, service and Ingress. you can also read the contents of this file: take this file and open it in a browser, see what exactly it is trying to do. let me increase the font. so it is creating a namespace called game-2048; previously we created the Fargate profile and we allowed deployments onto the namespace, but we don't have the namespace itself, so I am creating the namespace called game-2048. then this is the deployment, right, for the 2048
45:30 - 46:00 application; then this is the service that I'm using. again, if we go step by step: in the deployment, all that you need to mention is the pod specification, right, and as part of the pod specification you see that this is the container image; so you can use the same container image to deploy the same 2048 application that I'm using. and the replica count I have mentioned as five, just to make sure that if there are multiple requests, this will still be able to handle them. now, I have taken this example from the EKS official
46:00 - 46:30 documentation, so you can also use it as is. after that, this is the service that we are using. what the service does: you need to check the service for two things. one is you have to make sure that the target port is the container port of your pod, and the second thing is you have proper labels and selectors, right. so if you see here, the selector is "app.kubernetes.io/name: app-2048", right, and this selector should
46:30 - 47:00 match this specific thing here, right, sorry, this specific thing here: the labels, "app.kubernetes.io/name: app-2048". so this way, what is happening is my service will be able to discover the pods; using this label, using the concept of labels and selectors, the service will be able to discover the pods. perfect. and after that I have created an Ingress. so like I told you, what we are trying to do with Ingress: we are trying
47:00 - 47:30 to route the traffic inside the cluster. so I have a couple of annotations here; don't deep dive into these things at this point of time. first of all let's try to do the demo, and once you get a gist of it we can go ahead. but if you see here, like I told you in the theory part, what you will do with Ingress is you will just mention that if someone is trying to access the load balancer; and the ingress class name here is alb, that means we are going to use the Application Load Balancer Ingress controller. it will read the Ingress resource, and
47:30 - 48:00 whenever it finds matching rules it will forward the request to the service called service-2048. right, what is service-2048? this one. and where is this service-2048 forwarding the request to? to this one, available in this namespace, right. this is the entire concept of this YAML file; I just wanted to explain it so that you also understand what exactly is happening behind the scenes. hit enter; now each and every resource will be created
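Putting this stretch together, the two commands run here were the Fargate profile creation and the manifest apply, followed by a quick check. A sketch, assuming the 2048 example manifest from the aws-load-balancer-controller project that the EKS docs use; the version tag in the URL is an assumption and may differ from the one in the repository:

```shell
# 1. Fargate profile scoped to the game-2048 namespace; on Fargate, pods in any
#    other namespace (besides default/kube-system) would stay Pending
eksctl create fargateprofile \
    --cluster demo-cluster-1 \
    --region us-east-1 \
    --name alb-sample-app \
    --namespace game-2048

# 2. Deploy the sample app: namespace, deployment, service and ingress in one file
#    (version tag v2.5.4 is illustrative; check the repo for the current one)
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/examples/2048/2048_full.yaml

# 3. See what got created
kubectl get pods,svc,ingress -n game-2048
```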
48:00 - 48:30 but keep in mind that we are only creating the pod deployment, service and Ingress; right now there is no Ingress controller, which means there is nothing on the cluster that can understand this resource. I haven't deployed the Ingress controller, so without an Ingress controller I can say that this Ingress resource is a useless resource, because nothing will happen if you just deploy this. so let's see if I am right or wrong. I'll just run kubectl get pods -n
48:30 - 49:00 game-2048. so the pods are in the pending state; it will just take some time. but once the pods are created, let's watch the pods: see, one of the containers is in the running state and the rest are all getting created. I think by now everything is created. perfect, all the pods are in the running
49:00 - 49:30 state, and now similarly you can check the service: kubectl get svc -n game-2048. perfect, the service is also there, and watch carefully: the service has a cluster IP, the type is NodePort, but there is no external IP; that means anybody within the AWS VPC, or anybody who has access to the VPC, can talk to this pod using
49:30 - 50:00 the node IP address followed by the port. but our goal is to make someone outside AWS, someone who is your user or customer, able to access this thing. so for that, what we have done is we have created an Ingress, right: kubectl get ingress -n game-2048. so if you see here, the Ingress is created; it is saying this is my Ingress, this is the
50:00 - 50:30 class, host can be anything; I mean, you know, anybody who is trying to access through the Application Load Balancer, that is fine. there is a port, but there is no address; that means there has to be an Ingress controller. once we deploy the Ingress controller you will see the address here. why is this address useful? you have to access this specific address to access the application from the outside world, right. so the address is not created because there is no load balancer, because there is no Ingress
50:30 - 51:00 controller. now what we are going to do next is we are going to create an Ingress controller. that Ingress controller will read this Ingress resource, called ingress-2048, and it will create a load balancer for us. it will not just create the load balancer, it will configure the entire load balancer. now what do I mean by configuring the load balancer? creating the load balancer is fine, but just creating an ALB will do nothing; inside the ALB you have to configure
51:00 - 51:30 what the target group is, you have to configure on what port it should access the pods; everything is created and taken care of by the Ingress controller itself. all the Ingress controller needs is an Ingress resource. perfect. so to deploy the Ingress controller, in our case to deploy the ALB Ingress controller, or just the ALB controller, because we want to use an Application Load Balancer, the first thing that you will do is again go to my
51:30 - 52:00 GitHub. here (the reason why I'm showing the document multiple times is because I want everyone to try it, because this is a very, very important thing), there is one option here called configure OIDC connector. this is a prerequisite before installing, or before going to this page called alb-controller-add-on, right. you can directly do that, but it will fail without the OIDC connector. like I told you, the
52:00 - 52:30 reason why we need the IAM OIDC connector, or IAM OIDC provider, is because the ALB controller which is running needs to access the Application Load Balancer, right. this ALB controller, well, a controller is nothing but a Kubernetes pod again, so this Kubernetes pod needs to talk to some AWS resources, and to talk to AWS resources it needs to have IAM integrated; so that's why we need to create this IAM OIDC provider
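The provider association described here is a single eksctl command, per the eksctl documentation:

```shell
# Associate an IAM OIDC identity provider with the cluster so that
# Kubernetes service accounts can assume IAM roles (IRSA)
eksctl utils associate-iam-oidc-provider \
    --cluster demo-cluster-1 \
    --approve
```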
52:30 - 53:00 now this is done in every organization, and every organization, usually, if they're on AWS, prefers to use the IAM OIDC provider itself, because they might have spent a lot of time creating everything for their users. so copy this command that is in the document: "eksctl utils associate-iam-oidc-provider". if you think that you have seen this somewhere: yeah, you have seen it when I showed you the cluster, right. this is the cluster; if you go to the authentication tab
53:00 - 53:30 you see this thing called OIDC identity providers, and here you have "associate identity provider". that's what I'm doing from the command line: I'm just associating the IAM identity provider. change the cluster name to demo-cluster-1, --approve, click on this button; now the identity provider, the IAM OIDC provider, will be integrated. perfect, this is also done. now let's head to the final step, that is, go
53:30 - 54:00 to the document called alb-controller-add-on.md. so this is the document that I'm talking about; these will be your final steps, where you will use this specific thing, in my day-22 alb-controller-add-on. so go step by step here. the first thing that you will do is install the ALB controller, right. the ALB controller: in Kubernetes, any controller is just a pod,
54:00 - 54:30 okay, but for this pod, what you are trying to do additionally is to grant this pod access to AWS services such as the ALB, right, because this ALB Ingress controller should create an Application Load Balancer for us, and for that it has to talk to the AWS APIs. so firstly let's create an IAM policy for this one, and we will create an IAM role as well. so the IAM
54:30 - 55:00 policy: I have already created this JSON, so you can create this JSON as is. so here I have created an IAM policy for it. if you want to take a look at the policy, what's inside it, just copy it, go to your browser, and within the browser enter this; you will see the exact JSON, what I am trying to do. so this is a very big JSON and you need to grant all of these permissions,
55:00 - 55:30 right, perfect. now the next thing that you will do: don't worry, this is a standard process, there is nothing that you have to change here. you might be thinking: Abhishek, you just gave me this one and you are just entering the command, how will I learn about it? no, you don't need to learn anything here, because this is provided by the ALB controller itself; you can get this from the ALB controller documentation, and if there is any change they will provide you the changes, okay. so I have taken it from the documentation itself; I am just keeping
55:30 - 56:00 everything in one place in my GitHub repository. then create the IAM policy using this command; all of these things are required for the ALB controller creation. did I miss something here? let's see: "aws iam create-policy", and this is the policy name. it says that, okay, the entity already exists, so there is a policy that already exists in my account. let me enter this command again and show
56:00 - 56:30 you what has happened here: "when calling the create policy operation, a policy called AWSLoadBalancerControllerIAMPolicy already exists". okay, to be fair, what I'll do is I'll go ahead and delete it, because, let's say you run into this error, you might feel that, Abhishek, you already had it, so that's why it is successful on your cluster. so let me go ahead and delete it, so that
56:30 - 57:00 we all will be on the same page. go to the policies, click on this one, provide the name of the policy, right, this is the one, and delete the policy: search for it, select it, actions, delete. perfect, the policy is deleted, so let me create it one more time
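For reference, the policy step here follows the controller's install docs: download the policy document, then create the policy. The version tag in the URL is an assumption and may differ from the current release:

```shell
# Download the controller's IAM policy document
# (version tag v2.5.4 is illustrative; check the project's install docs)
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json

# Create the IAM policy from it
aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json
```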
57:00 - 57:30 now I have created the policy, right. the final thing that we'll do is we'll create the role. I think the role will also already be available, so let me go ahead and delete the role as well, because I tried this previously, right; every time, before I show you something, I will ensure that it is working, and I will ensure twice or thrice that everything is working fine, because I want anybody who is following these videos to be able to perform the demos.
57:30 - 58:00 perfect, let me delete this one as well. and this is a very, very important thing for your resume; for anybody who wants to show proper experience with Kubernetes, this will definitely help. now the final thing is I am creating the role; I mean, I'm attaching this role to the service account of the pod, right, that's the whole purpose. like I told you, whenever the pod is running, the pod will have a service account, and for that service account you need,
58:00 - 58:30 basically, the role attached, so that it can integrate with other AWS resources. so here I have to modify my AWS account ID; whenever you are copying from my GitHub, make sure that you replace it, and that's why I have not provided any account ID there. so, perfect, this is my account ID, and then what I'll do is, what is my cluster name? modify that as well: demo-cluster-1
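The service-account creation being composed here follows eksctl's documented syntax; `<your-aws-account-id>` is a placeholder you must replace, and the role name is the one conventionally used in the controller's install guide:

```shell
# Create the kube-system service account and bind it to a new IAM role that
# carries the policy created earlier (IRSA)
eksctl create iamserviceaccount \
  --cluster=demo-cluster-1 \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --role-name AmazonEKSLoadBalancerControllerRole \
  --attach-policy-arn=arn:aws:iam::<your-aws-account-id>:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve
```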
58:30 - 59:00 perfect, I hope this will be created, fingers crossed. the role is getting created, I mean the IAM service account is getting created, and it is also getting attached to the role. so now we will use this same service account in the application, right. this is how you do it in your organization: basically you create some service accounts, or developers create some service accounts depending upon the
59:00 - 59:30 requirements that they have. let's say the developers want to talk to RDS; what they'll do is they'll create a service account and they'll ask you to create a role with specific permissions. see here, it said that one error occurred and the IAM role stacks haven't been created properly; you may have to check the CloudFormation console. okay, let's try it one more time; sometimes you will face such errors, don't worry, we will try it one more time.
59:30 - 60:00 see, now it got created; "service accounts that exist will be excluded", meaning an existing service account, okay. let's see if there is something that goes wrong; I can fix it during the live demo. so I hope everything is correct, but let's see; if something goes wrong there, I can fix it. now let's try to proceed with the
60:00 - 60:30 creation of that Application Load Balancer that we have been talking about, sorry, the ALB controller that we have been talking about. so for that we are going to use a Helm chart, and this Helm chart will create the actual controller, and it will use this service account for running the pod. okay, it already exists on my account, that's fine; then let's see if there are any updates to the Helm chart. perfect, there are no updates, so now let's install the Helm chart
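The Helm install described here can be sketched as below, per the eks-charts documentation; `<your-vpc-id>` is a placeholder for the value discussed next, and the chart reuses the service account created earlier instead of making its own:

```shell
# Add the eks-charts repository and install the AWS Load Balancer Controller
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=demo-cluster-1 \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=us-east-1 \
  --set vpcId=<your-vpc-id>
```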
60:30 - 61:00 so again, while installing, you have to modify a couple of things: one is your VPC ID. how do you get the VPC ID? just go to your demo cluster, and within the demo cluster, if you go to the overview, it should be somewhere, I think in the networking tab. perfect, so this is the VPC; just copy paste it. then the region: you know what the region is, us-east-1 in my case.
61:00 - 61:30 then provide the cluster name: demo-cluster-1. there is some whitespace here; I think it is okay. yeah, we can remove the space. perfect, hit enter. "expected at most two arguments,
61:30 - 62:00 unexpected arguments": okay, let us see what went wrong there. okay, where is it actually going wrong? why does it say that? let me try to add it here, let me try this.
62:00 - 62:30 yeah, I think there is some whitespace issue. I'll try to update this command in my GitHub page as well, so that, you know, you don't run into this issue. so now here I'm using the Helm chart to install this AWS Load Balancer Controller. again, this will take some time, not much time; yeah, see here, the AWS Load Balancer Controller is installed. now one final check that you have to do is verify that this load balancer controller is
62:30 - 63:00 created and there are at least two replicas of it okay see here till now the ready state is 0 of 2 when it becomes 2 of 2 that means your load balancer controller is working fine okay because what this load balancer controller deployment does is it will create two replicas one is one in each availability Zone it will continuously watch for like I told you Ingress resources and it will create the ALB resources in two availability
63:00 - 63:30 zones Okay so let's try it not yet we can keep this in the watch state so that if there is any update we will come to know yeah this will take one or two minutes for both of the replicas to be up and running meanwhile you can also do this Cube CTL get pods minus a
63:30 - 64:00 sorry minus this is created in the cube system namespace okay just watch for it okay not yet getting created let's wait for a minute
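The verification being described can be sketched as below (standard kubectl usage; the deployment name `aws-load-balancer-controller` is the chart's default release name assumed from context):

```shell
# Check the controller deployment; READY should eventually show 2/2,
# one replica per availability zone.
kubectl get deployment aws-load-balancer-controller -n kube-system

# Or watch the pods in kube-system until both replicas are Running.
kubectl get pods -n kube-system -w
```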
64:00 - 64:30 Okay, I waited for some time and the replicas are still 0 of 2. If you see here, it has been three minutes and the replicas are still 0 of 2. So now let's troubleshoot and fix what is going wrong. How do I do that? You can do kubectl edit deploy. Now, in your case this is not needed; I know why this is happening on my cluster. If you follow the same commands you will not run into this issue, because I did the demo before and there are some stale
64:30 - 65:00 resources; that's why I think I'm running into it. So let me copy and paste this one, I'm trying to debug it. Let me go to the kube-system namespace here. If you go to the status field, usually you will see what the error is, or else you can also do a describe. So what does it say? Pods "aws-load-balancer-controller" is forbidden: error looking up service account kube-
65:00 - 65:30 system/aws-load-balancer-controller. If you remember, when we were creating the service account, we saw the issue, right? So this was the service account command that we used, and when we tried to create the... okay, not this one... service account, yes, so this was the service account command that we tried to use, and when we were creating it, it gave an error that, you
65:30 - 66:00 know, there is a CloudFormation stack that is already available, and that's why the service account was not getting created. So let's try to fix it now. We need this service account; only once it is created will we solve that error, right? So let's try to fix it and redeploy the deployment. It says "serviceaccounts that exist in Kubernetes will be excluded, use --override-existing-serviceaccounts", so I'm overriding the service account. Like I told you, in your case this might not be required
66:00 - 66:30 because I was using the same cluster for the demo; that's why we get this "metadata of serviceaccounts that exist in Kubernetes will be updated". So let's see, now that we have overridden it, whether it works; if not, I'll go to the CloudFormation templates, delete the stack, and fix it back. Troubleshooting is very common when we do demos, and that is also good, right? Okay, so the service account got
66:30 - 67:00 created here. The only thing that I had to modify: there was an error with the creation of the service account. What I did was update the name of the service account. In your case you don't have to update it, but in my case there was a previously existing one, so I went to the CloudFormation stack that was creating this service account and deleted that stack. To be on the safer side, I have also updated the name of the service account, because sometimes it takes some time, so
67:00 - 67:30 I have updated it, but in your case it is not required. Now let's proceed with the next one, this one here, right? So we had created the deployment, but, you know, because the service account was not proper, that got deleted. So let's go with the creation of the deployment one more time. Here I will update the name of the service account as well. Again, repeating:
67:30 - 68:00 in your case it is not required. Let's say for some reason you also land in this issue, because of some network issues or something. How will you debug it? Just go to the CloudFormation stacks and delete the stack that was creating the service account, because sometimes this can also happen due to intermittent network issues. Now let me try my luck. I'm hoping that... "cannot reuse a name that is still in use". Okay, that is because of the Helm release; let me just
68:00 - 68:30 delete the Helm release with helm delete. This is the advantage of a demo, right? Whenever I run into issues, I will fix them and you can also see it live. Now let's rerun the command. Fingers crossed, hoping this works fine now.
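The recovery sequence walked through above can be sketched as the following three steps. This is a hedged reconstruction: the eksctl flags follow the documented `eksctl create iamserviceaccount` interface, `AWSLoadBalancerControllerIAMPolicy` is the policy name used in AWS's controller setup guide, and `<account-id>` is a placeholder for your AWS account ID. (Deleting the stale eksctl CloudFormation stack, as done in the video, can also be performed from the CloudFormation console.)

```shell
# 1. Recreate the IRSA service account; --override-existing-serviceaccounts
#    tells eksctl to update a service account that already exists in the cluster.
eksctl create iamserviceaccount \
  --cluster=demo-cluster-1 \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --attach-policy-arn=arn:aws:iam::<account-id>:policy/AWSLoadBalancerControllerIAMPolicy \
  --override-existing-serviceaccounts \
  --approve

# 2. Remove the failed release so Helm no longer reports
#    "cannot reuse a name that is still in use".
helm uninstall aws-load-balancer-controller -n kube-system

# 3. Re-run the helm install command from earlier.
```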
68:30 - 69:00 Perfect, now the AWS Load Balancer Controller is installed, but the main thing is that the pods have to be in the Running state: kubectl get deploy -n kube-system. The available replicas have to be 2 of 2; till then we need to wait. So let me check again in the edit view whether there is some error: kubectl edit deploy, followed by -n kube-system.
69:00 - 69:30 Okay, at least now there is no error so far, so I think it is getting created; it will take some time. Okay, still not created... perfect, now it got created. If you go to the deployments as well, you will see 2 of
69:30 - 70:00 2 replicas. Perfect, so this is the proper state of the controller. Not the Application Load Balancer itself; this is the AWS Load Balancer Controller, also called the application load balancer controller. Now let us see whether this AWS Load Balancer Controller has created an ALB or not. That means, if we go to EC2 and look inside Application Load Balancers, we should see a load balancer. Perfect, I am seeing a load balancer, and this load balancer
70:00 - 70:30 must have just been created. See, it was just created: August 4th, the time there is 1:57 and here it is 1:56, so that means it was just created. And what is the name? k8s-game2048. And how was this created? This is the whole logic you need to understand. Who created this load balancer? This load balancer controller created it. How did it create it? Because we submitted an Ingress resource: kubectl get ingress -n kube-system,
70:30 - 71:00 so here, sorry, it should be -n game-2048, right? That was in the game-2048 namespace, and for this Ingress resource we got the address here. So what exactly is this address? This address is the load balancer that the Ingress controller created by watching this Ingress resource, right? So I have
71:00 - 71:30 previously also explained that the Ingress controller will watch for the Ingress resource, take the configuration provided in the Ingress resource, and create a load balancer. Now let us see if it is correct or not. So if we go to the browser here, and if you go to the load balancer section, you will find a load balancer. Let me increase the font a bit. So see, this is a load balancer, and if you click on the load balancer that got
71:30 - 72:00 created, it will have the same address. So if I copy and paste this one and try to compare it here, see, this is the same specific address, right? This is the same thing that is available in the address section. That means the Ingress controller has read the Ingress resource and created this load balancer. Now you need to wait till this load balancer's state is active. This took a couple of minutes
72:00 - 72:30 for me; previously it took three to four minutes, this time it took only two minutes, but in your case be patient and wait till the status turns into the active state. Meanwhile, you can also verify the other configurations available here: for example, on which port it is listening, right, and what the security groups are. Everything is configured by the Ingress controller itself. Just by watching this you will get a little bit more information, but wait for the status to be active.
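The same checks can be done from the CLI instead of the console. A sketch, assuming the `game-2048` namespace from the demo and an AWS CLI configured for the cluster's region:

```shell
# The ADDRESS column of the Ingress should match the ALB's DNS name
# shown in the EC2 console.
kubectl get ingress -n game-2048

# List ALBs with their provisioning state; wait until State shows "active".
aws elbv2 describe-load-balancers \
  --query 'LoadBalancers[].{Name:LoadBalancerName,DNS:DNSName,State:State.Code}' \
  --output table
```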
72:30 - 73:00 Now, once the status is active, what I'll do is just copy this URL, open a new tab, and as soon as I click the Enter button... oh sorry, http://... oh sorry, this one, right, copy it and remove this slash. Perfect, so this is the 2048 game, and now you can play this game, and let me know in the comment section who has got the
73:00 - 73:30 highest score. Just kidding, but yeah, this is what we deployed, and we are accessing the load balancer, and through the load balancer we are getting the request, right? Now I think you have understood how this entire load balancer is created and how this usually works in an organization. For each microservice, it is the responsibility of the DevOps engineer to just write the pod, I mean, if they are using a deployment, they'll write the deployment.yaml, service.yaml and ingress.yaml, and it is your one-time
73:30 - 74:00 responsibility to create the Ingress controller, and the rest is taken care of by the Ingress controller. So that is what we have seen in today's video. Because it is an EKS cluster, the configuration of the Ingress controller is a little bit tricky: we have to create a service account and we have to attach that service account to an IAM role. Otherwise, on an on-premises Kubernetes cluster, this much is not required; you can just assign the service account proper RBAC. Anyway, this video is to
74:00 - 74:30 explain things through EKS, so I hope you found it useful. Please let me know in the comment section what your take on this one is. Are you going to try it, and are you going to share it with your friends and colleagues? This is a very good addition to your resume. Thank you so much again for watching this video. Take care, everyone. See you all in the next video. Bye bye.
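As a closing reference, the ingress.yaml that the DevOps engineer writes per microservice looks roughly like the one below. This sketch follows AWS's public 2048 sample manifest; the `ingress-2048` and `service-2048` names are from that sample and may differ from the exact manifest used in this demo:

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-2048
  namespace: game-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # public-facing ALB
    alb.ingress.kubernetes.io/target-type: ip           # route straight to pod IPs
spec:
  ingressClassName: alb    # picked up by the AWS Load Balancer Controller
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-2048
                port:
                  number: 80
EOF
```

Submitting this resource is what the controller watches for; it then provisions the ALB, listeners, and security groups on your behalf.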