Google Cloud Platform Full Course | GCP Tutorial | Google Cloud Training | Edureka
Estimated read time: 1:20
Summary
The Edureka video titled 'Google Cloud Platform Full Course' provides an extensive overview of Google Cloud's suite of services, emphasizing the technical dynamics of cloud computing. It covers detailed modules on Google Cloud's infrastructure, services like Compute Engine, App Engine, Kubernetes Engine, and Cloud Databases, alongside practical demos. It explores operational functionalities, strategic service deployment, pricing models, and offers insights into career roles within the GCP framework, targeting both newcomers and seasoned architects.
Highlights
Insightful analysis of Google Cloud's market growth and competitive edge 📈.
Step-by-step guidance on using Google Cloud's various services and tools ⏩.
In-depth modules covering Compute Engine, App Engine, and beyond 🏗️.
Real-world applications and case studies demonstrating GCP's capabilities 🔍.
Comprehensive guide to GCP certifications and career paths 🎓.
Key Takeaways
Google Cloud's extensive service offerings make it a versatile choice for businesses and developers 🌐.
The platform offers cost-effective pricing and a range of services that cater to different needs 💰.
Modules range from basic introductions to advanced cloud management techniques 📚.
Emphasis on hands-on practice and understanding GCP's infrastructure for real-world applications 🛠️.
Career insights into becoming a Google Cloud Architect and exploring certification benefits 🏆.
Overview
In the realm of cloud computing, Google Cloud Platform stands out with its broad array of services and user-friendly pricing. This tutorial by Edureka dives deep into the cloud ecosystem and into how businesses and careers can transition into this digital transformation.
The video course is structured meticulously into modules that detail Google Cloud's extensive service offerings, such as Google Compute Engine, App Engine, BigQuery, and others. It provides practical knowledge through step-by-step demos, making complex concepts more approachable.
For those looking to specialize or certify, the course sheds light on Google Cloud certifications, offering guidance on the best practices and strategies to excel as a Google Cloud professional, making it an invaluable resource for aspiring IT professionals.
Google Cloud Platform Full Course | GCP Tutorial | Google Cloud Training | Edureka Transcription
00:00 - 00:30 In the last decade we have seen the rise of cloud computing and how it has taken the technical world by storm. Thus we see many companies wanting to move their business to the cloud, and many individuals wanting to make a career in this domain. That is why many cloud service providers are coming up and providing wonderful cloud computing services to many organizations and many
00:30 - 01:00 people. One such cloud service provider is Google Cloud Platform, which is also the home of popular services like YouTube and the Google search engine. Hello all, I welcome you to this session, where we are going to discuss Google Cloud Platform in detail. So without any further ado, let's take a look at today's agenda. In the first module we are going to have an introduction to Google Cloud Platform, where we will first understand what cloud computing is, then what Google Cloud Platform is, and finally why one
01:00 - 01:30 should go for Google Cloud Platform. In the second module we are going to see the differences between the major cloud service providers; the three major ones are AWS, Azure, and GCP, and we will understand the comparison between them. In the third module we are going to understand the concepts of Google Cloud Platform, covering the global infrastructure of GCP along with an overview of GCP products and services. Then in the fourth module, which is Google Cloud Compute
01:30 - 02:00 Engine, we are going to first understand what infrastructure as a service is, and we will also have an introduction to the features and applications of Google Compute Engine. In the fifth module, which is Google Cloud App Engine, we are going to have an overview of Google App Engine, get an architectural understanding of App Engine, and also implement an app through Google App Engine. Then in module six, which is Google Cloud Anthos, we are going to have an overview of Google Cloud Anthos as well as understand the features and benefits of Google Cloud Anthos.
02:00 - 02:30 Then in module 7, which is GCP networking, we are going to understand what Google Cloud VPC, a virtual private cloud, is, and we will have an overview of GCP networking concepts. In the eighth module, which is GCP database services, we are going to understand the types of GCP databases and their services, and also see the deployment techniques for GCP databases. Then in module 9, which is Google Bigtable, we are going to have an overview of it and understand the architecture and data model of Google Bigtable. Then in the
02:30 - 03:00 tenth module, which is Google BigQuery, we are going to have an overview of Google BigQuery and also go through the BigQuery architecture and an explanation of BigQuery storage. Then in module 11, which is Google Kubernetes Engine, we will have an overview of Google Kubernetes Engine and also deploy a containerized web app on GCP. Then in module 12, which is GCP Terraform, we are going to have an overview of Terraform, look at the tools for Terraform in GCP, and also the support provided by Terraform for GCP. Then in module 13, which
03:00 - 03:30 is GCP security services, we are going to get an understanding of Cloud Security Command Center and also Cloud Armor. Then in module 14, which is the GCP identity and access management service, we are going to understand the working concepts of Google Cloud Platform identity and access management and also security enforcement through IAM. Then in module 15, which is Google Cloud AI Platform, we are going to understand the AI building blocks in Google Cloud Platform and also
03:30 - 04:00 the AI solutions in Google Cloud Platform. Then in module 16, which is billing in GCP, we are going to understand billing accounts and payments profiles in GCP, and also the charging cycle and billing concepts in Google Cloud Platform. Then in module 17, which is GCP best practices, we are going to see the best practices for different products and services in Google Cloud Platform. Then in module 18, which is Google Cloud certification, we are going to understand the different types of Google Cloud certifications and also the major role-based
04:00 - 04:30 certifications. Then in module 19, which is how to become a Google Cloud architect, we are going to understand the GCP architect career growth, jobs, salary, and future scope. We will also understand the roles and responsibilities of a GCP architect, as well as how to crack the GCP architect exam. Then finally in module 20, which is GCP interview questions, we will go through GCP interview questions and answers and also the skills required to become a cloud
04:30 - 05:00 engineer. With this, I come to the end of my agenda. Before we begin, do consider subscribing to my YouTube channel and hit the bell icon to stay updated on trending technologies, and if you are interested in online training and certification in Google Cloud Platform, check out the link given in the description box below. So, what is cloud computing? Well, cloud computing is the on-demand availability of computer system resources like
05:00 - 05:30 storage, databases, networking, software, and many more, and all of these resources are provided over the internet. Now you might be guessing why people opt for cloud computing, right? Well, you see, cloud computing offers faster innovation and flexible resources, and the best part is you typically pay only for the services you use. This in turn helps in lowering your operating costs and ensures your infrastructure runs more efficiently. Now
05:30 - 06:00 you might be wondering how many cloud service providers there are, right? Well, this may come to you as a surprise: there aren't a few but a lot of them. Some of the major cloud service providers in the market are Amazon's AWS, known as Amazon Web Services, with a market share of 31%, the largest market share of any cloud service provider; second is Microsoft Azure with a market share of 20%; and third is Google Cloud Platform with a market share of 7%. But if you look at the last 3 years,
06:00 - 06:30 Google Cloud Platform has had the highest growth in the market, which is 58%. You can also see some other popular cloud service providers here, like Alibaba Cloud, which is a Chinese cloud computing company and a subsidiary of Alibaba Group; it is the largest cloud computing company in China. Similarly there are many others like Huawei, IBM, VMware, and phoenixNAP. Now that you know what cloud computing is, let's take a look at some of its popular applications. First we see cloud for IoT, which is a massive network that
06:30 - 07:00 supports IoT devices and applications. This includes the infrastructure, servers, and storage needed for real-time operations and processing. An IoT cloud also includes the services and standards necessary for connecting, managing, and securing different IoT devices and applications. Moving on to cloud for machine learning: artificial intelligence and machine learning are steadily making their way into enterprise applications in areas such as customer support, fraud detection, and business intelligence,
07:00 - 07:30 so there is every reason to believe that much of it will happen in the cloud. The top cloud computing platforms are betting big on democratizing artificial intelligence. Over the past 3 years, Amazon, Google, and Microsoft have made significant investments in artificial intelligence and machine learning, from rolling out new services to carrying out major reorganizations that place AI strategically in the organizational structure. Google CEO Sundar Pichai has even said that his company is shifting to an AI-first world. Now let's see the next application, that is disaster
07:30 - 08:00 recovery. As we all know, data is the most valuable asset of modern-day organizations, so its loss can result in irreversible damage to your business, including the loss of productivity, revenue, reputation, and even customers. It is hard to predict when a disaster will occur and how serious its impact will be. So what disaster recovery in cloud computing does is store critical data and applications in cloud storage and fail over to a
08:00 - 08:30 secondary site in case of a disaster. Cloud computing services can be accessed from anywhere at any time, and backup and disaster recovery in the cloud can be automated, requiring minimal input on your part. So now let's move ahead to understand thoroughly what Google Cloud Platform is. Offered by Google, it is a suite of cloud computing services that runs on the same infrastructure that Google uses internally for its end-user products, such as Google Search, Gmail, file storage, and YouTube. Along with a set of management tools, it
08:30 - 09:00 provides a series of modular cloud services, including computing, data storage, data analytics, and machine learning. For organizations with large amounts of data to store or analyze, Google Cloud Storage prices are up to 20% cheaper than AWS, and the price of database services also compares favorably, while there's no difference in the price of container services. Google Cloud is an industry leader in the field and is also investing heavily in AI and machine learning technologies. Many
09:00 - 09:30 small and large enterprises are increasingly adopting Google Cloud Platform, since it simplifies things and makes them more secure at a reasonable cost. So now let's move on to understand why we should prefer Google Cloud Platform. Now that you have a brief idea of what cloud computing is and what Google Cloud Platform is, let's understand why one must go for it. We all know how big the databases of Gmail, YouTube, and Google Search are, and I don't think in recent years Google's servers have gone down; it's
09:30 - 10:00 actually one of the biggest infrastructures in the world, so it seems an obvious choice to trust them, right? So now let's take a look at some of the features of Google Cloud Platform that really give it an upper hand over other vendors. The first one is that it has better pricing than its competitors. Second, it is highly scalable, and it uses autoscaling to automatically adjust the number of virtual machine instances that are hosting your application; this allows your application to adapt to varying amounts of traffic.
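To make that autoscaling idea concrete, here is a toy sketch of the target-utilization rule such an autoscaler applies. This is an illustration only, not the Compute Engine API; the 60% target and the instance counts are assumed numbers.

```python
# Toy sketch of a target-utilization autoscaling rule (illustration only,
# not the Compute Engine API). It sizes the fleet so that average CPU
# utilization moves back toward the chosen target.
import math

def desired_instances(current: int, observed_cpu: float,
                      target_cpu: float = 0.6) -> int:
    """Return how many instances keep average CPU near target_cpu."""
    return max(1, math.ceil(current * observed_cpu / target_cpu))

print(desired_instances(4, 0.90))  # 6 -> scale out under heavy traffic
print(desired_instances(4, 0.15))  # 1 -> scale in when traffic drops
```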
10:00 - 10:30 Now coming to the third one: with custom machine types you can create Compute Engine virtual machines with optimal amounts of virtual CPU and memory. Next is Google Cloud IoT Core, which is a fully managed service to easily and securely connect, manage, and ingest data from globally dispersed devices. Then Google Cloud APIs allow you to automate your workflows by using your favorite language; this Google Cloud API ecosystem consists of the compute API, storage API, big data analytics API, networking API, and several
10:30 - 11:00 others. Moving on to the sixth one, big data analytics: Google Cloud's smart analytics solutions are fully managed. This multi-cloud analytics platform empowers everyone to get insights while eliminating the constraints of scale, performance, and cost; big data analytics uses real-time insights and data apps to drive decisions and innovation with cloud AI. And GCP Cloud Functions is the easiest way to run your code in the cloud, which brings us to the last one, serverless: Google Cloud Platform is highly available and fault tolerant.
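Since Cloud Functions came up as the easiest way to run code in the cloud, here is a minimal HTTP function sketched with the open-source Functions Framework for Python. The function name hello_gcp is a placeholder of our choosing, and deployment flags are left out.

```python
# main.py -- a minimal HTTP Cloud Function, sketched with the Python
# Functions Framework. hello_gcp is a placeholder name, not a required one.
import functions_framework

@functions_framework.http
def hello_gcp(request):
    # 'request' is a Flask Request object, so query parameters work as usual.
    name = request.args.get("name", "world")
    return f"Hello, {name}, from Cloud Functions!"
```

Locally you can exercise it with the framework's dev server (functions-framework --target hello_gcp) before deploying anything.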
11:00 - 11:30 Let's now look at some popular advantages of Google Cloud Platform. The first one is that customers get higher uptime and reliability: if a data center is not available for some reason, the system immediately falls back onto a secondary center without any service interruption being visible to users. Moving on to the next one, economical pricing: Google's economies of scale let customers spend less. It works in a way that Google minimizes overheads and consolidates a small number of
11:30 - 12:00 server configurations, and it manages these through an efficient ratio of people to computers. Now moving on to the last popular advantage, which is of course higher security. Google invests in security to protect customers, so customers benefit from the process-based and physical security investments made by Google. Google hires leading security experts; that's how Google provides higher security to its customers. So let's take a quick demo now on Google Cloud Platform. Simply go to Google Cloud
12:00 - 12:30 Platform; what you can do is directly go to the console. If you don't have an account on Google Cloud Platform, then create one, because it's a good platform to have your account on. It will ask for your credit or debit card details while creating the account, just for verification purposes. Let me first introduce you to the platform that Google has to offer us. Before that, remember one thing: Google Cloud Platform provides you a free trial for 90 days, and it has certain limitations too, but
12:30 - 13:00 if you need the complete package and complete access, then you can go for a paid version as well. So this is how the interface looks. You can see the project information here; when you create the account there will always be an inbuilt project, but you can go to "my first project", then to "new project", and create a new project with whatever name you want to give. Now coming to the dashboard: as I explained previously about products and services, here are the production
13:00 - 13:30 services provided, for networking, storage, and compute. So let's see how to create an instance in Google Cloud Platform. Just go to Compute Engine and then to VM instances; it will take a little time. You can create an instance from here: name your instance whatever you want, then fill in the details, do some customization from here if you like, and then create it as per your requirement. The instance
13:30 - 14:00 will be created; it takes a little time, but it will be created. So the instance has been created. Remember one thing: as I've told you, I have a free trial, so it has a limitation, and here the limitation is that I cannot create a Windows instance; I can use Linux OS images only. To get complete access, for Windows as well, you can go for a paid version. So right now I'm just deleting this; you can just go here and delete the instance.
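What we just clicked through in the console can also be scripted. Below is a hedged sketch using the google-cloud-compute Python client; the project ID, zone, instance name, and the Debian image family are placeholders and assumptions, so adjust them to your own account.

```python
# Sketch: create a small VM with the google-cloud-compute client.
# "my-project", the Mumbai zone, and the image family are assumptions.
from google.cloud import compute_v1

def create_vm(project: str = "my-project", zone: str = "asia-south1-a",
              name: str = "demo-instance") -> None:
    instance = compute_v1.Instance()
    instance.name = name
    instance.machine_type = f"zones/{zone}/machineTypes/e2-micro"

    # Boot disk from a public Debian image (the free tier allows Linux images).
    disk = compute_v1.AttachedDisk()
    disk.boot = True
    disk.auto_delete = True
    params = compute_v1.AttachedDiskInitializeParams()
    params.source_image = "projects/debian-cloud/global/images/family/debian-12"
    disk.initialize_params = params
    instance.disks = [disk]

    # Attach it to the default VPC network.
    nic = compute_v1.NetworkInterface()
    nic.network = "global/networks/default"
    instance.network_interfaces = [nic]

    op = compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance)
    op.result()  # block until the create operation finishes

create_vm()
```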
14:00 - 14:30 While it gets deleted, which takes a little time, we can go to some other service of Google Cloud Platform. Let's go out of Compute Engine and use the storage service: we can store files, whatever kind of files you want to store, using Cloud Storage. Remember, as I explained about storage, there is a bucket system in Cloud Storage. If you remember the drives in our laptops,
14:30 - 15:00 where we have folders and can store files in them, the bucket system here is similar. You can create a bucket from here, and remember that while naming the bucket, the name should be globally unique: if a bucket with the same name exists anywhere in the world, then you can't create it, so use some unique word. For example, I'm using "bucket6622" here. Click continue, and you can see there is no existing bucket named 6622.
15:00 - 15:30 You can choose the accessibility for it, whether you want multi-region, dual-region, or single-region access; here I'm choosing a single region and then I will just create it. It takes a little time. Okay, done. Remember that here you can create folders and upload folders, just like on your hard drives, or you can upload files. So let's upload a file: it can be an exe file, an image file, whichever file you want. Let's upload an exe file here; any kind of file, like a video file, a CSV file, a txt file, you can
15:30 - 16:00 upload here. So it got uploaded. Also remember there is the lifecycle feature, along with retention, permissions, and everything else. With lifecycle, while you are storing a file of any kind, if you feel you want to delete it after 15 days or 30 days or any number of days, you can go to lifecycle and choose its time period. What happens is, if you forget that you wanted to delete it, it will automatically get deleted after the days you set, say 30
16:00 - 16:30 days or 15 days. You can also delete a file manually from here: select it and delete it. Or you can download it whenever you want from the download option. Now we are deleting it; it's a permanent action, so it got permanently deleted. Also, if you want to delete the bucket itself, you can go back to the bucket section where all the buckets are listed, select the bucket we created, and delete it; all you have to do is type "delete" to confirm. I hope you have understood this.
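The whole bucket demo we just walked through in the console, including the 30-day lifecycle rule, can be reproduced with the google-cloud-storage client. A sketch, assuming the bucket name is still free and your default credentials are configured:

```python
# Sketch of the console demo in code: create a bucket, upload a file,
# set a 30-day auto-delete lifecycle rule, then clean everything up.
from google.cloud import storage

client = storage.Client()  # uses your default project and credentials

# Bucket names must be globally unique, exactly as in the console demo.
bucket = client.create_bucket("bucket6622-demo", location="asia-south1")

blob = bucket.blob("sample.txt")            # any file type works
blob.upload_from_filename("sample.txt")

bucket.add_lifecycle_delete_rule(age=30)    # auto-delete after 30 days
bucket.patch()                              # push the rule to the bucket

blob.delete()                               # deleting an object is permanent
bucket.delete()                             # remove the now-empty bucket
```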
16:30 - 17:00 Now let us move on and compare these three cloud service providers. Let us first compare them based on market share and growth rate. According to Statista, AWS owns 32% of the total cloud market share. Amazon reported that Amazon Web Services revenue was $13.5
17:00 - 17:30 billion in the first quarter of 2021, which actually exceeded the analysts' prediction of $13.1 billion. And when you compare this to the first-quarter revenue of 2020, which was $10.33 billion, we can see AWS revenue grew more than 30% in a year. In the first quarter of 2021, AWS revenue accounted for 12% of Amazon's total revenue and nearly 47% of Amazon's overall operating income. Amazon CEO Jeff Bezos has said AWS had
17:30 - 18:00 the unusual advantage of a seven-year head start before facing like-minded competition, and as a result the AWS services are by far the most evolved and most functionality-rich. Next let us take a look at the market trend for Azure. According to Statista, Microsoft Azure owns 20% of the total cloud market share. Unlike Amazon, Microsoft only reports the growth rate and not the revenue; it reported 50% revenue growth over the previous quarter, which was better
18:00 - 18:30 than the 46% growth the analysts expected. In 2020, Microsoft reported that its commercial cloud officially hit the $50 billion mark for its annual run rate. Next, looking at the market trend for GCP: according to Statista, GCP owns 9% of the total cloud market share. In the first quarter of 2021, Google Cloud reported revenue of $4.0 billion, which was an increase of 46% compared to the previous year. Also, the operational losses were reduced to $974 million this
18:30 - 19:00 year, compared to losses of $1.73 billion last year. Now this was the market trend and growth rate of AWS, Azure, and GCP. Let us move on and compare them based on availability zones. But before we compare their availability zones, I would like to define what availability zones and regions are. A region is a specific geographical location where you can host resources, and availability zones are distinct locations within a region that are engineered to be isolated from failures in the other availability zones.
19:00 - 19:30 Now talking about the availability zones of AWS, it has the most extensive global cloud infrastructure. Its multiple availability zones are connected by low-latency, high-throughput, and highly redundant networking. AWS has 80 availability zones within 25 geographical locations around the world, and has also announced 15 more availability zones and five more regions in Australia, India, Indonesia, Spain, and Switzerland. Comparing it to the availability zones in
19:30 - 20:00 Azure, Azure has 60+ regions, with each region having at least three availability zones. Next, coming to GCP availability zones, its global network spans 25 regions with 76 zones and is available to users from 200+ countries and territories. GCP has recently announced new regions in Seoul, Salt Lake City, Las Vegas, Jakarta, and Warsaw, and will also expand its network to nine more regions. Now this was about the availability zones. Next let us see the top
20:00 - 20:30 companies using these cloud service providers. First, talking about AWS, its services are used by top companies like Netflix, Coca-Cola, McDonald's, Unilever, ESPN, and Adobe. Next, coming to Azure, it is used by many Fortune 500 companies; some of the companies which use the Microsoft Azure cloud are Samsung, HP, BMW, FedEx, and Pixar Animation Studios. Moving on to GCP, the top companies who use their services are PayPal, Twitter,
20:30 - 21:00 20th Century Fox, P&G, and King Digital Entertainment. Now that we have seen the top companies which use these cloud service providers, let us compare them based on compute services. Compute services are one of the core services when it comes to cloud computing; they help you create instances or virtual machines in minutes and also scale up instances instantly if needed. So in today's session we're going to compare these three cloud service providers based on their compute and storage services.
21:00 - 21:30 The primary compute service for AWS is Amazon EC2, the primary compute service for Azure is Azure Virtual Machines, and for GCP it is Google Compute Engine. All three services are equally powerful but unique in their own way; each one has its own advantages and disadvantages. For example, Amazon EC2 has 99.55% annual uptime and can be tailored with a variety of options according to the user's requirements. On the other hand, Azure Virtual Machines
21:30 - 22:00 provide enhanced security and hybrid cloud capabilities, but when you compare the cost, Azure instances tend to get costlier as the compute size increases. Next, talking about Google Compute Engine, its instances are comparatively cheaper; they come with persistent disk storage and provide consistent performance. Next, talking about the storage services: AWS offers a variety of storage options, like S3 for object storage, EBS for block storage, EFS for file storage, and a few other storage
22:00 - 22:30 services. Next, talking about Azure cloud storage, this also includes object, file, disk, queue, and table storage; they also have specialized services for data applications and many data backup services. Now talking about GCP cloud storage, they have fewer storage services compared to the other two, but they're more targeted at object storage. GCP offers Cloud Storage for objects, persistent disks for block storage to be used with virtual machines, and storage for the files backing up
22:30 - 23:00 your data. For backup, AWS provides a service called AWS Glacier and Azure provides a service called Azure Backup, but Google does not yet have a dedicated backup service. Next let us compare these three cloud service providers on pricing. All three offer a pay-as-you-go structure, which means you only pay for the services you use. Pricing varies across services: for compute services one cloud service provider could be cheaper, but it could be costlier for database services,
23:00 - 23:30 and so on. So just to give you a general overview of the pricing among the three cloud service providers: GCP offers a slightly cheaper pricing model and has flexible cost controls, which allow you to try the different services and features. AWS charges you on an hourly basis, whereas Azure charges you on a per-minute basis, and GCP provides per-second billing for its resources. When it comes to short-term subscription plans, Google Cloud and Azure give you a lot more flexibility, but in certain services Azure tends to be costlier when the architecture starts scaling up.
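To see what those billing granularities mean in money terms, here is a small worked example with an assumed rate of $0.10 per hour (not a quoted price from any provider) for a VM that runs 10 minutes and 30 seconds:

```python
# How billing granularity changes the bill for the same 10.5-minute run.
# The $0.10/hour rate is an assumed figure, not a real price-sheet entry.
import math

rate_per_hour = 0.10
runtime_s = 10 * 60 + 30  # 10 minutes 30 seconds

hourly  = rate_per_hour * math.ceil(runtime_s / 3600)        # billed as 1 hour
per_min = (rate_per_hour / 60) * math.ceil(runtime_s / 60)   # billed as 11 min
per_sec = (rate_per_hour / 3600) * runtime_s                 # exact seconds

print(f"hourly: ${hourly:.4f}  per-minute: ${per_min:.4f}  per-second: ${per_sec:.4f}")
# hourly: $0.1000  per-minute: $0.0183  per-second: $0.0175
```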
23:30 - 24:00 Now let's see how Google Cloud Platform specifically operates globally and what kind of infrastructure it boasts of. It has global, multi-regional resources, be it your BigQuery, Datastore, or Cloud Storage; then you have your regional resources, like an App Engine instance; and you have your zonal resources, like your VM instance and
24:00 - 24:30 disk. Now let me explain what these things are. Basically, when you talk about infrastructure, GCP gives you zones and regions. Zones are essentially where your servers sit. To give you an example (I'm an Indian, so my examples can be a little Indian, as you might have noticed by now), let's assume that Google has a server in Mumbai.
24:30 - 25:00 With the server being in Mumbai, let's assume you are based in India and want to run your business in India. If you want to reduce latency issues, or you want your data to be close to you, or your government compliance requires you to run the business in India, then having a server in India is good, right? So I want to put my data on Google Cloud Platform, but I want that data to be in India. If there's a server in Mumbai, in that
25:00 - 25:30 case I can put my data on that server. That server here is my disk, basically my server, my zonal resource, so Mumbai would be a zone for me. Now, that server has to reside in a particular region, and that geographical location is called your region. So basically you'll have your servers and you'll have your regions, and your data will reside in these different geographical locations. Now these
25:30 - 26:00 different regions across the globe are connected by a low-latency network, which means you can access all this data over the internet and stay connected to these different regions and zones. Let's try to understand this a little better: I'll switch to the web page where we can discuss the global infrastructure of Google Cloud Platform. So when you look at Google's
26:00 - 26:30 infrastructure, you can see some information here while it loads: it says that it spans 22 regions, 67 zones, and 140 points of presence, and it has 96 CDN edge locations. I'll tell you what edge locations are. Meanwhile, as this loads up, let's quickly see what a CDN is: it is nothing but a content delivery network. There's a networking service called CDN; we'll talk about that, but as I mentioned, when you
26:30 - 27:00 talk about these cloud platforms, they are globally distributed. Let's assume I put my data in India and my business is in India, but there are customers accessing this data from outside India. For them, when they put forth a web request searching for particular data, if the request has to come to India, fetch the information, and travel back to the user, it could take time. What if I have a surrogate server close to that particular user, and whenever a user makes a request, a copy of the
27:00 - 27:30 more frequently accessed data is maintained there? Then anybody who throws a request at that data can be served from that server then and there. This location is called an edge location, and edge locations are provided by your CDN, the content delivery network. It is the job of this service to help communication between these various endpoints. Let's see what "start exploring" has to offer us. There you go; this has been updated since the last
27:30 - 28:00 time I checked. It wasn't like this before, but the last time I checked was way back, so it must have been a while since this changed. These are the different locations where you have your data centers. I talked about Mumbai, and we can indeed see there's a region in Mumbai. So these are the different global regions where you have your data centers. Dear audience, I'm not sure whether you can hear this music or not; it is the Google Cloud Platform page playing it.
28:00 - 28:30 So I believe the gist is clear to you. If you click on these icons, you'll get information about how this network is connected, and you can explore it: what the CDN POPs are, say which networks or CDN POPs Los Angeles can connect to. I'm going to close this because it's not good for viewing when the noise comes in. As you saw on the previous screen, that is what the infrastructure of GCP looks like. I believe the basic idea is clear to
28:30 - 29:00 you people; we'll cover these services, and edge locations and all those things will become clearer. When you talk about AWS and Azure, they too have a similar approach with availability zones and regions: availability zones in their case are their data centers, and their regions are their geographical locations. So if Mumbai is a region, they'll have a zonal resource, or an availability zone, there. They work on a similar model, and this is how the infrastructure of Google Cloud Platform works,
29:00 - 29:30 where you have your zonal resources, or availability zones, placed in a region, all connected by a low-latency network for fast data transfer and communication. So this is what the global infrastructure is. Let's now go ahead and discuss some of the other pointers, starting with GCP service domains. There are plenty of service domains that GCP has to offer, and the reason is that there are so many
29:30 - 30:00 services that GCP offers. Each of these is a major domain that GCP operates in, and each of these domains has sub-services, or services that fall under it. For example, when you talk about compute, there are different compute services which are IaaS in nature, PaaS in nature, even serverless in nature. Similarly you have storage and database: when you talk about storage, this is where you
30:00 - 30:30 store your data in different file formats, and database is something that lets you work with this data. Then networking: a cloud platform like Google Cloud Platform has resources existing in different parts of the world. Now assume that you want to host applications; we are not just dealing with computation, storage, and database, as there can be a number of services that are intertwined. So where is the data coming from? In what volume is it coming? What are
30:30 - 31:00 the metrics that I need to track? This is something your management tools tell you, and data transfer is something that lets these applications communicate with each other. So these are some of the core domains when you talk about GCP; there are more, but these are the core ones. If you want to be a good developer, a good architect, or even a good administrator, these are some of the services that would be core to any GCP exam you take, so it is important you understand the majority of them. I believe the overview
31:00 - 31:30 of these services is clear to you. Let's now quickly take a look at the first domain, the compute domain. These are some of the popular compute services that GCP has to offer; there are more than these, but these are the important ones. Let's try to understand them one by one. The first on our list is Compute Engine, which is an IaaS service; this is where the service models we discussed add more value to your exploration. When you talk about
31:30 - 32:00 Compute Engine, it is an IaaS offering, as I've already mentioned, and what you can do is launch, or spawn, your virtual machines. In case you need a CentOS virtual machine, or a Linux or Windows operating system, you can go ahead and launch those with a mere click. Well, not a single click, but a few clicks is what
32:00 - 32:30 will get those machines up and running. Apart from that, you have App Engine; this is where your platform-as-a-service application comes into the picture. I discussed AWS Elastic Beanstalk as well; App Engine is the equivalent here. You will be provided with a platform, and you can decide what kind of platform you want: a Python base, a PHP base, you decide. What GCP will do is, using Compute Engine, launch a virtual machine
32:30 - 33:00 by itself; you just go ahead and put your data and application on top of it, and everything else is managed by App Engine. Then you have Container Engine, also known as Google Kubernetes Engine. Kubernetes is a container management service: what Kubernetes does is let you launch containers in which you can put your data, your binary files, and all those things, and it will start running. You
33:00 - 33:30 just pick up that container, put it on any base, and it will start working for you. Let's say you have a multi-purpose charger: your phone connects to it on one side, and on the other side it has ten different plugs so it can connect to any kind of socket. That is what your container is. In this analogy your phone is your data: the container wraps your data, and that container can be used
33:30 - 34:00 anywhere, on any kind of platform or base. That is what Container Engine does, and Container Registry is one of the services that supports that activity. Then you have Cloud Functions, which is the serverless offering that GCP gives you. So these are some of the popular compute services that GCP has to offer. Let's now get into the demo part; we've been talking for almost an hour, so let's see how the GCP compute
34:00 - 34:30 services pan out and how they work. So what I've done is sign into my Google Cloud Platform account. GCP offers a free tier version which you can use to practice quite a few things. A free tier account comes with a certain set of benefits:
34:30 - 35:00 all the basic services can be used there, and for a certain duration GCP provides a number of units of free credit for you to use. You can use those free units to spawn your instances, launch your storage services, database services, and so on. What you have to do is go to the Google Cloud Platform free tier and create an account; they'll ask for your credit card for verification, and in case you use paid services they'll
35:00 - 35:30 charge you, but they give you a list of free services which you can always use. Once you've given your credit card details, they'll verify your account and you'll have an account ready to use, so make sure you create a free tier account for your usage. Once that is done, you can come to this platform and start using it. You can see that all the major services, be it your IAM, APIs, compliance, security, are all here. We'll discuss these
35:30 - 36:00 one by one, do not worry. Meanwhile, let me scroll down: Compute Engine is the one we'll look into right now. App Engine is the service where you have your PaaS offerings; we'll see the IaaS one, the basic one, in Compute Engine, where you have your services, versions, and instances. A virtual machine is called an instance on these cloud platforms. Let's open one of those. "Unable to find the resource you requested; there was an error loading it", so let's reload it.
36:00 - 36:30 Okay, no problem, we'll go back and try something else. Let me go back, click on it, and go to Compute Engine; the error was probably because I opened it from the wrong place. You can go to virtual machines, VM instances; let's click on VM instances. There's an instance that I created a while back just for demonstration purposes, but let's not use that; let's stop this first. Now it might
36:30 - 37:00 take a couple of minutes to stop, but I can instead just say delete. So let me delete it. "Are you sure you want to delete it?" Yes, I'm very sure. It should delete this instance, and once it is deleted I'll have this interface for my usage. There are various other options in the panel at the top, with quite a few things that you can control, but meanwhile let me just delete this instance and we'll launch a newer one. Let me
37:00 - 37:30 just refresh this and see how it pans out; if it is still deleting, it could take a minute. Okay, let's just say create. During the course of this video I'll also show you how these things work on other platforms; I'm not going to walk you through them in detail, but I'll give you an idea of how those services work there. So let's quickly add a name for the instance we are planning to launch; let's call it sam
37:30 - 38:00 4321. Labels: do you want to add a label? Because if you create multiple instances with similar names it could be confusing, you can add a label saying what that instance is for. You can see that I have a certain amount of free credit, somewhere around 21,000 Indian rupees, for my usage, so I can spin up that many instances. If I run this particular instance for a whole month it would charge me $25; in Indian currency, at roughly 75 rupees to a dollar, that is
38:00 - 38:30 750 rupees for every 10 dollars, so roughly 2,000 rupees for the instance to run for the entire month if I keep it running, which you might not need to, but that gives you an estimate. It gives you detailed information on how long this instance would run, how it would work, and so on. Now let us discuss some of the other pointers. It is a general-purpose instance; if you click on these options you'll understand quite a few other things. A general-purpose instance serves
38:30 - 39:00 moderate performance for your computation and storage. Compute-optimized instances are optimized for computation, memory-optimized ones give you higher or better memory, and GPU instances are used for graphical processing. We'll stick to general purpose for now. This is its configuration; you can choose what you want, and you can also use a micro one. This is not a concern for us since we are using it for demo purposes. You see, each time you select one it gives you all the details: it is
39:00 - 39:30 using two virtual CPUs and 1 GB of memory. And I can choose what kind of machine image I want to use here: if I click here I get to choose, so I can use Debian, a GNU/Linux, or instead of Debian I can go for Windows and others as well. I have the option of choosing other servers if I want, whether it's a Linux one, a Red Hat one, and so on; let's stick with the Debian one for now. The boot disk size is balanced; if I go for standard and say apply, and if you
39:30 - 40:00 scroll up, you'll see how much this one costs: it will cost me $6 monthly, which is fairly low. From $25 we've come down to $6. Why? Because we're planning to launch a smaller instance. These are the things we can configure; if you go ahead, there are other things to configure as well, or you can stick to the basics if you want. Let's just go ahead and say create, and just like that my instance is created. Guys, here you also have the option of
40:00 - 40:30 working with quite a few other factors: you can go back to the instance and see what kind of virtual network it falls under and what kind of storage is attached to it. By default you'll have persistent disks attached to every VM that you launch; we'll see what those mean as we go ahead, but that's the gist. It is giving me a notification, which I can dismiss because I'm going to delete this instance anyway once its usage is done. So if I select this... why am I getting this
40:30 - 41:00 thing here? I don't want it; I'm just going to scroll past it for now. It gives me different ways to connect: I can use SSH to connect to it, I can open it in a browser window, use a private SSH key, or use this command, which is the easiest way. Let's do that: select this, copy it, and I'm going to say run in Google Cloud Shell. Within a minute my instance will be up and running and I should be able to connect to it. There you go, it has fired up my
41:00 - 41:30 terminal, and in the terminal it has placed the command with which I can connect to the instance. If I just hit the enter button: do I want to authorize? Yep, and just like that it connects me to my instance. You see, it has connected to the instance, and I can actually start using this Debian machine if I want to. Similarly, if you go back to AWS, there's a service called EC2 that does the virtual machine job,
41:30 - 42:00 and you can see there are instances that you can create and launch. You can always come here and say launch instances, where you can choose the different kinds of instances you want. It will also give you details about which instances are up and running, what you plan to do with them, and so on. So quite a few things can be done here, and you can use the AWS platform for similar things as well. If you close this and go back to AWS services, it also has plenty of services that you can use. We'll compare these
42:00 - 42:30 services, how they fare against each other and what kind of offerings these platforms give us, but by now I believe the basic compute service is clear to you: how it works and what can be done with it. Let's go back to our presentation and see what other pointers we have to cover. We've seen how compute services work, at least the basic fundamentals of how virtual machines on GCP work, and we've seen how, as they call it,
42:30 - 43:00 Compute Engine works on GCP, which is equivalent to AWS EC2. Let's now discuss how storage services work on GCP. There are plenty of storage services on GCP, and the reason storage services are important is that it is very difficult to imagine any cloud platform without data. The reason is simple: with any application you create, any kind of data
43:00 - 43:30 you put forth, you are dealing with data, be it monitoring the data, managing it, analyzing it, or processing it; everything happens on the cloud, and that is why storage services are fairly important. These storage services store data in different ways and in different formats: you could be required to store images, videos, maybe structured data, and so on. So first here you
43:30 - 44:00 have something called Cloud Storage. It is a unified cloud storage service that lets you store data in the form of blobs: your images, as I mentioned, your videos, other forms of data, structured or unstructured. You can store that kind of data here, and the data stored here is kept in the form of objects
44:00 - 44:30 in containers called buckets. On other cloud platforms, like AWS and Microsoft Azure, you have separate services for storing this kind of data: you have S3, which stores all these kinds of data as objects, then EFS, which stores data in the form of files, and EBS for block storage; here GCP has persistent disks for that. I
44:30 - 45:00 believe you get the gist of what I'm trying to say. Apart from that, you have other data services, like Cloud Bigtable and Cloud Datastore. These are your NoSQL databases; when I say NoSQL databases, I'm talking about databases that deal with "not only SQL" data, which can be unstructured data as well. These are different from Cloud Storage because they are more along the lines of database services, where you can
45:00 - 45:30 process your data: you'll have your data stored on a particular server and then you can process it using something like Cloud Bigtable or Cloud Datastore. Cloud Datastore mostly works with hierarchical data, whereas Bigtable is built for data that requires low latency and quick processing: you can read and write your data fast, and it gives you high-throughput performance for data analytics as well.
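For a feel of that low-latency read/write model, here is a minimal sketch with the google-cloud-bigtable Python client; the project, instance, table, and the "stats" column family are all placeholders assumed to exist already.

```python
# Minimal Bigtable read/write sketch. "my-project", "my-instance",
# "my-table", and the "stats" column family are assumed placeholders.
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("my-table")

row = table.direct_row("user#42")           # rows are keyed by a row key
row.set_cell("stats", "clicks", b"17")      # family, qualifier, value
row.commit()                                # single-row, low-latency write

fetched = table.read_row("user#42")
print(fetched.cells["stats"][b"clicks"][0].value)  # b"17"
```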
45:30 - 46:00 Going ahead, you have Cloud SQL. This is something that deals with your SQL data, your relational data; if you have structured data and you want to put it in databases and process it, this is where Cloud SQL comes into the picture. And then you have something like persistent disk. What is a persistent disk? When you talk about persistent disks, we are talking about block storage, and this kind of storage always needs a host machine, a host system, to access
46:00 - 46:30 the data. To give you an example, let's assume you have a hard disk: you can store your data on it, but you need to connect that hard disk to your system, to some device, to put data on it and copy data from it. So it is dependent on the host machine. Block storage again comes in two kinds: you have ephemeral storage and persistent storage. Ephemeral storage is something that dies with your instance; that means if
46:30 - 47:00 you delete a particular virtual machine you've launched, your data dies with it. In the case of persistent storage there's a slight difference: your data won't die, because you create this storage separately, you can attach it to your instance and detach it from your instance, and it will not die even if your instance does. So this is what persistent storage
47:00 - 47:30 is, and these are what cloud storage services look like. We'll understand these services practically as well, but before that let's quickly jump into the other set of services, your networking services. Network is a very important aspect of the cloud; I believe I've already touched on this when we talked about popular service domains at a surface level. The reason is that, as I've already mentioned, your data could
47:30 - 48:00 be residing in different parts of the world, and there could be different things you need to control. One of the major things you need here is an umbrella network that connects the different locations and places where your resources lie. There would be a lot of questions, like who gets to access what and how, what firewalls are in place, what the subnets are, what the IP addresses are, and what needs to be assigned to whom. And this has to happen virtually, because you cannot have physical subnets and networks everywhere; you cannot have physical
48:00 - 48:30 routers and such everywhere. Yes, there would be some physical routers, but all of this network is connected under a virtual umbrella, and that is why you need the virtual network services that GCP offers. So let's try to understand what these are and what you can do with them. First you have your virtual private cloud: just as I discussed, you can create a virtual private cloud, or cloud virtual network, where you can place a certain set of resources
48:30 - 49:00 in that particular network. Say, for example, you have 100 devices that you can assign to that network: you can create subnets, create sections, decide how many IP addresses you want to assign to a particular network or subnetwork, decide what resources fall under that network, and decide the ways in which resources there can be accessed. It's a mix of various IAM services as well, something we'll discuss as we move further, but I believe by now the gist of what your virtual private cloud, or cloud virtual network, does is clear to you.
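The subnetting arithmetic a VPC asks you to do can be illustrated with Python's standard ipaddress module; a sketch carving an assumed 10.0.0.0/16 range into /24 subnets:

```python
# Illustrating the subnetting idea with the standard library: carving one
# VPC address range into smaller subnets, as you would per region or team.
import ipaddress

vpc_range = ipaddress.ip_network("10.0.0.0/16")   # the whole assumed network
subnets = list(vpc_range.subnets(new_prefix=24))  # 256 possible /24 subnets

print(subnets[0], subnets[1])                     # 10.0.0.0/24 10.0.1.0/24
print(subnets[0].num_addresses)                   # 256 addresses per subnet
print(ipaddress.ip_address("10.0.1.7") in subnets[1])  # True
```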
49:00 - 49:30 Apart from that, you have Cloud Load Balancing. This is another interesting service. We discussed those applications where we have an e-commerce website that experiences traffic; if we stick to our Diwali example, or a New Year's example, we are talking about a lot of people shopping. In that case there could be a lot of burden on a particular resource, a server. You have one option, scalability, which we talked about: you
49:30 - 50:00 can scale horizontally and vertically as well. For people who do not know what horizontal and vertical scaling are, we are referring to two important pointers here. Horizontal scaling means that if there's a lot of load on a particular resource, you can have multiple servers, multiple machines, attached; and there's something called vertical scaling, where you increase the computation and storage power of a single machine. So there are different ways to scale. But what
50:00 - 50:30 does load balancing do? What load balancing does instead is divert traffic from a particular resource to another to manage the load on existing infrastructure, where you do not need to scale up or down. This is very good in terms of disaster recovery as well, where you do not have to worry about a data center going down and the like. Say, for example, you have an instance hosting your e-commerce website and it is experiencing a lot of traffic: can you just transfer your traffic to some other node, some other instance or virtual
50:30 - 51:00 machine, that is located in some other region or close by, so your traffic gets distributed? Yes, absolutely you can do that, and you do it by using Cloud Load Balancing. These cloud load balancers come in various forms for different purposes: there are application load balancers, standard load balancers, and so on, and they vary depending on the application. To give you another layman example: if you are in India, you would know that the cosmopolitan cities here experience a lot of traffic, and if you sit in that traffic
51:00 - 51:30 you'll either get late to work or have to start a couple of hours early. So we always know our shortcuts, where to move from and to. If there's a lot of load, a lot of traffic, in a particular area, you can always take another route, and that is what cloud load balancing does with your data: it decides where to move your traffic so it can be balanced better.
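The routing policy itself can be as simple as round robin. Here is a toy dispatcher to make the idea concrete; the backend names are made up, and real Cloud Load Balancing is a managed service that does far more than this.

```python
# A toy round-robin dispatcher illustrating the load-balancing policy.
# The backend VM names are hypothetical; this only shows the rotation idea.
import itertools

backends = ["vm-mumbai-1", "vm-mumbai-2", "vm-delhi-1"]
pool = itertools.cycle(backends)  # endlessly rotate through the backends

for request_id in range(5):
    print(f"request {request_id} -> {next(pool)}")
# requests rotate: vm-mumbai-1, vm-mumbai-2, vm-delhi-1, vm-mumbai-1, ...
```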
51:30 - 52:00 Moving on, you have Cloud CDN, or content delivery network, which is again a content distribution service. I'll tell you what this does for you; I'm again going to stick to an Indian example. In India we have cities like Mumbai, Bangalore, Ahmedabad, and Kanyakumari. The reason I'm specifying these cities is this: let's assume that I'm based in Mumbai right now and I want to travel to Ahmedabad in Gujarat, or to Bangalore in Karnataka, or to Kanyakumari down south, right at the bottom of the country. Depending on their
52:00 - 52:30 locations, the closest to me from Mumbai would be Ahmedabad, the next would be Bangalore, and the farthest would be Kanyakumari, because the distance is a lot. Let's assume that Mumbai is a server, and continuing with this example, let's assume that I want to travel to these places using a similar vehicle, either a bus or my personal car. It takes about 10 hours to reach Ahmedabad, around 15 to 20 hours to reach Bangalore, and somewhere around 25 to 30
52:30 - 53:00 hours to reach Kanyakumari, depending on when I leave from Mumbai. The point I'm trying to make is that the more the distance, the longer it takes to reach a particular location. So if Mumbai had a server and I had to fetch data from that Mumbai server while based in these three different locations, the earliest I would get the data would be in Ahmedabad and the latest would be in Kanyakumari. What that implies is that data latency can be a big issue. So how does Google Cloud Platform solve this issue? It creates
53:00 - 53:30 content delivery networks; in simple words, it has edge locations. Each edge location sits close to a particular set of users. What they do is fetch the frequently accessed data from the server and cache it in that particular edge location. That way, whenever I try to access this data, let's assume from Kanyakumari, and it has been cached there, I access it at a faster rate, because the distance is minimized, resulting in low latency. So this is what a content delivery network does.
53:30 - 54:00 does. And then you have your Cloud domain name service. Now, for people who do not know what a domain name service is: let's assume that we all use a website like Amazon, right, or maybe our Google search engine. So we put in an address there, right? The address can be something like www.amazon.com, right? So this is an address which is easier for you to understand in layman language, right? Similar to if I have to visit my friend's place, right, I'll be looking at the address that is given, maybe flat number X, road number X, area number X, and
54:00 - 54:30 something like that, right? So what these labels tell me is where this person resides, and it is easier for me to find this person based on this address. Similarly, if you want to access a particular set of data, you need to know where this data resides, right, your website address, and that is something that is controlled or governed by a domain name service. Now with cloud, or in this case with GCP specifically, you have a service called Google Cloud domain name service, which basically lets you control
54:30 - 55:00 these domain names, have access to data for people from different locations, and stuff like that. So the domain naming service that is controlled by Google Cloud is Cloud DNS, which lets you do these domain naming service activities. Okay, so this is what GCP networking services are, and this is what these services do.
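As a sketch of what that looks like programmatically, the google-cloud-dns Python client can create a managed zone and an A record; the zone name, domain, and IP below are placeholders, and this assumes the Cloud DNS API is enabled on the project:

```python
from google.cloud import dns

client = dns.Client(project="my-project-id")  # hypothetical project

# A managed zone for example.com (the trailing dot is required in DNS).
zone = client.zone("demo-zone", dns_name="example.com.")
zone.create()

# Point www.example.com at a made-up server address with an A record.
record = zone.resource_record_set("www.example.com.", "A", 300, ["203.0.113.10"])
changes = zone.changes()
changes.add_record_set(record)
changes.create()  # Cloud DNS then answers lookups for the name worldwide
```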
55:00 - 55:30 Let us now go ahead and take a look at the demo part and explore some of these services a little more. Okay, so previously we saw how to create an instance, right? Now we'll go ahead and explore the storage service as well. So guys, note one thing you might not have realized in the previous part when we saw how to create an instance: whatever thing I'm creating, it goes in this 'My Project', right? So it is a container where my GCP is storing all my resources. Okay, so similarly, to store your data you also need to create a storage account. But when you start this for the first time, if you click on Cloud Storage, it will ask you to create an account, so make sure you do that; you just have to enter a name for it and I think you should be good to go
55:30 - 56:00 ahead with it. Right now, in my case, I think I should be having a storage account already, so I'm going to just go ahead and say Cloud Storage, and once you do that, as soon as this page loads, this is where you get a notification: when you click on 'create bucket' it says that you cannot do that right away. Now again, as I've already mentioned, a bucket is nothing but a container where you store your objects or your files, right, the data that you plan to store. So you have to create a bucket first. So when you create a bucket, guys, please note this thing, that these buckets have unique names; you cannot just go ahead and say,
56:00 - 56:30 okay, use this bucket in this form. You'll have to give a specific name to it, and if the name has already been taken, you will not be able to create that bucket. So let's try and create one. There are other clauses as well; if I'm not wrong, you cannot use capital letters either, something like that, but let's try it and see if that works. So I'm going to name it 'GCP Demo Bucket', I'm going to say continue, and it says to use only lowercase; as I've already mentioned, there's an error here. So let's say 'gcp demo bucket' and say continue: this bucket name
56:30 - 57:00 already exists. As I've told you, the bucket name has to be globally unique, so let's give it a terrible name for now; this, I'm sure, should be acceptable. So guys, once we click on continue, these are the other things that we need to take a look at, or need to understand: what kind of bucket do I want, do you want it regional, dual-region, or multi-region? By the name itself it should be clear: in a regional bucket the latency is reduced because you are dealing with a single region, a single bucket; all your work would
57:00 - 57:30 be concentrated there. In dual-region you'll be having high availability and low latency, and you can access this data across two regions. In multi-region you can have data in multiple regions as well, or buckets in multiple regions as well; that is, you can access this bucket from multiple regions. In this case we are sticking to the regional one, as this is a basic demo. Apart from that, you can choose what region you want to store or create your bucket in; say for example I can select Mumbai, India, right? Let the basic one be here, this is not very important for us, so I'm just going to go ahead and say continue, okay.
57:30 - 58:00 And once I click on continue, the next thing is I need to choose what kind of storage class I want to use right here. So I have certain options here: whether I want a Standard storage, a Nearline one, a Coldline one, or an Archive one. You might read what is written here; meanwhile I'll explain what those are. Say for example you want to store your data, right, it is mission-critical data and you want to access it right away. In that case you should store your data in Standard storage; this is where, no matter when you put your data, you can retrieve
58:00 - 58:30 it right away, okay, no retrieval time is required. Nearline is something where, when you put your data there, you have to keep that data there for at least a month; you cannot access or retrieve it before that. In some cases you might want to store your data for a longer duration, maybe around a quarter, or three months as we say; that is where you can use your Coldline storage. And finally you have your Archive storage, which is for a duration that is longer than one year. Now you might wonder, why do we have these different brackets? First of all, the first one is the costliest here. Why?
58:30 - 59:00 Because there's no retrieval time; I mean, you do not have to wait for more than seconds, right, the latency is like in seconds, even milliseconds in some cases, which is very low. So if you're talking about mission-critical data, it is always wise to put your data in that kind of storage. Next is your Nearline and the others. Now what do these others imply? There could be a particular requirement for you to store your data for a particular duration, post which you either might not want it or
59:00 - 59:30 might not want to use it, right? Say for example there's particular data that you don't want to use, something that you're certain you will not use for years; medical records could be one example, or something like your school leaving certificate. So let's assume that I go to my school and I collect a copy of my school leaving certificate. Say for example, take my case: I passed out of school in 2007. So if I am to take a look at this information now, and for some reason, today, it's like 2021, right, so after like 14 years I want that copy of my
59:30 - 60:00 school certificate, I can always visit my school and apply for that, right? But the fact is I'm accessing this data, like, what, after 14 years, right? So this data is important to me, but not something that I needed right away for mission-critical purposes, right? So what happens is, I go ahead and request them for that school leaving certificate, and they might say, okay, come back after two days, right, because we'll have to go through so many records to fetch your data, something that you've not accessed in like 14 years. So something like archival storage
60:00 - 60:30 is similar to that, where you do not need to access your data regularly but you need a copy of it to be maintained somewhere, so you can do that. Since the retrieval time is longer, it's cheaper compared to other storages. And why do you store data in these kinds of tiers, right? Let's assume that Edureka makes courses, right? What if Edureka had its database on something like GCP? In that case, if I had to store a particular copy of my course which I do not use regularly, I might put that copy in cold storage or archival storage which I
60:30 - 61:00 do not access frequently, right? So the data would be there, but it would not cost me a lot. But what if there's a course that I do not access at all and I put it in Standard storage? Now I'm being charged very heavily for that, but it's of no use, right? So that is what these different storage classes are about; I hope that is clear to you. Let's continue with the Standard one because this is a very minor demo that we are creating. What kind of access do you want, fine-grained and stuff; let's not get into the details of that, it's not important. Let's just create a bucket, and just like that,
61:00 - 61:30 I'm sure, a bucket would be ready. You see, within a minute, not even a minute, within a click, you have the bucket with you. It says upload a file; I'm just going to go ahead and say live stats, and I'm going to open that bucket, and there you go, the live-stats PNG file is uploaded. So you can upload your videos here, you can do quite a few other things here.
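The same steps can be done outside the console; here is a minimal sketch with the google-cloud-storage Python client (the bucket name is a made-up globally unique, lowercase one, and the file is assumed to exist locally):

```python
from google.cloud import storage

client = storage.Client()  # picks up your default project and credentials

# Bucket names are global and lowercase-only, hence the odd suffix.
bucket = client.create_bucket("gcp-demo-bucket-x7k2", location="asia-south1")

# Upload a local file as an object ("blob") in the new bucket.
blob = bucket.blob("live-stats.png")
blob.upload_from_filename("live-stats.png")
print("uploaded", blob.name, "to", bucket.name)
```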
61:30 - 62:00 Now again, you might wonder, is this the most efficient way to use this particular set of data, are there other things that I can do with it? Yes, there are a lot of things that you can do. You can basically go ahead and assign bucket policies, right, as in you can decide who gets to access this file. Say for example this is a private bucket, so storage is Standard here and you can see public access: it's not public, so nobody can access this data but me, right? And it says retention expiration date: there's no date here. Why is that? Because I haven't set a lifecycle policy, something that I can do. When I say assign a lifecycle policy, what that means is basically I can decide, after a particular duration,
62:00 - 62:30 what to do with this file. Now there's a certain set of data that might be useful to me in real time, but that data might not be as useful after three months, right? So let's assume that there's a particular image that I put forth here and I realize that after three months I'm not using that image, or I know that after three months I will not be using that image at all. So in that case I can assign a lifecycle policy to it; in that lifecycle policy I can say, okay, for three months let it be in Standard storage, post that either move it to Coldline or Nearline, or directly move it to
62:30 - 63:00 Archive, because I might not be accessing that file for that longer duration. Going further, in case I feel that I'm just not going to access it again, I can just set an expiry date and it would be deleted and garbage-collected by GCP. Okay, so that is what your lifecycle policies let you do. The other things that you can do: you can delete your content here, download it, right, you can upload your files and do other things as well. So this is what your bucket, or your Google Cloud Storage, lets you do.
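A lifecycle policy like the one described can also be set from code; a sketch with the google-cloud-storage client, assuming the hypothetical demo bucket from earlier (rough durations: demote to Coldline after 90 days, delete after a year):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("gcp-demo-bucket-x7k2")  # hypothetical bucket name

# After ~3 months, move objects down to the cheaper Coldline class...
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
# ...and after a year, let GCP delete (garbage-collect) them entirely.
bucket.add_lifecycle_delete_rule(age=365)
bucket.patch()  # push the updated lifecycle configuration to the bucket
```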
63:00 - 63:30 Now, with AWS you have something similar called AWS S3, Simple Storage Service. I'll show you what that is, but when I discuss the IAM part, that is when I'll show you, because I want to also talk about multifactor authentication and stuff like that, so meanwhile you can wait for it. Okay, so this is what these buckets, or these storage services, do for you, so I hope this is clear to you. We also discussed the VPC part, right? So let's quickly see what VPC does. So do we have VPC here? For VPC I'll have to maybe scroll up or scroll down.
63:30 - 64:00 VPC, there you go, and you say create your VPC networks. Okay, so this is where you can create your virtual private cloud. By default you can see that these are the subnets that are assigned to it. We will not create one, but I'll show you what can be done here; it's fairly easy, you can definitely try doing it on your own, but let's not spend too much time doing this. So you can basically give a name, maybe 'abc' as a network, okay, that's a terrible name but let's stick to that. Apart from that, you can decide whether you want to automatically create subnets
64:00 - 64:30 or custom-create them. What's the difference? When you custom-create, you specify what IP addresses you want to put under a subnet; if you select automatic, it will automatically distribute the IP addresses. Say for example you have 20 IP addresses: you can decide which of these 20 IP addresses need to be assigned to which subnet, okay? So accordingly you can decide that, based upon the region in which these exist. So this is one, and then once you do that, you can also go ahead and create a firewall here. So
64:30 - 65:00 what do we do with a firewall? With a firewall you can decide who gets to access this network, right, where you are allowing data to enter into this network. Now let's assume that I have an instance; we saw how to create a CentOS instance, right? So if I have one running in this network, in that case I need to decide how I can sign in to this instance, right? Am I SSHing into it, and does my virtual private cloud allow me to do that? So I can set up policies that let us control these things. Okay, so this is how the virtual private cloud works.
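The SSH-access decision from the demo maps to a firewall rule; here is a sketch with the google-cloud-compute client, reusing the hypothetical VPC from earlier (in real use you would narrow the source range):

```python
from google.cloud import compute_v1

project = "my-project-id"  # hypothetical

# Allow inbound SSH (TCP port 22) into the demo VPC, as the console step does.
firewall = compute_v1.Firewall(
    name="demo-allow-ssh",
    network=f"projects/{project}/global/networks/demo-vpc",
    direction="INGRESS",
    allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["22"])],
    source_ranges=["0.0.0.0/0"],  # wide open for the demo; restrict in practice
)
compute_v1.FirewallsClient().insert(project=project, firewall_resource=firewall).result()
```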
65:00 - 65:30 And similarly, if you look for CDN, I'm sure you'll have that here as well, because we've discussed it. So you have your Cloud CDN as well. When you talk about your Cloud CDN, you can add an origin here; when you say origin, you can decide where you want to basically go ahead and cache your data from, and that is why you add your origin and stuff here. Okay, so that is what your CDN lets you do; that is what CDN is all about. Let's now go back to the presentation and discuss the other services that we have with
65:30 - 66:00 us. So let's now go ahead and try to understand how GCP security works. Now when you talk about security, security is a very important aspect for clouds as well. If you go back to, like, 2012 to 2014, people already questioned cloud security; the reason was there were many outages back then where people lost a lot of data. But that has changed, whether you talk about physical security, where these data centers are guarded 24/7 by physical resources or physical individuals, or the fact that people do not know where these data
66:00 - 66:30 centers are actually located to visit; yes, we know there's a data center in Mumbai, but you will not know where it is located, very few people know about it. Apart from that, if you talk about network security, there are various practices: be it your IAM, where you can control identity and access management, be it running security checks through and through, or be it setting up shared security models, where you as a consumer get to control certain security activities, and the cloud as a vendor knows, okay, these are the things that we need to control, and they also control
66:30 - 67:00 those security aspects. So you get more security when you talk about cloud. Now what are the things, or what are the services, that are there? When you talk about cloud security services, there are plenty of security services in the market; the major ones are your Cloud Resource Manager, then you have your IAM, the Security Scanner, and platform security. There are others as well, as I've mentioned, but these are the core ones. If you talk about the Cloud Resource Manager, what a Cloud Resource Manager does is it lets you set up a structure, right? We've talked about that:
67:00 - 67:30 your projects hold all your resources; it's a similar structuring that we are talking about. Since you put all your resources in a structure, it becomes easier for you to decide who gets to access what and, more importantly, how. Okay, now this is where your IAM also comes into the picture. What IAM does is basically ensure that you as a user get to control who gets to access what and how. So we'll see this in the
67:30 - 68:00 demo part; I basically have some IAM users through which I can access my AWS account, which is very similar in Google Cloud Platform as well, and I'll show you how that works. Okay, so basically, when you talk about Cloud IAM, what it does is it helps you create users, to start with. So I can create, say for example, 10 users for 10 different people: I have an organization where I want these 10 different people to do different things, so I'll be creating 10 different accounts, so that when these people access the cloud, they get to access this cloud from their own account, and I can
68:00 - 68:30 decide how much access to give to which individual, right? I as an admin have entire control, but for others I can decide who gets to access what. To give you another example, let's assume that you have a set of developers who might be needing access to all the developer tools and activities that concern them; next is your analytics team, who needs access to more management tools and not the developer tools. So I can set up two groups, right? I can create two groups, this is another concept, and in these groups I can decide who gets to access what in that
68:30 - 69:00 group, and accordingly create users and put them in that user group, so basically they get to access only those resources, right? Then you can also create policies, where you can do some service-related work, where you can decide what service requests to allow and all those things; so that is where your policies and roles come into the picture. So that is what Cloud IAM does: it basically lets you govern the identity and access management on your cloud platform.
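The groups-and-roles idea boils down to an IAM policy document of bindings; here is a sketch that simply builds one in Python (the project, groups, and user are invented), which could then be applied with `gcloud projects set-iam-policy my-project-id policy.json`:

```python
import json

# Each binding grants one role to a list of members (users, groups, ...).
policy = {
    "bindings": [
        {   # developers get broad edit rights on the project
            "role": "roles/editor",
            "members": ["group:developers@example.com"],
        },
        {   # the analytics team only gets read-only access
            "role": "roles/viewer",
            "members": ["group:analytics@example.com", "user:vishal@example.com"],
        },
    ]
}

with open("policy.json", "w") as f:
    json.dump(policy, f, indent=2)
```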
69:00 - 69:30 Then you have your Cloud Security Scanner. Now what exactly is a Cloud Security Scanner? When you talk about a Cloud Security Scanner, basically, let's assume that I have n number of virtual machines created. Now these virtual machines would be accessed through different URLs, right? The applications that are based on top of them would have certain URLs, and people would be accessing them from different resources. What the Cloud Security Scanner does is it basically scans, or crawls, through all the links, all the websites and applications that are being accessed, and checks for security issues. When you talk
69:30 - 70:00 about Google Cloud Platform security, on the other hand, it is a more generalized version which controls security on top of Google Cloud Platform. So this is what the security services on Google Cloud Platform let you do, and what they are exactly. Let's now go ahead and understand something else, and then we'll probably again jump back into the demo part. So you have the management and developer tools; instead, what I will do, guys, is I'll just go ahead and discuss all these services at a stretch
70:00 - 70:30 and then we'll explore them in the practical part of things. Okay, so let's try and understand the management and developer tools as well. Now when you talk about the management and developer tools: management tools are something that let you govern all the management activities that are there on your cloud, right? So say for example you have your monitoring and logging services. I don't know whether you noticed it, but when I opened my GCP there was a dashboard there; it is something that can be created by using your monitoring and logging services, and I'll show you once we get into the demo part. So that is
70:30 - 71:00 where you'll get data about all those logs, all the requests that you've made, right, all the services that you've launched, and the CPU utilization and stuff like that. Apart from that you have other services like your Cloud Shell and Cloud APIs, right? These let you control your applications more programmatically through command-line interfaces, where you can actually go ahead and code and then access these resources. Apart from that you have your Cloud Console, which limits or reduces the amount of coding that you need to do in order to access these resources, and then you have your
71:00 - 71:30 Cloud APIs, which, again, I believe I've already discussed, so let's not get into that; but APIs and CLIs, that is what they let you do, or Cloud Shell, that is what it lets you do. The Cloud mobile app lets you basically monitor your data using the mobile application that you can connect on your phone by using your Google Cloud account; so your data would be residing on the cloud, but you can monitor that data by using the Cloud mobile app. And then you have your developer tools. Now this is where your developers come into the picture: when you talk about cloud, you are not going to just go ahead and put your data there
71:30 - 72:00 and manage that data, right? You'd also be building applications that you want to use. So can you do that on top of cloud? Definitely, you can do that on top of Google Cloud as well. You have something called Google SDKs, then you have your Deployment Manager, and you have your Google Cloud Source Repositories. SDKs are something that let you work with software development toolkits on top of cloud, and Deployment Manager is something that lets you manage your deployments of applications and resources on top of Google Cloud. In terms of Cloud Source Repositories, it is something that basically lets you have a version
72:00 - 72:30 control system on top of cloud. So we have all heard about Git, right? Git is nothing but a version control system, so using GCP Cloud Source Repositories you can create similar version control on top of cloud, or you can even connect to Git to basically import those repositories, and then you can pull and push your data and work on top of it. Okay, so that is what your Google Cloud Source Repositories let you do. Next we have Cloud Tools for Android Studio, where you can do Android development, right,
72:30 - 73:00 and then there are other application-based tools as well: something for your PowerShell implementation, you have tools for Visual Studio and a plugin for Eclipse, and other test labs as well. So that is what your developer tools let you do. Okay, so this is how the development and management tools on cloud work; let's now go ahead and discuss some of the other tools as well. Then you have your big data services on top of cloud. Now what do your big data services on cloud do? Okay, so there are plenty of services
73:00 - 73:30 here: you have your BigQuery, you have your Dataproc, Pub/Sub, you have your Dataflow, Datalab, Genomics, and a number of other services. Now BigQuery is nothing but your data warehouse, which basically lets you analyze your data and process data at a very low latency.
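A quick taste of that low-latency analysis, using the google-cloud-bigquery client against one of Google's public datasets (no cluster to manage, you just submit SQL):

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses your default project and credentials

# Scan a public dataset: most common baby names in Texas, 1910-2013.
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    WHERE state = 'TX'
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query).result():
    print(row.name, row.total)
```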
73:30 - 74:00 Dataproc is again another service that lets you deal with your Apache Spark and Apache Hadoop kind of infrastructures. Now when you talk about big data, right, we are talking about Hadoop infrastructure; this is where we are talking about unstructured data being handled, and one of the easiest ways to put your data there is by using something like a Hadoop ecosystem, and by using Spark, which lets you stream-process data at a very quick rate. So that is where your Dataproc service comes into the picture: it basically lets you manage these kinds of data on top of your cloud and lets you manage your big data activities on top of cloud. Then you have your Pub/Sub and Dataflow; now these are your streaming services. So if you've heard of something like Kafka, which lets you publish your data and basically lets you subscribe to
74:00 - 74:30 topics as well: say for example you use Gmail, right? On Gmail, what we do basically is we have certain emails that we've subscribed to and we get them regularly, right, and you can decide to unsubscribe as well. So when you talk about streaming data, if you want certain topics to be subscribed to, you can do that by using Pub/Sub and Dataflow. Dataflow does not let you subscribe, but rather it works on handling streaming data.
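Here is a minimal publish/subscribe sketch with the google-cloud-pubsub client; the project and topic names are hypothetical, and in real use the subscriber would usually run as a separate process:

```python
from google.cloud import pubsub_v1

project = "my-project-id"  # hypothetical

# Publisher side: create a topic and push one message onto it.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project, "orders")
publisher.create_topic(request={"name": topic_path})
future = publisher.publish(topic_path, b"order-created", order_id="42")
print("published message", future.result())

# Subscriber side: attach a subscription so consumers receive those messages.
subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(project, "orders-sub")
subscriber.create_subscription(request={"name": sub_path, "topic": topic_path})
```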
74:30 - 75:00 And then you have your Genomics and Datalab. Datalab again is a tool that handles big data for you, and Genomics is something that lets you work with research kind of data. Now what do I mean by research kind of data? I'm referring to the data that basically lets you deal with something like the COVID outbreak that we had in 2019, right? Post that, there were many people wanting to research genomic data, the DNA and stuff, right, to understand how this virus evolved. So that kind of study falls under life sciences, and if you are a group of people who want to do it, you can do that on cloud; all your basic infrastructure and underlying aspects
75:00 - 75:30 will be controlled by Google Cloud Platform, and all you worry about is how you work around that piece of data, that set of data. Moving on, you have something like your GCP machine learning services. Now when you talk about GCP machine learning services, we are referring to practicing machine learning on top of the GCP cloud. Now machine learning, as we all know, has been trending a lot in the last decade, and it will continue to trend because the importance of data has changed widely in recent times, and that is why we see people wanting to make a
75:30 - 76:00 career in these domains. So can you perform data science activities and machine learning activities on top of Google Cloud? Definitely you can. How do you do that? You have this set of services to let you practice these things, right? So first and foremost you have your Cloud Machine Learning; it is similar to AWS SageMaker, where you can actually go ahead and practice these things, like having a Python notebook configured readily on top of an instance and then building a model, and with something like Cloud Machine Learning it becomes easier for
76:00 - 76:30 you to deploy those models as well in the public environment, or in the production environment. Other than that, you have your APIs; now these are cognitive services. What do I mean by APIs or these cognitive services? Let's assume that you want to process images: you want to understand what this image is showing, right, whether there's a person in it, there's a celebrity in it, does it match with other images, and stuff like that. So you can use an API like the Vision API. The Speech API is something that lets you convert your text to speech and speech to
76:30 - 77:00 text and stuff like that. Natural language APIs are something that let you work with lexical data analysis, or basically do sentiment analysis and stuff like that. Then you have translation and job APIs as well: translation APIs basically let you translate your data, and job APIs are something that, again, deal with data handling in general. So that is what these services let you do.
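For instance, image labeling with the google-cloud-vision client takes only a few lines, assuming the Vision API is enabled on the project and a local photo.jpg exists:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Send a local image and ask what the API sees in it.
with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, round(label.score, 2))  # e.g. "Dog 0.97"
```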
77:00 - 77:30 Let's now go ahead and look at them practically; let's see some of these services that we've discussed. We've talked about the security services, right, we've gone ahead and discussed the big data ones, the machine learning ones as well; let's try and understand them practically and see how some of those work, or rather what you can do with these services. Okay, so I'm going to quickly switch into my console, so there you go, I've gone back again, and now you see this dashboard here. We talked about the management and developer tools as well; so this is what a dashboard simplifies, right, what kind of data was being used. So initially, when I started the server at 12:30 today, I had a spike there, okay, that is gone; that is when I started one of my services, so
77:30 - 78:00 the CPU utilization rose there. So you can see that the CPU utilization and stuff will change here as you create your resources. Okay, you can monitor other resources, billing and stuff as well; you'd get to know what resources there are. You can see that there are two storage buckets there, one Compute Engine instance that we created a while ago, and one of the buckets that you saw how to create, right? So all the information would be available in your dashboards, and this can be done by the monitoring and logging services that you have. Okay, what are the other things that we can do here? Let's just go ahead and take a look at those security services. IAM is one of
78:00 - 78:30 those that I just mentioned, right? So you can create your IAM users here, and you can decide all those things, who gets to access what. So if you click on members and roles, you can add those; you can just add users as well. Say for example I add a user called Vishal here, okay, forgive the spelling, it's not important. I can select what kind of role this individual needs to have; you have to enter the email address here for that, okay? So you can add an email address, and post that you can decide who gets to access what: is it editor access, owner access,
78:30 - 79:00 viewer access? And I can also add conditions here, for durations, times, days, what kind of access, and stuff like that. You can add more roles here, and then you can just say save. Once you do that, you can add them to groups as well; you can see that there are members here, right, so you can create groups as well and then you can add them there. The policy analyzers and stuff, you'll find all those things here. I mentioned the fact that I'll show you something in AWS as well: so you see, this is the company's account that we used, and I have an IAM user here, okay? You see, this is signed in
79:00 - 79:30 with this account, but when a member logs in, right, if I log in as a member here, it will ask me to do a two-step authentication process: when I enter the password, it will ask me for a number that would be generated by Google Authenticator on my phone. So it's a multifactor authentication process, more than one step of authenticating that I'm the right user. And since this is a pseudo user that I'm using, I do not have admin access to a lot of services; say for example, if I open S3 here, it will not
79:30 - 80:00 let me create a bucket here, because I do not have access to that. You see: 'you don't have permissions to list buckets'. So this is what you can control by using your IAM policies, and that is what your IAM services let you do. So I believe this little bit, or this little gist of things, is clear to you people. Okay, moving on, let's try and understand other things as well. There are other services here as well which you might want to take a look at: if you sign in to the console, it will directly sign in right now because I'm
80:00 - 80:30 already signed into it; if I wasn't, it would have asked me for all those passwords. There are other services here as well, right? You have your IAM, billing; or basically, S3 is something that is similar to the buckets that I've already shown, and EC2 is something that lets you create instances here, okay? And then there is a plethora of services that deal in management and monitoring as well; if you scroll down you'll get information about those. You see you have VPC here, and CloudFront is one more that should be here or somewhere, which is equivalent to GCP's CDN, and then you
80:30 - 81:00 have your developer tools. Okay, so you have your CodeCommit, which is similar to the Google Cloud Source Repositories that we saw. So that is what these services let you do, and these basically let you deal with all this data that is there for you to work with. Okay, let's now go back to this thing and take a look at other things as well. So I believe the security bit is clear, and the management bit is clear to some extent as well for you people. Big data services: we've discussed that there are a lot of big data services here as well; if you scroll down, or if you just go
81:00 - 81:30 back here and click on Services, you'll see all the other services here. Okay, so we talked about Pub/Sub as well, right, if you remember. So these are the databases, right: Bigtable, Datastore, the NoSQL databases that we discussed; you also have your Spanner, which is again a database service; SQL is something that stores your SQL data; monitoring is one of the things we discussed; Cloud Build is something that lets you build your code, I've told you, you can build your code there, whatever pieces of code you have. And similarly, if you scroll down, this Pub/Sub, right, I talked
81:30 - 82:00 about topics, subscriptions, and snapshots; you can do all these things by using this particular set of services here as well. Okay, so in terms of APIs as well, I've talked about the fact that there are Vision APIs, but again, when you want to use Vision APIs and stuff like that, you need to basically activate those APIs, which I don't think are activated here right now, so I'm afraid I would not be able to show you a demo on that, but you can do that. Text-to-speech is what you can use for that too; you need access, but you can actually go ahead and enable these APIs
82:00 - 82:30 and you can actually go ahead and work on these things practically then. Okay, apart from that, there are other things that you can look into. Some of those are: if you go to AWS, you have services like Polly, which lets you convert text to voice and vice versa; Textract is another one that lets you do that; then you have Rekognition, which lets you process images, okay? So if you come here, you can actually go ahead and do all those things, right: facial analysis, whether a person is happy or not, male or female, right, what the age could be; you can
82:30 - 83:00 detect faces, you can do face comparison or celebrity recognition as well, so you can compare two images and tell whether they match with each other or not, whether this is a celebrity face and does Google or Amazon know it, right? So these are the things that you can do with these APIs that we discussed. So these are a few things that you need to look into; there are quite a few things that you can actually explore when you talk about Google Cloud. What I will also do is, as we move further, in future content I will ensure that we come up with
83:00 - 83:30 videos that talk about these services in detail, where we'll focus more on each of these services individually. This being a tutorial video, I had to cover a lot of content here, so we could not discuss all of those in detail, but in the future we'll definitely come up with more individual videos that talk about this content, or those services in particular, in detail. So now let's first understand what infrastructure as a service, that is IaaS, is.
83:30 - 84:00 So, infrastructure as a service refers to online services that provide high-level APIs which are used to abstract away various low-level details of the underlying network infrastructure, like physical computing resources, location, data partitioning, scaling, security, backup, etc. A hypervisor running on a physical host, such as Xen, Oracle VirtualBox, Oracle VM, KVM, or VMware, runs the virtual machines as guests, so pools of hypervisors within the cloud operational system can support
84:00 - 84:30 a large number of virtual machines and the ability to scale services up and down according to customers' varying requirements. Typically, IaaS involves the use of cloud orchestration technology like OpenStack, Apache CloudStack, or OpenNebula. This manages the creation of a virtual machine and decides which hypervisor to start it on, which enables virtual machine migration features between hosts; it also allocates storage volumes and attaches them to virtual machines, and tracks usage information for
84:30 - 85:00 billing, and more. Now let's get an overview of Google Cloud's Compute Engine. So Google Compute Engine is Google's infrastructure-as-a-service virtual machine offering. It allows customers to use virtual machines in the cloud as server resources instead of acquiring and managing server hardware. Google Compute Engine offers virtual machines running in Google's data centers, connected to the worldwide fiber network. The tooling and workflow offered by Compute Engine enable scaling from single instances to global, load-balanced cloud computing. Google Compute Engine enables users to launch virtual machines on demand;
85:00 - 85:30 virtual machines can be launched from the standard images or from custom images created by users. Google Compute Engine users must authenticate based on OAuth 2.0 before launching the virtual machines. So what is OAuth 2.0 here? If you see, OAuth is an open standard for access delegation, commonly used as a way for internet users to grant websites or applications access to their information on other websites, but without giving them the passwords. The mechanism is used by companies such as Amazon, Google, Facebook,
85:30 - 86:00 Microsoft, and Twitter to permit users to share information about their accounts with third-party applications or websites. Now, going back to Google Compute Engine: it can be accessed via the developer console, a RESTful API, or a command-line interface. Now let's have a look at some of its applications. The first one is virtual machine migration to Compute Engine. What it does, as you can also see from the diagram of how it works, is it provides tools to fast-track the migration process from on-premise or other clouds to Google Cloud Platform. If a user is starting with the public cloud, they can leverage these tools
86:00 - 86:30 to seamlessly transfer existing applications from their data center, or from AWS or Azure, to Google Cloud Platform. Users can then have their applications running on Compute Engine within minutes while the data migrates transparently in the background. That's how virtual machine migration works. Then we have genomics data processing; you can see in the chart how it works. Processing genomic data is a computationally intensive process because the information is enormous, with vast sets of sequencing data. So with Compute Engine's potential, users can process such large
86:30 - 87:00 data sets: it processes petabytes of genomic data in seconds with Compute Engine and its high performance computing solution. Google Cloud's scalable and flexible infrastructure enables research to continue without disruptions, and competitive pricing and discounts help you stay within budget to convert ideas into discoveries, hypotheses into cures, and inspirations into products. Then we have BYOL, also known as bring-your-own-license images. In this, we have the normal
87:00 - 87:30 host and then we have the sole-tenant node; you can see how the chart shows its working. What it does is, Compute Engine can help you run Windows apps on Google Cloud Platform by bringing your licenses to the platform, as either license-included images or sole-tenant images, as shown. After you migrate to Google Cloud, optimize or modernize your license usage to achieve your business goals. Take advantage of the many benefits available to virtual machine instances, such as reliable storage options, the speed of the Google network, and also autoscaling. Now
87:30 - 88:00 let's look at some of the key features of Google Compute Engine. The first is machine types: a machine type describes the virtual hardware that is attached to an instance, which includes RAM and CPUs. There are two types of machine types: the first is predefined and the second is custom machine types. Predefined machine types are preconfigured virtual machine templates that can be used to set up virtual machines; the configurations have been pre-optimized by Google and meet most requirements. The predefined machine types are further divided into four subcategories. They are: standard
88:00 - 88:30 virtual machines, which are balanced between processing power and memory; then we have high-memory virtual machines, where the emphasis is put on memory over processing power, for tasks that need to access memory quickly; then we have high-CPU virtual machines, for high-intensity applications that require processing over memory; and then the fourth category, which is shared-core virtual machines, where a single virtual CPU is backed by a physical CPU that can run for a period of time. These machines
88:30 - 89:00 are not for use cases that require ongoing, significant server power. The second main category under machine types is custom machine types: here the virtual machine can be configured manually for a Compute Engine virtual machine instance, so users can select the number of CPUs and the memory, provided they are within Google's set limits.
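You can see both kinds from code; here is a sketch with the google-cloud-compute client (project and zone are placeholders) that lists the predefined types and shows how a custom type is just a specially formed URI:

```python
from google.cloud import compute_v1

project, zone = "my-project-id", "us-central1-a"  # hypothetical

# Predefined machine types available in one zone, with their vCPUs and RAM.
for mt in compute_v1.MachineTypesClient().list(project=project, zone=zone):
    print(mt.name, mt.guest_cpus, "vCPUs,", mt.memory_mb, "MB")

# A custom machine type encodes vCPU count and memory (in MB) in its URI.
custom_type = f"zones/{zone}/machineTypes/custom-4-8192"  # 4 vCPUs, 8 GB
```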
89:00 - 89:30 The second feature is local SSD: Google Compute Engine offers always-encrypted local solid-state drive block storage which is physically attached to the server running the virtual machine; it improves performance and also reduces latency. Now the third one is persistent disks: these are durable, high-performance block storage for virtual machine instances, which can be created in hard disk or SSD formats. Users can take snapshots and create a new persistent disk from a snapshot; if a virtual machine instance is terminated, the data is retained by the persistent disk, which can be attached to another instance. There are two types of persistent disks: the first is standard, the second is SSD. Then we have GPU accelerators: GPUs are added to accelerate computationally
89:30 - 90:00 intensive workloads like machine learning or virtual workstation applications, etc. The fifth one is images: an image contains the operating system and the root file system that users leverage to run a virtual machine instance. Google Cloud Platform provides two main types of images: the first one is public images and the second is custom images. Public images are a collection of open-source and proprietary options; they are a starting point for most virtual machine instances and come packaged with only the operating system. The second one is custom images:
90:00 - 90:30 public images, if you see, are a good starting point, but they are designed to be built upon and turned into custom images to match the needs of the customer. A custom image has the software needed, along with all the scripts necessary, for the instance to work automatically without administrator intervention; these are automatically brought up and shut down for load balancing or recovery needs. The last one is global load balancing: it helps in distributing incoming requests across pools of instances across multiple regions, so that users can achieve
90:30 - 91:00 maximum performance, throughput, and availability at a low cost. Similarly, there are many other features, like Linux and Windows support, containers, reservations, OS patch management, live migration for virtual machines, and many more. Google Compute Engine has many pros, such as fast input/output access and smooth integration with other Google services, and a few cons, such as the fact that most components are based on proprietary technologies and the choice of programming languages is limited. So now that you know the applications and features of Google Compute Engine, let's look at some
91:00 - 91:30 of its major advantages. Okay, so the first is storage efficiency: persistent disks support up to 257 terabytes of storage, which is more than 10 times higher than what Amazon Elastic Block Store can accommodate, so organizations that require more scalable storage options can go for Compute Engine. Then we have cost, as it is cost-effective: within the GCP ecosystem, users pay only for the computing time they have consumed, and per-second billing is the plan used by Google Compute Engine. Then we have stability:
91:30 - 92:00 Google Compute Engine offers stable services because of its ability to live-migrate virtual machines between hosts. Also, Google Cloud Platform has a robust, inbuilt, and redundant backup system; Compute Engine uses this system for flagship products like the search engine and Gmail. Also, coming to security, Google Compute Engine is a more secure and safe place for cloud applications. So now that you have a theoretical understanding of Google Compute Engine, let's practically try our hands at it. You can just directly go to Google Cloud
92:00 - 92:30 Platform; let's open it first. Let's go to the documentation part: from here also you can open the console, from this tab itself. For support purposes, you can just go through this documentation part; though I'm going to explain it, still, if you need any support you can go to this Compute part, where Compute Engine is given, and from here you can get a much deeper understanding of Google Compute Engine. So let's come back; we have opened the console, and this is how the Google Cloud
92:30 - 93:00 Platform dashboard looks. We can either go to Compute Engine from here, or you can just search 'compute engine' here, okay? So these are the virtual machine instance templates; different kinds of virtual machines are given. Then we also have storage; under that, just a minute, we have these snapshots and images. As I've told you, there are built-in images and there are custom images as well. Here are the disks; these are the disks which have
93:00 - 93:30 already been created, and then we also have snapshots; one snapshot is also there. Then we have images: these are the built-in images, you can use any of them, or if you want to make your own custom-type images, you can create an image from here. Okay, now let's finally go to virtual machine instances and see how we can launch an instance. These are the two instances which I've already created; let's go and create a new one. Okay, so you can name your instance here, and with whatever configuration
93:30 - 94:00 you're going to give, the price will change. Okay, I can show you just for demonstration purposes: if you change it to four virtual CPUs, see, the price has changed, right? And again, go back to the small one, okay? You can also add a label here: give it the key 'environment' and the value 'testing', okay? Or you can add more labels than that; you can add more labels, like a key 'app', and you can give a value for it, okay?
94:00 - 94:30 You have to do that, and then you can save it, okay? Then, see, as I'm changing the configuration, that's why the price is changing, okay? And then we have to see the region also; under that, you have to select the region and zone. And under that we have different machine families, okay: right now, under this, the families are general-purpose, compute-optimized, memory-optimized, and GPU. Like you can see, in this E2 series for general purpose, we have the series and machine types of 4 CPUs and 8 GB memory, and all of this like
94:30 - 95:00 that. Then we also have compute-optimized; under this we have 4 CPUs and 16 GB, then for 8 CPUs we have 32 gigs, or for 16 CPUs we have 64 gigs; these kinds of CPUs and memories are given. And if you come to memory-optimized, these are the large ones, the ultra-level ones: there are, like, 96 CPUs and 1.4 TB of memory, also 40 CPUs and 961 GB of memory. And then we also have GPUs; for different machine types we have these kinds of GPUs, for 24 virtual CPUs and 170 GB memory, and this way
95:00 - 95:30 it is given. But remember that you don't get it in every region and zone; I can show you: right now it is selected as us-central1 and us-central1-a, right, and if you change it to Europe West, okay, let's see, if you change it to Europe West, see, now you can see the GPU option is already gone, and if we check now, even the memory-optimized option is also gone. So that's how it works, okay? So let's go back to the default one, the one which we want, yeah, okay. Then we have the boot disk also; you can change this
95:30 - 96:00 boot disk as well: you can go to change, you can select a public image or anything; for a different one you can use this. And for the size, okay, let's go to 50, we can also do that, and then we can select it, okay? Or we can go to custom images: if you want to use any custom image of yours, you can use it here. If you have any snapshots taken, you can also use snapshots here. Also, if you have an existing disk, like if you have made an existing disk from previous virtual machines, then you can use it, but right now we don't have one;
96:00 - 96:30 I've shown you this so you can select it here, okay? You can change it; again, let's just go with Debian only, okay. Then a balanced persistent disk or SSD persistent disk is selected, whichever you can select; give it 10 only and just select it. This is the default setting we have done again. And then you can come to these management, security, disks, networking, sole-tenancy options. Let's come to the networking part: if you already have a network tag or something, you can select it here; also, you can change the hostname, or whatever hostname you have created, you can give a different one as well, okay? And then we have
96:30 - 97:00 disks: if you are creating a virtual machine, then even if you delete the virtual machine, you can retain your disk. There is an option here, 'delete boot disk when instance is deleted', so you can just deselect it, so that if the virtual machine is deleted, the disk will still be retained. Doing that again, so let's create it. Well, I think there is a problem, some problem with the virtual machine instance and its boot disk, yeah, okay: they
97:00 - 97:30 don't support this. What you can do is, there are certain limitations, I want to say, for the free trial, because this is a free trial account, okay? So you can buy one, or there are certain limitations and you have to follow those limitations. So you can simply create a default one with the default settings; right now I'm not doing any customizations or anything, so you can just create that for now. It will take a little time; because I've changed some settings, that's why it wasn't able to run, and there are certain limitations, I hope you understand.
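The same default-style VM can be created programmatically; here is a sketch with the google-cloud-compute client, with a hypothetical project and zone and a Debian boot disk like the one in the demo:

```python
from google.cloud import compute_v1

project, zone = "my-project-id", "us-central1-a"  # hypothetical

instance = compute_v1.Instance(
    name="demo-instance",
    machine_type=f"zones/{zone}/machineTypes/e2-medium",
    disks=[compute_v1.AttachedDisk(
        boot=True,
        auto_delete=False,  # keep the boot disk even if the VM is deleted
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-11",
            disk_size_gb=10,
        ),
    )],
    network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
)
compute_v1.InstancesClient().insert(
    project=project, zone=zone, instance_resource=instance
).result()  # blocks until the create operation finishes
```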
97:30 - 98:00 Yeah, now it's been created. Also, let me tell you this: suppose you have created this instance and you are working on it; you have created this server, this machine I mean, and you're working on it for a long time. You worked, and then suppose a teammate comes, and he sees that, and he feels, okay, this is a lot of mess-up, so he deletes the instance. Now what happens is, it will be like, oh, your work is gone, all the work you have done is gone. But what you can
98:00 - 98:30 do is go to the disks, okay? So this is the instance's disk; what you can do is create a snapshot from here. Once the snapshot is created, it will be listed here, okay: like, another snapshot is here for instance one; if you created it for instance two, it would be created here as well. So while creating an instance, what you can do, if you have taken a snapshot, is go to the boot disk, change it, and select a snapshot from there, okay? Like right now a
98:30 - 99:00 snapshot is given for instance one; if you use it, all your work will be retained. So that's how it works. Also, if you want to delete an instance, you can just go here and delete the instance from here; it will take a few seconds. Yeah, the instance got deleted. I hope you have understood; now, this was the basic demo. [Music]
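That snapshot safety net can also be scripted; here is a sketch with the google-cloud-compute client, assuming the hypothetical instance above and that its boot disk shares the instance name:

```python
from google.cloud import compute_v1

project, zone = "my-project-id", "us-central1-a"  # hypothetical

# Snapshot the boot disk so the work survives even if someone deletes the VM.
snapshot = compute_v1.Snapshot(name="demo-instance-snap")
compute_v1.DisksClient().create_snapshot(
    project=project, zone=zone, disk="demo-instance", snapshot_resource=snapshot
).result()
```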
99:00 - 99:30 What is WordPress? WordPress is web publishing software you can use to create a website or a blog. Technically, it could be defined as a free and open-source content management system written in PHP and paired with a MySQL or MariaDB database. It can be used not only to create blogs and websites, but can also be used to create directories, forums, galleries, business websites, online e-commerce websites, and many more. It is the most popular website-building platform in the world. Just to give you an idea about how
99:30 - 100:00 popular WordPress is: WordPress powers about 35% of all internet websites; more bloggers, small businesses, and Fortune 500 companies use WordPress than all the other options combined. Now, you do not need any coding knowledge to use WordPress; it enables you to build and manage your own full-featured website only by using your web browser. Now what makes WordPress so famous? To answer this, let us look at some of its features. The first feature is that it is simple and easy to use: creating
100:00 - 100:30 content with WordPress is as simple as creating and using an MS Word document. You can create posts and pages, format them easily, insert media, and, with the click of a button, your content is live and on the web. Also, WordPress is available in more than 70 languages, so you can choose and create content in the language you're most comfortable with. The next feature is that it is flexible: with WordPress you can create various types of websites, like a personal blog or website, a photo blog, a business website,
100:30 - 101:00 a professional portfolio, a government website, a magazine, a news website, an online community, and many more. You can make your website visually pleasing using themes and extend it with plugins; with WordPress you can also build your very own applications. The next feature is user management: WordPress uses a concept of roles, which are designed to give the site owner the ability to control what users can and cannot do within the site. The site owner can assign different roles to different sets
101:00 - 101:30 of users. Generally, WordPress has six predefined roles: the super admin, administrator, editor, author, contributor, and subscriber. The super admin role allows a user to perform all possible capabilities or functions; the administrator manages the site; the editor works with the content; the author and contributor write the content; and the subscriber has only read capabilities. This lets you have a variety of contributors to a website and lets others simply be a part of your
101:30 - 102:00 community. The next feature is that you can extend it with plugins. WordPress comes packed with a lot of features for every user, and for every feature that is not in the WordPress core, there is a plugin directory with thousands of plugins: you can add complex galleries, social networking, forums, social media widgets, spam protection, calendars, fine-tuned controls for search engine optimization, and forms. These were just some of the plugins which you could use with WordPress. The next feature is that it has an easy theme
102:00 - 102:30 system: you can select from thousands of themes in the theme directory to create a beautiful website. By default, WordPress comes bundled with three default themes, but you can upload your own themes with a few clicks; it only takes a few seconds for you to completely customize your website. The next feature is search engine optimization: WordPress is optimized for search engines right out of the box, and if you want more fine-grained SEO controls, there are plenty of SEO plugins to choose from. Now, these were just some of
102:30 - 103:00 the features of WordPress. Now let us move on to our next topic and see the steps to host WordPress on GCP. The first step would be to sign in to your Google Cloud console; if you're new to Google Cloud, you can just sign up for an account by providing your address and your credit or debit card details. It is a very simple process and it won't take you long. Next, after that, you have to first create a new project in Google Cloud: you will find the first project on the top left corner, and from there you can create your project. The next step is selecting a WordPress instance; for this
103:00 - 103:30 you have to go to the navigation menu, which is on the left-hand side, and under that select Marketplace. Google Cloud Marketplace will allow you to quickly deploy functional software packages that run on Google Cloud; even if you're not familiar with services like Compute Engine or Google Cloud Storage, you can start up with familiar software packages without having to manually configure the software or the virtual machine, storage or network settings. Next you have to select your WordPress instance from the GCP Marketplace. Now, there are various deployments of WordPress in GCP; I
103:30 - 104:00 will show you in the demo part. In today's session we'll select 'WordPress certified by Bitnami', because it is quite simple and straightforward to install; you will find this under the Blog and CMS column. Next it will ask you to configure your WordPress instance, and you can make changes according to your convenience. After the configuration is done, you can simply click on deploy, and after a few moments your WordPress website is deployed on Google Cloud Platform. But this is not the end: your site is only accessible via an IP address, so you'll have to map a domain to the IP address.
104:00 - 104:30 This step is important because if somebody has to access your website, they will prefer to enter the domain name of the website rather than the IP address; you can just register for a new domain name if you do not have one and link it to your WordPress website. Next you have to set up an SSL certificate, which stands for Secure Sockets Layer: this is a type of digital certificate that provides authentication for a website and enables an encrypted connection. This step is not mandatory, but it is recommended. (A minimal CLI sketch of reserving a static IP for the domain-mapping step is given below.)
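As a rough sketch of that domain-mapping step from the command line, assuming the gcloud CLI is set up, here is how a static external IP can be reserved and read back; the name 'wp-static-ip' and the region are placeholders, not values from the video:
# Reserve a new static external IP in the region where the VM runs
$ gcloud compute addresses create wp-static-ip --region=us-east1
# Print the reserved address, so your domain's A record can point at it
$ gcloud compute addresses describe wp-static-ip --region=us-east1 --format='get(address)'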
104:30 - 105:00 Now let us move on to our next topic and see some of the benefits of hosting WordPress on Google Cloud Platform. The first benefit is uptime: businesses such as big e-commerce stores, trading sites and news sites rely heavily on optimal server uptimes; they want the servers to be up and running always, because even a slight interruption in the service can cause them a lot of financial damage. But Google Compute Engine is available for more than 99.9% of the time, so companies can be assured they won't have this problem. Next, it is simple to deploy: as I've told you in the previous topic, it's quite
105:00 - 105:30 simple to deploy WordPress on Google Cloud, and I will also show you how simple it is. It also gives you complete liberty to make changes to any of your root files, and with GCP hosting you will get high performance consistently, no matter how much traffic you receive. The third benefit is reliability: Google Compute Engine uses the same infrastructure as other Google apps like Gmail and YouTube, which means your website is hosted on the most well-maintained hardware, controlled by Google, so you can be assured there would not be much downtime for your website. Google constantly works
105:30 - 106:00 on improving its services so it can provide a better customer experience. The next benefit would be scalability: Google Compute Engine servers are highly scalable and can handle unexpected traffic spikes with ease. So imagine there is a peak time and a lot of users are trying to access the website: as the website is hosted on Google Cloud, it will scale its servers up in order to match the incoming traffic. With GCP you can also upgrade or downgrade your server size without changing the IP address. Now, these were some of the advantages of
106:00 - 106:30 hosting a WordPress website on GCP; let us move on to the demo part, where we'll host a WordPress website on Google Cloud Platform. For our demo I've logged in to a GCP account. It is very simple to create a GCP account: all you have to do is enter your debit or credit card details and your address; you might be charged maybe one rupee, but even that will be refunded later. As you sign in to a new account, GCP will provide you $300 of free credit; you can use this $300 to explore Google Cloud
106:30 - 107:00 services, you won't be charged until you choose to upgrade, and it will be valid for 90 days. So the first step is creating a new project: we go to 'My First Project' over here and I will just select a new project. From here we can name the project anything, so let us name it 'demo' and we just create it. Now you can see our demo project is created. Now let us move on and select our WordPress instance: for that we'll go to the navigation menu, and here you have
107:00 - 107:30 something called Marketplace; Marketplace will allow you to quickly deploy functional software packages that run on Google Cloud. So go to Marketplace; here you can see there are a few WordPress instances, but in today's session we're going to use 'WordPress certified by Bitnami and Automattic'. First, though, let us take a look at 'WordPress (Google Click to Deploy)': we select this, and here you can see the overview of the instance; it details the type of virtual machine, the version, the
107:30 - 108:00 operating system and the packages it contains. Here, when we go to pricing, you can see how much it will cost you per month: it will cost us 2,751 rupees per month. Now let us go back and see WordPress by Bitnami; it'll be under the Blog and CMS column, so we'll just click on this. Here is the overview of our instance, its details and the pricing, and you can see the pricing is 1,9 rupees; it is way
108:00 - 108:30 cheaper, and it is very simple and straightforward to deploy; it also comes with a lot of pre-loaded packages which are very helpful for a WordPress website. So now we will go ahead and launch it. Now we have to configure our instance: first we have to name our deployment, so we'll just name it 'wordpress-demo'. Next we have to select a region; for this, if you're using a free tier account, you should select a particular region only. If you go to the GCP free tier page, we can launch an f1-micro
108:30 - 109:00 virtual machine for free only in these regions: Oregon (us-west1), Iowa (us-central1), or South Carolina (us-east1). So we'll go back and just select us-east1. Now you can select anything from b, c, d; these are just the zones available, and we just select this one. Next we have to select the machine type; for this demo I won't be needing too much compute capacity. Here the default is small, with one shared virtual CPU and 1.7 GB memory, but for this demo I'll go with micro, where I get one shared
109:00 - 109:30 virtual CPU and 0.6 GB memory; you can select a compute capacity according to your website's needs. The micro machine type will only cost me $5.13 per month. Next we have the boot disk: the boot disk type can be either standard persistent, SSD persistent or balanced persistent, so let us just go with the standard persistent disk, and
109:30 - 110:00 the boot disk size will be 10 GB. We'll keep the networking at its default. Next we'll select both HTTP and HTTPS traffic from the internet; this basically means allowing network traffic to your website, so anyone with access to the internet can visit your website. We'll accept the terms and conditions and just click on deploy. Now you can go back and check how much it would cost you per month (for me it would cost $5.13 per month), so just go ahead and deploy
110:00 - 110:30 it. It will just take a few minutes for Google to deploy your WordPress website; during the process, software scripts are run, WordPress is configured, and the username and password for your WordPress account are generated. Now you can see our WordPress website is successfully deployed. Let us log in to the WordPress website: we'll just click on the admin URL; our username is 'user', so just copy the password from here and we
110:30 - 111:00 log in, and now we are in our WordPress website. Now let us just post something: go to Posts, Add New, we'll just type 'WordPress on GCP', and we publish it. Now let us view our website; see here, 'WordPress on GCP'. Going back to Google Cloud Platform, you will see that the IP address which we used to log in will keep
111:00 - 111:30 changing every time I restart my virtual machine, so we have to make it static. For that we'll just go to the navigation menu and select VPC Network, and from there we'll go to External IP addresses. The type here is ephemeral; we'll make it static, we just name it, and I will just reserve this. Now the IP address will be static and it won't change every time I restart my virtual machine. (The same promotion can also be done from the command line, as sketched below.)
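For reference, the same ephemeral-to-static promotion can be done with gcloud by passing the VM's current external address explicitly; the IP 203.0.113.25, the name and the region below are placeholders:
# Promote an in-use ephemeral IP to a static one by naming it explicitly
$ gcloud compute addresses create wordpress-ip \
    --addresses=203.0.113.25 \
    --region=us-east1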
111:30 - 112:00 So this was today's demo, but after this you still have to link a domain name to the static IP address and also set up an SSL certificate; an SSL certificate is a bit of code on your web server that provides security for online [Music] communication. Now, what an App Engine is: let's have a little introduction to App Engine. It is a fully managed serverless platform for developing and hosting web applications at scale. You can choose from several popular languages, libraries and frameworks to develop your apps, and then
112:00 - 112:30 let App Engine take care of provisioning servers and scaling your app instances based on demand. Now let's understand the App Engine service provided by Google Cloud Platform, that is, Google App Engine. If I talk about Google, then we all know that it provides an enormous range of tools, products and services; in the running market, Google has scored a high percentile and left its footprint in the list of the world's top four companies. So with Google App Engine, from the name alone we can recognize that Google
112:30 - 113:00 has created an app engine; the name is similar to a search engine, but its purpose is of course different. App Engine is a service and cloud computing platform employed for developing and hosting web applications. It is a platform-as-a-service cloud computing platform that is entirely managed and utilizes in-built services to drive the apps. Once you have downloaded the SDK, that is, the software development kit, you can instantly start the development process, but for this it is necessary to have technical knowledge. If you don't know the technical terms, then there is no
113:00 - 113:30 need to worry, as there are many IT companies in the market that are providing Google App Engine development services. App Engine lets you build highly scalable applications on a fully managed serverless platform; you can scale your applications from zero to planet scale without having to manage infrastructure. You can also free up your developers with zero server management and zero-configuration deployments, and you can stay agile with support for popular development languages and a range of developer tools
113:30 - 114:00 with Google App Engine. Now let's look at some key features of Google App Engine. On programming languages, the platform supports PHP, C#, Java, Python, Go, Node.js, .NET and Ruby applications, and apart from this it also supports other programming languages through custom runtimes; App Engine serves 350+ billion requests per day. Now, on how Google App Engine is open and flexible: custom runtimes in Google App Engine allow you to bring
114:00 - 114:30 any library and framework to App Engine by supplying a Docker container, so you can customize runtimes or provide your own runtime by supplying a custom Docker image or Dockerfile from the open source community. (A hedged sketch of such a Dockerfile is given below.)
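To make that concrete, here is a minimal sketch of what such a custom-runtime setup could look like; the base image, file contents and entrypoint are illustrative assumptions, not taken from the video, and app.yaml would then declare runtime: custom with env: flex:
# Write a minimal custom-runtime Dockerfile (assumed example contents)
$ cat > Dockerfile <<'EOF'
# Start from a Python base image suited to the flexible environment
FROM gcr.io/google-appengine/python
# Copy the app source into the image and install its dependencies
COPY . /app
RUN pip install -r requirements.txt
# Serve the Flask app object from main.py on the port App Engine provides
CMD gunicorn -b :$PORT main:app
EOF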
114:30 - 115:00 Then you can see that Google App Engine is fully managed: its fully managed environment makes it easy to build and deploy an application that runs reliably even under heavy load and with large amounts of data, and it lets you focus on code while App Engine manages infrastructure concerns. Let's now see the architecture of Google App Engine. This is how a simplified architecture looks. Among the main services and structures available are the Google load balancer, which manages the load balancing of the applications; then we have the front-end app, which is responsible for redirecting requests to the appropriate services; then we have Memcache, that is, a cache memory shared between instances of Google App Engine, generating high speed in the availability of the information on the server; and task queues are used, a
115:00 - 115:30 mechanism that provides redirection of long tasks to backend servers, making front-end servers free for new user requests. In addition, Google App Engine also has static and dynamic storage solutions: the static storage solution provides the file storage service called Cloud Storage, whereas the dynamic storage solution provides relational data services such as Cloud SQL and non-relational NoSQL such as Cloud Datastore. Now let's see the development cycle of Google App Engine. Here, if you see, test, build and deploy is the
115:30 - 116:00 software development kit, meaning the SDK: an SDK is a set of software development tools that allows the creation of applications for a certain software package, software framework, hardware platform, computer system, video game console, operating system or similar development platform. The next one in the cycle is manage, which is App Engine administration control, and then we have upgrades: all the upgrades are provided for deployment through the software development kit. Now let's look at the components of an application. The App Engine application is created
116:00 - 116:30 under your Google Cloud project when you create an application resource. The App Engine application is a top-level container that includes the service, version and instance resources that make up your app. When you create your App Engine app, all your resources are created in the region that you choose, including your app code along with a collection of settings, credentials and your app's metadata. Each App Engine application includes at least one service, the default service, which can hold many versions depending on your app's billing status. The following diagram
116:30 - 117:00 illustrates the hierarchy of an App Engine application running with multiple services: in the diagram, the app has two services that contain multiple versions, and two of those versions are actively running on multiple instances. So let's understand the services inside it. You can use services in App Engine to factor your large apps into logical components that can securely share App Engine features and communicate with one another. Generally, your App Engine services behave like microservices;
117:00 - 117:30 therefore, you can run your whole app in a single service, or you can design and deploy multiple services to run as a set of microservices. For example, an app that handles your customer requests might include separate services that each handle different tasks, such as API requests from mobile devices, internal administration-type requests, and backend processing such as billing pipelines and data analysis. Each service in App Engine consists of the source code from your app and the corresponding App Engine configuration files; the set
117:30 - 118:00 of files that you deploy to a service represents a single version of that service, and each time that you deploy to that service you are creating additional versions within that same service. Then we have versions: having multiple versions of your app within each service allows you to quickly switch between different versions of the app for rollbacks, testing or other temporary events. You can route traffic to one or more specific versions of your app by migrating or splitting traffic. (A small gcloud sketch of this is given below.)
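As a small illustration of that version switching, here is a hedged gcloud sketch; the service name 'default' is App Engine's default service, while v1 and v2 are made-up version IDs:
# Split traffic 50/50 between two deployed versions of the default service
$ gcloud app services set-traffic default --splits=v1=0.5,v2=0.5
# Or gradually migrate all traffic to one version
$ gcloud app services set-traffic default --splits=v2=1 --migrate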
118:00 - 118:30 Then we have instances: the versions within your services run on one or more instances. By default, App Engine scales your app to match the load: your app will scale up the number of instances that are running to provide consistent performance, or scale down to minimize idle instances and reduce costs. For more information about instances, see how instances are managed; I will explain that in the demo. So now that you have a theoretical understanding of Google App Engine, let's implement a simple app through Google App Engine. You can just go to Google
118:30 - 119:00 Cloud Platform and open it; we have the console, and we have the documents also open, so open the console as well. Remember that if you don't have an account on Google Cloud Platform, it's a very good platform to have your account on, so just create your account. It will ask for some basic details like your name, phone number and address, and it will also ask for your credit or debit card details; it will just deduct one rupee, and that will also be refunded within a while. With the free trial you will get $300 credit for 90 days, and you can use that
119:00 - 119:30 credit for the exercises I'm showing you inside App Engine; you can perform those exercises using that credit. So, you have opened the dashboard; let's go to the documents part. Inside this we have to go to the compute products; under compute we have App Engine, so here it is: Compute, App Engine. Then go to the standard environment, then Python. We are going to implement a simple Hello World app using the Python quick
119:30 - 120:00 start. This is the table of contents for how we are going to implement the app. Inside this, the first step is that you have to create a project in Google Cloud Platform. This is how the dashboard of Google Cloud Platform looks; you go here, and it says 'demo' because that's the name of my project. There are a number of projects here, but you can also go to New Project and create one. So let's go from here. The second step is: remember, when you are creating the project, ensure
120:00 - 120:30 that billing is enabled for it. The third step is to just open it from here and enable the API; open this and just enable the API. The next step, additional prerequisites, is not required here. So then we go to the Hello World app. What we have to do is just go to App Engine. Remember, I explained the application components, inside which we have services, versions and instances; this is the same thing: when you create an app, the service will be provided and
120:30 - 121:00 the versions will be created, and the instances will also be created for it. So what we have to do is just open the Cloud Shell from here: activate Cloud Shell. Then go to the documents part; remember, these steps are here: download the Hello World app. For how we have to implement it, you have to copy this and clone a GitHub repository inside which the Hello World program is already present. So we just have to clone it: paste it here and press enter. Then we can go to the
121:00 - 121:30 editor here. We have to go to the Hello World program, so we go to python-docs-samples, then inside that to appengine, then to flexible, and then to the hello_world program. Where is it? Yes, hello world. In hello_world we have main.py; this is the Python program, and we are going to use Flask for it.
121:30 - 122:00 Using Flask we are creating this; you don't have to code it yourself, you can just use it as it is, since this is just a simple demo. Then we have the app.yaml file, where all the specifics are given. (A minimal sketch of these two files is given below.)
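For readers following along, here is a minimal sketch of what those two files look like in the flexible-environment Hello World sample; it is close to, but not guaranteed identical to, the cloned repository's contents:
# main.py: a one-route Flask app
$ cat > main.py <<'EOF'
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    # The returned string becomes the HTTP response body
    return 'Hello World!'

if __name__ == '__main__':
    # For local testing only; on App Engine, gunicorn serves the app instead
    app.run(host='127.0.0.1', port=8080, debug=True)
EOF
# app.yaml: tells App Engine to run this in the Python flexible environment
$ cat > app.yaml <<'EOF'
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
  python_version: 3
EOF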
122:00 - 122:30 So, now that you have understood the path, we can just go back to the terminal. The repository we have already cloned; what you can see here is that it already exists, because I have already tried the demo here. But when you do it for the first time, the directory won't exist yet, so it will clone, and it will take a little time to get cloned. So what we can do is input the path: we can just type ls, then cd into the location, python-docs-samples, then appengine, and the further path is given by
122:30 - 123:00 again ls, then cd flexible, then hello_world. Then we just have to deploy the app, so just give the command gcloud app deploy and authorize it. What happened here is that it's asking me to continue; but if you are going to do it for the first time, it will ask you to select your project (my project name is demo, with billing enabled), so remember there will be a certain list of projects and you have to select one project, and then there will be another option:
123:00 - 123:30 press enter after selecting the project and it will ask for the region, so you have to select a region (my region is Asia South), and then it will ask you whether to continue, so you just have to type Y here. It doesn't usually take this long, but today there's a little server problem; that's why it's taking this long. I hope you have understood till now both the theoretical part and
123:30 - 124:00 this implementation. Remember those commands which I told you about: the one for cloning will be given to you, but the one for the path you have to remember. So you can see here that services, versions and instances are given: when you create this project, a service will be created (a single service has been created here), and then versions will be created. I have created a new version through Cloud Shell; this one is
124:00 - 124:30 serving right now, while the other two have stopped; those I had already run before. This one from 2021, version 5441, is the version we have created. Under instances, nothing is showing right now, but this shows up once the app gets implemented; when the app starts running, it shows all the information regarding it. So yes, the service is updated; it's almost done. (You can also inspect these three resource levels from the shell, as shown below.)
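If you prefer the shell to the console for this check, these gcloud subcommands list the same three resource levels; the exact output columns can vary by gcloud version:
# Inspect the app's services, versions and running instances from Cloud Shell
$ gcloud app services list
$ gcloud app versions list
$ gcloud app instances list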
124:30 - 125:00 Now what you can do is: here is the link, so just copy the link and go to it. Yes, we got it: 'Hello World'. [Music] Right, what exactly is Google Cloud Anthos? Anthos is a hybrid and multi-cloud application modernization platform. It was launched in June 2019 and can help in rapidly building hybrid and multi-cloud
125:00 - 125:30 applications without compromising on security and without increasing complexity. Now, if you're wondering what Anthos can be used for, here are a few cases where you can use Anthos. Firstly, it helps in provisioning infrastructure both in the cloud and on-premises. Next, it provides infrastructure management tooling, security, policy and compliance solutions. It can also be used for streamlined application development, service management and workload migration from on-premises to the cloud.
125:30 - 126:00 Next, one of the core functionalities of using Anthos is to easily deploy container-based applications in a hybrid or multi-cloud environment in an easy and consistent way; clients can choose from various deployment options such as on-premises bare metal, Google Cloud Platform, AWS or Kubernetes clusters. Google Cloud Anthos allows developers to focus on innovation and create new features, software or products for the company instead of spending time
126:00 - 126:30 on managing complex hybrid environments; this will be taken care of by Google Cloud Anthos. Now I guess you have some idea about what Google Cloud Anthos can be used for. In order to understand Google Cloud Anthos better, let us take a look at its components. Basically, Anthos is a platform composed of several technologies integrated together rather than a single product. It is powered by Kubernetes along with other technologies like Google Kubernetes Engine, Google Kubernetes Engine On-Prem, the Istio service mesh and others. Now let us talk about
126:30 - 127:00 each of these components one by one. Google Kubernetes Engine and Google Kubernetes Engine On-Prem are the main computing components which enable Anthos. GKE is nothing but a managed environment for deploying, managing and scaling containerized applications using the Google infrastructure. If a company already has its own data center or IT infrastructure, it can use GKE On-Prem, which will provide all the benefits of GKE like auto-updates, auto node repair and many more.
127:00 - 127:30 Then, to connect the on-premises data centers and the workloads on GCP, there is Google Cloud Interconnect: a service which provides direct connectivity between on-premises data centers and the workloads on Google Cloud Platform with consistent latency and high bandwidth. Google Anthos Service Mesh enables fully managed service meshes for complex microservices architectures, which includes traffic management, mesh telemetry and securing service communications. And with Anthos Config Management you can
127:30 - 128:00 create configurations which will allow you to easily and consistently manage your resources globally across clouds and data centers. Next, GKE Connect allows you to register GKE On-Prem based clusters to the GCP console; this can help in securely managing the resources and workloads running on them together with the rest of the GKE clusters, and it can be enabled by installing the GKE Connect agent. Organizations can simultaneously migrate their virtual machine applications to Google Kubernetes Engine with Anthos
128:00 - 128:30 Migrate, so the apps can be run and managed through GKE with Istio service mesh capabilities. So now I guess you have some idea about Anthos components and what Anthos is; let us move on to the next topic and see some of the features of Google Cloud Anthos. The first feature: Google Cloud Anthos integrates security into each stage of the application life cycle, from developing to building and running; Anthos enables defense-in-depth security strategies with a comprehensive portfolio of security controls across all
128:30 - 129:00 the deployment models. The next feature is that it offers a fully managed service mesh with built-in visibility: Google Anthos Service Mesh unburdens the operational and development teams by empowering them to manage and secure traffic between services while monitoring, troubleshooting and improving application performance. The next feature is that it provides a container orchestration and management service: Google Cloud Anthos enables you to run Kubernetes clusters anywhere, in both cloud and on-premises
129:00 - 129:30 environments. Anthos can also run on your existing virtualized infrastructure and bare metal servers without a hypervisor layer; it simplifies your application stack, reduces the cost associated with licensing a hypervisor and decreases time spent learning new skills. The next feature of Google Cloud Anthos is serverless computing: Anthos provides a flexible serverless deployment platform called Cloud Run for Anthos, which allows you to deploy your workloads to Anthos clusters running on-premises or on Google
129:30 - 130:00 Cloud, all with the same consistent experience. Cloud Run for Anthos is powered by Knative, an open source project that supports serverless workloads on Kubernetes. The next feature is migrating existing workloads to containers: you can use the Migrate for Anthos service, which minimizes the manual effort required to move and convert existing applications into containers. With Migrate for Anthos you can easily migrate and modernize your existing workloads to containers on a secure and
130:00 - 130:30 managed Kubernetes service. These were some of the features of Google Cloud Anthos; now let us move on to the next topic and see some of the benefits of using Google Cloud Anthos. The first benefit is that it provides various business benefits: according to the Forrester Research Anthos report, which was commissioned by Google, overall Anthos business benefits include operational efficiency, developer productivity and security productivity; on average, organizations saw a 4.8 times return on investment within 3 years of adopting the Anthos
130:30 - 131:00 cloud platform. Developers can use this platform to quickly and easily build and deploy existing container-based applications and microservices-based architectures; they can use Git-based compliance management and CI/CD workflows for configuration as well as code using Anthos Config Management. It also supports the Google Cloud Marketplace, to easily and quickly deploy functional software packages or products into clusters. The next benefit of Anthos is that it provides enhanced security: Anthos
131:00 - 131:30 protects apps with high standards for reliability, availability and vulnerability management. From an infrastructure perspective, Anthos also offers a high level of control and alertness for your services' health and performance, with a comprehensive view. Now, these were just some of the benefits of Google Cloud Anthos; let us take a look at the pricing of Anthos. Anthos charges apply to all managed Anthos clusters and are based on the number of Anthos cluster virtual CPUs, charged on an
131:30 - 132:00 hourly basis. There are two types of pricing options for Anthos. The first one is the pay-as-you-go pricing model, where you are billed for Anthos managed clusters as you use them; if you want to try Google Anthos or use it infrequently, you can choose the pay-as-you-go pricing model. The next type of pricing option is subscription pricing, which provides a discounted price for a committed term: your monthly subscription covers all Anthos deployments, irrespective of environment, at the respective billing rates, and any usage over your monthly
132:00 - 132:30 subscription fee will show as an overage in your monthly bill at the pay-as-you-go price listed here. You can see in the image over here that there are three payment options: the first one is pay-as-you-go on an hourly basis, the next one is pay-as-you-go on a monthly basis, and then the subscription per month; and you can see the rate for each virtual CPU in various deployment models such as Google Cloud, AWS, multi-cloud, on-premises VMware and on-premises bare metal. The good news is that if you're a new Anthos customer,
132:30 - 133:00 you can try Anthos on Google Cloud for free, up to $900 worth of usage or for a maximum of 30 days, whichever happens earlier. During this trial period you're only billed for the applicable fees and then credited at the same time for those fees, up to $900, but you are still billed for the applicable infrastructure usage during the trial. This was about Google Cloud Anthos pricing. Now let us move on to the next topic and see a case study on Google Cloud Anthos. For our case study we'll be talking
133:00 - 133:30 about UPC Polska. UPC Polska is the Polish telecommunications arm of Liberty Global Europe, which offers cable television, broadband internet and other services to roughly 1.5 million customers in Poland. The problem they faced was that they needed to balance their existing IT infrastructure, which took them two decades to build, with a faster and more flexible infrastructure. So when they were looking for a solution, they decided to opt for hybrid IT, which would give them the speed to market they needed
133:30 - 134:00 as well as maintain the existing infrastructure which they valued. Having decided on the hybrid approach, they thought Anthos was the best solution for the company's specific needs because of the consistent experience across environments, the agility enabled by modern CI/CD, and the ability to set policy and ensure security at scale. They then partnered with a systems integrator and focused on the cultural and organizational elements involved in rolling out a new solution. When they opted for Anthos, it provided them with the following benefits: their various teams could focus on their core
134:00 - 134:30 responsibilities rather than infrastructure management; for example, developers could focus on writing great code, while the operations team could use Anthos to effectively manage and run those applications anywhere. Also, the on-premises nature of the company's existing infrastructure had made scaling and general maintenance difficult; by running Anthos in the data centers, the company gained the fully compliant Kubernetes experience necessary to avoid cluster orchestration and management issues, which included managing and scaling the
134:30 - 135:00 containers. They also improved scalability and resilience through containerized GKE clusters on Anthos. This was the case study on Google Cloud [Music] Anthos. Now, what is Google Cloud's Virtual Private Cloud? Google Cloud Virtual Private Cloud provides network functionalities to Compute Engine virtual machine instances, as well as to Google Kubernetes Engine containers, the App Engine flexible environment and other Google Cloud products which are built on
135:00 - 135:30 Compute Engine virtual machines. Basically, VPC provides networking for your cloud-based services that is global, scalable and flexible. Now, Google VPC is quite different from the VPC of other cloud service providers. In the traditional VPC, the VPC provided by other cloud service providers like AWS, the architecture would look something like this: in the first diagram we can see that there are two VPCs built, with two different subnets in two different regions, us-east and us-west. Now, the virtual machine in one
135:30 - 136:00 region can access the internet and communicate with the other virtual machine only through the VPC gateway, which acts as an interface; in the traditional VPC, one virtual machine cannot directly communicate with the other virtual machine. In the Google version of the virtual private cloud, it is a global construct, which means that instead of creating one VPC in us-west and another one in us-east, you just create one VPC and put the subnets in different regions within that VPC. In this case, the virtual machine present in
136:00 - 136:30 one region can directly communicate with the virtual machine in the other region without the help of the VPC gateway. The communication between the virtual machines is handled by Google's underlying network; this is the same network that Google uses for its search engine, YouTube, Gmail and its other applications. Now, the Google version of VPC can be very helpful: let's say for a large project you use the traditional approach; then you have to build multiple VPCs and multiple gateways, which would be very hard to maintain and keep track of. With Google VPC you just have to
136:30 - 137:00 create one VPC and one gateway, and you can create multiple virtual machines in multiple subnets; it is much simpler and easier to maintain. Also, if something goes wrong with the traditional network infrastructure, it would take a lot more time and cost to identify and resolve the issue; in Google VPC there are fewer network constructs to break and troubleshoot, which helps in identifying the problem faster and solving it. Now let us understand VPC networks. You can think of a VPC network the same way as a physical network,
137:00 - 137:30 except that it is virtualized within Google Cloud. A VPC network is a global resource that consists of a list of regional virtual sub-networks in data centers, called subnets, and all of these are connected by a global wide area network. VPC networks are also logically isolated from each other in Google Cloud. Some of the functionalities offered by Google Cloud VPC networks: it provides connectivity for your Compute Engine virtual machine instances, including Google Kubernetes Engine clusters, App Engine instances and other Google Cloud
137:30 - 138:00 products built on Compute Engine virtual machines; it offers built-in internal TCP/UDP load balancing and a proxy system for internal HTTP(S) load balancing; it can also help in connecting to on-premises networks using Cloud VPN tunnels and Cloud Interconnect attachments; and it distributes traffic from Google Cloud external load balancers to the backends. Now, to understand VPC networks better, let us take a look at the architecture. Here you can see we have two regions, us-west1 and us-east1,
138:00 - 138:30 in a VPC network. A region is nothing but a specific geographical location where you can host your resources, and a region can have three or more zones; for example, the us-east1 region has three zones: us-east1-a, us-east1-b and us-east1-c. Now talking about zones: zones are independent of each other, with completely separate physical infrastructure, networking and isolated control planes; this is to ensure that typical failure events only affect that zone. Now coming to subnets: a subnet, or
138:30 - 139:00 sub-network, is a segmented piece of a larger network. Virtual machine instances can be created in a subnet, and the instances can communicate with each other in the same VPC network using their private IP addresses. Here you can see there are two virtual machines in the us-east subnet and two virtual machines in the us-west subnet. These virtual machines can access the internet through VPC routing: VPC routing decides how to send traffic from the virtual machine instances to the destination, and the destination could be
139:00 - 139:30 either other virtual machine instances or the internet. Moving on, let us understand a few important concepts in VPC, like IP addresses, routes and firewall rules; you will find all these concepts in the Google Cloud VPC console. First, let us talk about IP addresses: each virtual machine instance in GCP will have an internal IP address and, typically, an external IP address. The internal IP address is used to communicate between instances in the same VPC network, while the external IP address is used to communicate with
139:30 - 140:00 instances in other networks or the internet. These IP addresses are ephemeral by default but can be statically assigned; ephemeral means the IP address will keep changing every time the virtual machine restarts. Now talking about VPC routes: routes tell virtual machine instances in the VPC network how to send traffic from an instance to a destination; the destination can be either inside the network or outside of Google Cloud, which is the internet. We can also create custom static routes to direct some packets to specific
140:00 - 140:30 destinations. Each VPC network comes with some system-generated routes, and there are two different kinds. First is the default route: this route defines a path for traffic to leave the VPC network; it provides general internet access to virtual machines that meet the requirements, and it also provides the typical path for private Google access. Next, for communication within the network, there are subnet routes: these define the path for sending traffic among instances within the network by using internal IP
140:30 - 141:00 addresses. But for one instance to communicate with another, you must configure appropriate firewall rules, because every network has an implied deny firewall rule for ingress traffic. Now talking about firewall rules: each VPC network implements a distributed virtual firewall that you can configure. Firewall rules allow you to control which packets can travel to which destination; they let you allow or deny connections to or from your virtual machine instances based on a configuration that you specify. When you're creating a VPC firewall rule, you must
141:00 - 141:30 specify the VPC network and a set of configurations that define what the rule does; the configuration enables you to target certain types of traffic based on the traffic's protocol, destination port, sources and destination. You can create and modify VPC firewall rules by using the Google Cloud console, the gcloud command-line tool and REST APIs. (A quick gcloud sketch is given below.)
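As a quick sketch of the gcloud route just mentioned, here is one firewall rule; the rule name, network and source range are assumptions for illustration:
# Allow inbound SSH (TCP port 22) from anywhere into instances on a custom network
$ gcloud compute firewall-rules create demo-allow-ssh \
    --network=demo-vpc \
    --allow=tcp:22 \
    --source-ranges=0.0.0.0/0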
141:30 - 142:00 Now, these were some of the important topics in Google Cloud VPC; let us move on to the next topic and see some of the benefits of Google Cloud VPC. The first is that it is global: using a VPC gives you managed global networking functionality for all your Google Cloud resources, through sub-networks known as subnets which are hosted in Google Cloud data centers. A single Google Cloud VPC and its subnets can span multiple regions without ever connecting to the public internet; it remains isolated from the outside world and is not associated with any specific region or zone. The second benefit is that it is sharable: an entire organization can use one VPC and share it across its various teams;
142:00 - 142:30 different teams can be isolated within projects, with different billing and quotas, yet they can still maintain a shared private IP space and access to commonly used services. The next advantage is that it is expandable: Google Cloud VPC lets you increase the IP space of any subnet without any workload shutdown or downtime, which gives you flexibility and growth options to meet your needs. Now I guess you have some idea about VPC, so let us move on to our next topic and see what the Google Cloud load balancer is. Basically, a load balancer
142:30 - 143:00 distributes user traffic across multiple instances of your application; by spreading the load, load balancing reduces the risk that your application experiences performance issues. Google Cloud offers six types of load balancers: external HTTP(S) load balancing, SSL proxy load balancing, TCP proxy load balancing, external TCP/UDP network load balancing, internal HTTP(S) load balancing, and internal TCP/UDP network load balancing. Now, to decide which load balancer best suits your implementation, consider
143:00 - 143:30 factors such as global versus regional load balancing: you can use global load balancing when your backends are distributed across multiple regions, your users need access to the same application and content, and you want to provide access by using a single IP address; you can use regional load balancing when your backends are only in one region. The next factor is external versus internal load balancing: an external load balancer distributes traffic coming from the internet to your Google Cloud VPC network, while an internal load balancer distributes traffic to instances within
143:30 - 144:00 the GCP network. And the last factor you need to keep in mind is the type of traffic that you need a load balancer to handle, such as HTTP(S), TCP or UDP traffic. This was brief information about Google Cloud load balancing. (A minimal sketch of the first pieces of an external HTTP(S) load balancer is given below.)
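Here is a minimal, hedged sketch of the first pieces of an external HTTP(S) load balancer; the names are assumptions, and a complete setup additionally needs instance groups, a URL map, a target proxy and a forwarding rule:
# Create an HTTP health check that probes port 80
$ gcloud compute health-checks create http demo-hc --port=80
# Create a global backend service that uses the health check
$ gcloud compute backend-services create demo-backend \
    --protocol=HTTP \
    --health-checks=demo-hc \
    --global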
144:00 - 144:30 Let us move on to the next topic and understand the Cloud DNS service. Google Cloud provides a scalable, reliable and managed domain name service, or DNS, running on the same infrastructure as that of Google. But before we get into Cloud DNS, let us understand what DNS is. DNS is a hierarchically distributed database that lets you store IP addresses and other data, and look them up by name. In other words, DNS is a directory of easily readable domain names that translate to the numerical IP addresses used by computers to communicate with each other; for example, when you type a URL into a browser, DNS converts the URL into the IP address of a web server associated with that name, like www.example.com being translated to the numerical IP address of the server
144:30 - 145:00 behind it. The DNS directories are stored and distributed around the world on domain name servers that are updated regularly. Cloud DNS is a high-performance, resilient, global DNS service that publishes your domain names to the global DNS in a cost-effective way. Cloud DNS lets you publish your zones and records in DNS without the burden of managing your own DNS servers and software, and it offers both public zones and private managed DNS zones.
145:00 - 145:30 A public zone is visible to the public internet, while a private zone is visible only from one or more virtual private cloud networks that you specify. (A minimal Cloud DNS sketch with placeholder names is given below.)
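A minimal Cloud DNS sketch of publishing a zone and one record; example.com and the IP 203.0.113.10 are placeholders, and the record-sets create subcommand assumes a reasonably recent gcloud:
# Create a public managed zone for a domain you own
$ gcloud dns managed-zones create demo-zone \
    --dns-name="example.com." \
    --description="Demo zone"
# Publish an A record pointing www at a reserved static IP
$ gcloud dns record-sets create www.example.com. \
    --zone=demo-zone \
    --type=A \
    --ttl=300 \
    --rrdatas=203.0.113.10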
145:30 - 146:00 This was about Cloud DNS. Now let us move on to the demo part, where I will show you how to create a VPC network in Google Cloud. For the demo I've logged in to a GCP account; for people who are new to GCP, this is what the GCP console looks like. It is very simple to create a GCP account: all you have to do is enter your debit or credit card details and your address; you might be charged maybe a rupee, but even that would be refunded later. After you sign in to a new account, GCP will provide you $300 in free credits; you can use this amount to explore Google Cloud services, you won't be charged until you choose to upgrade, and it will be valid for 90 days. Now coming back to the GCP console: you can see we have the project info over here (you must have a project in order to use GCP resources), here is the list of resources that your project uses, and here are the billing and monitoring dashboards. If you're new
146:00 - 146:30 to GCP, you can explore the various services provided by Google Cloud. Our demo is going to be very basic, where I will explain how to create a VPC and subnets. The first step is to select a project: if you're using GCP for the first time, you can just create a new project from here; click on New Project, name your project anything you like, and just create it. For this demo I'll just select an existing project, so I'll let this be. Now let us search for VPC over here; we'll select VPC
146:30 - 147:00 network. You can see here that Google Cloud comes with a default VPC, and this VPC has 25 subnets, each subnet having its own IP address range and sitting in a different region; as I mentioned before, GCP has 25 regions and 76 zones, so a subnet is created in each of the 25 regions. Next, you see something called mode over here, and there are two types of mode, custom and auto; we will talk about this when we're creating a VPC. And there are four default firewall rules. So let us
147:00 - 147:30 now create a new VPC. I'll just go to Create VPC Network, and we can name our VPC anything we want, so I'll just name it demo-vpc; you can see only lowercase letters, numbers and hyphens are allowed. Next you can describe your VPC, but this is optional, so we'll just skip it. Now coming to subnets: we can create subnets by two different methods, one custom and the other automatic. If you select automatic, subnets are
147:30 - 148:00 automatically created with different IP ranges in different regions; you can see that in automatic mode 15 subnets are created in 15 different regions. If you want, you can select firewall rules from here. As I mentioned before, firewall rules allow you to control which packets can travel to which destination: with the demo-vpc allow-icmp rule, only ICMP traffic is allowed from any source to any instance in the network, and with the allow-internal firewall rule, connections are allowed for all protocols
148:00 - 148:30 and ports among instances in the network. Next, the allow-rdp firewall rule allows RDP connections from any source to any instance on the network using port 3389 (the port is given over here). Next is the allow-ssh traffic rule: this allows TCP connections from any source to any instance in the network with destination port 22. These are the default firewall rules, and the default routing mode is set to regional. You see, Google Cloud makes it very
148:30 - 149:00 simple for you to create a VPC: all you have to do is name your VPC, select automatic and click on the create button, and your VPC will be created. But for this demo we're going to select the custom subnet method and create only a few subnets, so we'll just go to custom. Now we can name our subnet anything we want; we'll just name it demo-vpc-subnet, and if you want, you can add a description for your subnet. Next we have to select a region; these are the available regions, so we'll just select
149:00 - 149:30 us-east1, and we have to mention the IP address range, so I'll just mention 10.0.1.0/24. Next we have something called private Google access: this means I can set my virtual machines in the subnet to access Google services without assigning an external IP address, so I'll just turn this on; now my virtual machines will be able to access Google services without an external IP address. I will let the flow
149:30 - 150:00 logs be off; flow logs just record the network flow, and you can turn them on if you want. I'm just going to click on done. Next we can select the regional or global routing mode: regional routing will route only within the region where it was created, and global routing will route to and from all the regions; so let it be the default, regional. Now let us create another subnet: we just name it demo-vpc-subnet-2, and for the region we'll select
150:00 - 150:30 us-west1; we give the IP range 10.0.2.0/24, we'll let private Google access be on, and click on done. I'll just create my VPC now; this might take a couple of minutes. Now you can see the VPC was successfully created, and two subnets were created, one in us-west1 and the other in us-east1. (A gcloud sketch of the same setup is given below.)
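For completeness, the same custom-mode VPC and subnets from the demo can be created with gcloud; the names and ranges mirror the demo, with everything else left at defaults:
# Create the custom-mode VPC
$ gcloud compute networks create demo-vpc --subnet-mode=custom
# First subnet in us-east1, with private Google access turned on
$ gcloud compute networks subnets create demo-vpc-subnet \
    --network=demo-vpc \
    --region=us-east1 \
    --range=10.0.1.0/24 \
    --enable-private-ip-google-access
# Second subnet in us-west1
$ gcloud compute networks subnets create demo-vpc-subnet-2 \
    --network=demo-vpc \
    --region=us-west1 \
    --range=10.0.2.0/24 \
    --enable-private-ip-google-access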
150:30 - 151:00 [Music] Now let's have an introduction to databases. First, let's understand why we need a database. A good database is crucial to any company or organization, because the database stores all the pertinent details about the company, such as employee records, transactional records, salary details, etc. The various reasons a database is important are as follows. First, it manages large amounts of data: a database stores and manages a large amount of data on a daily basis,
151:00 - 151:30 which would not be possible using any other tool such as a spreadsheet, as they would simply not work. Second is accuracy: a database is pretty accurate, as it has all sorts of built-in constraints, checks, etc.; this means that the information available in a database is guaranteed to be correct in most cases. Third, it's easy to update: in a database it is easy to update data using the various data manipulation languages available, one of which is SQL. Fourth, the security of
151:30 - 152:00 data: databases have various methods to ensure the security of data; there are user logins required before accessing a database, and various access specifiers, which allow only authorized users to access the database. Fifth is data integrity: this is ensured in databases by using various constraints, and data integrity in databases makes sure that the data is accurate and consistent. The last one is that it is easy to search a database: it's very easy to search and access the data in a database; this is done using a data query language, which allows
152:00 - 152:30 searching for any data in the database and performing computations on it. Now that you have understood the need for a database, let's briefly understand what it actually is. A database is an organized collection of structured information, or data, typically stored electronically in a computer system. A database is usually controlled by a database management system; together, the data and the database management system, along with the applications that are associated with them, are referred to as a database system, often shortened to just database. Data within the most
152:30 - 153:00 common types of databases in operation today is typically modeled in rows and columns in a series of tables to make processing and data querying efficient. The data can then be easily accessed, managed, modified, updated, controlled and organized. Most databases use structured query language (SQL) for writing and querying data. Databases are used to support the internal operations of organizations and to underpin online interactions with customers and suppliers. Databases are used to hold administrative information and more
153:00 - 153:30 specialized data, such as engineering data or economic models; examples include computerized library systems, flight reservation systems, computerized parts inventory systems, and the many content management systems that store websites as collections of web pages in a database. Now that you have an overview of Google Cloud Platform as well as of databases, let's understand the different types of GCP databases. The first is relational databases: a relational database is a type of database that stores and provides access to data points that are
153:30 - 154:00 related to one another. Relational databases are based on the relational model, an intuitive, straightforward way of representing data in tables. In a relational database, each row in a table is a record with a unique ID called the key; the columns of the table hold the attributes of the data, and each record usually has a value for each attribute, making it easy to establish the relationships among data points. In a relational database, all data is stored and accessed via relations: relations that store data are called base relations, and in implementations these are called tables;
154:00 - 154:30 other relations do not store data, but are computed by applying relational operations to other relations; these are sometimes called derived relations, and in implementations they are called views or queries. Derived relations are convenient in that they act as a single relation, even though they may grab information from several relations. Each relation or table has a primary key, this being a consequence of a relation being a set: a primary key uniquely specifies a tuple within a
154:30 - 155:00 table. While natural attributes (attributes used to describe the data being entered) are sometimes good primary keys, there is also the foreign key in a relational database management system. A foreign key is a field in a relational table that matches the primary key column of another table; it relates the two keys. Foreign keys need not have unique values in the referencing relation, and a foreign key can be used to cross-reference tables: it effectively uses the values of attributes in the referenced relation to restrict the domain of one or more attributes in the referencing relation. (A small SQL sketch of primary and foreign keys is given below.)
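To ground the primary and foreign key idea, here is a small standard-SQL sketch, shown with a MySQL client since Cloud SQL for MySQL appears later in this session; the database and table names are invented for the example:
$ mysql demo_db <<'EOF'
-- Each author row is uniquely identified by its primary key
CREATE TABLE authors (
  author_id INT PRIMARY KEY,
  name      VARCHAR(100)
);
-- posts.author_id is a foreign key: its values must match an existing
-- authors.author_id, which restricts that column's domain to known authors
CREATE TABLE posts (
  post_id   INT PRIMARY KEY,
  title     VARCHAR(200),
  author_id INT,
  FOREIGN KEY (author_id) REFERENCES authors(author_id)
);
EOF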
155:00 - 155:30 Second is key-value databases. A key-value database, or key-value store, is a data storage paradigm designed for storing, retrieving and managing associative arrays, a data structure more commonly known today as a dictionary or hash table. Dictionaries contain a collection of objects, or records, which in turn have many different fields within them, each containing data; these records are stored and retrieved using a key that uniquely identifies the record and is used to find the data within the database. Key-value
155:30 - 156:00 databases work in a very different fashion from the better-known relational databases. Relational databases predefine the data structure in the database as a series of tables containing fields with well-defined data types; exposing the data types to the database program allows it to apply a number of optimizations. In contrast, key-value systems treat the data as a single opaque collection, which may have different fields for every record; this offers considerable flexibility and more closely follows modern concepts like
156:00 - 156:30 object-oriented programming. Because optional values are not represented by placeholders or input parameters, as in most relational databases, key-value databases often use far less memory to store the same database, which can lead to large performance gains in certain workloads. Performance, a lack of standardization
156:30 - 157:00 and other issues limited key-value systems to niche uses for many years, but the rapid move to cloud computing after 2010 has led to a renaissance as part of the broader NoSQL movement. Now, the third one is document databases. A document-oriented database, or document store, is a computer program and data storage system designed for storing, retrieving and managing document-oriented information, also known as semi-structured data. Document-oriented databases are one of the main categories of NoSQL databases, and the popularity of the term
157:00 - 157:30 'document-oriented database' has grown with the use of the term NoSQL itself. XML databases are a subclass of document-oriented databases that are optimized to work with XML documents; graph databases are similar, but add another layer, the relationship, which allows them to link documents for rapid traversal. Document-oriented databases are inherently a subclass of the key-value store and other NoSQL database concepts. The difference lies in the way the data is processed: in a key-value store, the data is considered to be inherently opaque to the database,
157:30 - 158:00 whereas a document-oriented system relies on internal structure in the document in order to extract metadata that the database engine uses for further optimization. Although the difference is often negligible, due to tools in the systems, conceptually the document store is designed to offer a richer experience with modern programming techniques. Document databases contrast strongly with the traditional relational database: relational databases generally store data in separate tables that are defined by the
158:00 - 158:30 programmer, and a single object may be spread across several tables. Document databases store all the information for a given object in a single instance in the database, and every stored object can be different from every other; this eliminates the need for object-relational mapping while loading data into the database. Fourth, we have in-memory databases. An in-memory database (IMDB, also called a main memory database system, MMDB, or a memory-resident database) is a database management system that primarily relies on the main memory of a computer for data
158:30 - 159:00 storage; it is contrasted with database management systems that employ a disk storage mechanism. In-memory databases are faster than disk-optimized databases because disk access is slower than memory access, and the internal optimization algorithms are simpler and execute fewer CPU instructions. Accessing data in memory eliminates seek time when querying the data, which provides faster and more predictable performance than disk. Applications where response time is critical, such as those running telecommunication network equipment and
159:00 - 159:30 mobile advertising networks, often use main memory databases. In-memory databases have gained much traction, especially in the data analytics space, starting in the mid-2000s, mainly due to multi-core processors that can address large memory and due to less expensive RAM. A potential technical hurdle with in-memory data storage is the volatility of RAM: specifically, in the event of a power loss, intentional or otherwise, data stored in volatile RAM is lost. With the introduction of non-volatile random
159:30 - 160:00 access memory technology, in-memory databases will be able to run at full speed and maintain data in the event of a power failure. The last one is additional NoSQL databases: a NoSQL database provides a mechanism for the storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases, and there are additional NoSQL databases present in GCP, like MongoDB and others. Now that you have understood the types of databases, let's understand the services under these types of databases. First we have relational databases, under which we have
160:00 - 160:30 Cloud SQL and Cloud Spanner. Cloud SQL is a fully managed database service that makes it easy to set up, maintain, manage, and administer your relational MySQL databases on Cloud Platform, and the Cloud SQL for MySQL connector allows you to access data from Cloud SQL for MySQL databases within Data Studio. Its key features are, first, fast and easy migration: Database Migration Service makes it easy to migrate databases from on-premises, Compute Engine, and other clouds to Cloud SQL with minimal downtime. Second
160:30 - 161:00 is secure access and connectivity: Cloud SQL data is encrypted when on Google's internal networks and when stored in database tables, temporary files, and backups. Cloud SQL supports private connectivity with Virtual Private Cloud, and every Cloud SQL instance includes a network firewall, allowing you to control public network access to your database instance. Third is easy integration: access Cloud SQL instances from just about any application, easily connect from App Engine, Compute Engine, Google Kubernetes Engine, and your workstation, and open up analytics
161:00 - 161:30 possibilities by using BigQuery to directly query your Cloud SQL databases. Then fourth is standard APIs: build and deploy for the cloud faster, because Cloud SQL offers standard MySQL, PostgreSQL, and SQL Server databases, ensuring application compatibility; use standard connection drivers and built-in migration tools to get started quickly. Then fifth is application compatibility: build and deploy for the cloud faster, because Cloud SQL offers standard MySQL, PostgreSQL, and
161:30 - 162:00 Microsoft SQL Server databases, ensuring application compatibility. Then the last one is automatic storage increase: Cloud SQL can automatically scale up storage capacity when you are near your limit, so you don't have to spend time estimating future storage needs or spend money on capacity until you need it. Now the question is when to choose Cloud SQL: from lift-and-shift of on-premises SQL databases to the cloud, to handling large-scale SQL data analytics, to supporting CMS data storage and
162:00 - 162:30 scalability, and deployment of microservices, Cloud SQL has many uses, and it is the better option when you need relational database capabilities but don't need storage capacity over 10 terabytes.
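To make this concrete, here is a minimal sketch, not shown in the video, of connecting to a Cloud SQL for MySQL instance from Python using the open-source Cloud SQL Python Connector; the project, region, instance, and database names are placeholder assumptions:

```python
# A hedged sketch: assumes the cloud-sql-python-connector and pymysql
# packages are installed and the caller has Cloud SQL Client IAM rights.
from google.cloud.sql.connector import Connector

connector = Connector()
conn = connector.connect(
    "my-project:asia-south1:database-service-demo1",  # hypothetical instance
    "pymysql",
    user="root",
    password="",   # the demo instance later in this course has no password
    db="mysql",
)
with conn.cursor() as cur:
    cur.execute("SELECT NOW()")  # trivial query to verify connectivity
    print(cur.fetchone())
conn.close()
```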
162:30 - 163:00 Now coming to Cloud Spanner. Spanner is a distributed SQL database developed by Google; it is a globally distributed database service and storage solution that provides features such as global transactions, strongly consistent reads, and automatic multi-site replication and failover. Its key features are, first, auto-sharding: Cloud Spanner optimizes performance by automatically sharding the database based on request load and the size of the data, so you can spend less time worrying about how to scale your database and instead focus on scaling your business. Second, it is fully managed, which means easy deployment at every stage and for any size of database; synchronous replication and maintenance are automatic and built in. The third one is regional and multi-regional configurations: no matter where your users may be, apps backed by Cloud
163:00 - 163:30 Spanner can read and write up-to-date, strongly consistent data globally; additionally, when running a multi-region instance, your database is able to survive a regional failure and offers industry-leading 99.999% availability. Fourth, it is built on the Google Cloud network: Cloud Spanner is built on Google's dedicated network, which provides low latency, security, and reliability for serving users across the globe. Fifth, it provides multi-language support: client libraries in C#, C++, Go, Java, Node.js,
163:30 - 164:00 PHP, and Python, and JDBC drivers for connectivity with popular third-party tools. Last is backup and restore: backup and restore recovers the database to the state at which the backup or export was taken, and Spanner provides continuous data protection with the ability to recover your past data to a microsecond granularity. Now the question is when to choose Cloud Spanner: Cloud Spanner should be your go-to option if you plan on using large amounts of data (more than 10 terabytes) and need transactional consistency; it is also a perfect choice if you wish to use sharding for higher throughput and accessibility.
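As an illustration (not part of the video), here is a minimal sketch of a strongly consistent read from Cloud Spanner with the Python client library; the instance, database, and table names are hypothetical:

```python
# A hedged sketch: assumes the google-cloud-spanner package is installed
# and an instance/database with a Users table already exists.
from google.cloud import spanner

client = spanner.Client(project="my-project")
database = client.instance("demo-instance").database("demo-db")

# Snapshot reads in Spanner are strongly consistent by default.
with database.snapshot() as snapshot:
    for row in snapshot.execute_sql("SELECT UserId, Name FROM Users LIMIT 5"):
        print(row)
```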
164:00 - 164:30 Now the next is the key-value database, under which Google provides the Bigtable service. Bigtable is a compressed, high-performance, proprietary data storage system built on Google File System, Chubby lock service, SSTable, and a few other Google technologies. Some of its key features are: first, it is built for use cases such as personalization, ad tech, fintech, digital media, and IoT. Second, it gives better predictions, being designed with a storage
164:30 - 165:00 engine for machine learning applications, leading to better predictions. Third is high throughput at low latency: Bigtable is ideal for storing very large amounts of data in a key-value store, and it supports high read and write throughput at low latency for fast access to large amounts of data; throughput scales linearly, and you can increase QPS (queries per second) by adding Bigtable nodes. Bigtable is built with proven infrastructure that powers Google products used by billions, such as Search and Maps. Then fourth is cluster
165:00 - 165:30 resizing without downtime: scale seamlessly from thousands to millions of reads or writes per second. Bigtable throughput can be dynamically adjusted by adding or removing cluster nodes without restarting, meaning you can increase the size of a Bigtable cluster for a few hours to handle a large load, then reduce the cluster size again, all without any downtime. Fifth is flexible and automatic replication to optimize any workload: write data once and automatically replicate it where needed, with eventual consistency, giving you
165:30 - 166:00 control for high availability and isolation of read and write workloads; no manual steps are needed to ensure consistency, repair data, or synchronize writes and deletes. Benefit from a high-availability SLA of 99.999% for instances with multi-cluster routing across three or more regions (99.9% for single-cluster instances). Next, it easily connects to Google Cloud services such as BigQuery or the Apache ecosystem. Last, it seamlessly scales to match your storage needs, with no downtime during
166:00 - 166:30 reconfiguration. Now the question is when to choose Bigtable: Cloud Bigtable is a good option if you are using large amounts of single-key data, and it is preferable for low-latency, high-throughput workloads. Moving on to the next type of services, that is, document database services, under which we have Cloud Firestore and Firebase. Cloud Firestore is a cloud-hosted NoSQL database that your iOS, Android, and web apps can access directly via native SDKs. Cloud Firestore is also available in
166:30 - 167:00 native Node.js, Java, Python, Unity, C++, and Go SDKs, in addition to REST and RPC APIs. It is a flexible, scalable database for mobile, web, and server development from Firebase and Google Cloud. Some of its key features are: first of all, it is serverless, which helps you focus on your application development using a fully managed, serverless database that effortlessly scales up or down to meet any demand, with no maintenance windows or downtime. Second is live synchronization and offline mode: built-in live synchronization
167:00 - 167:30 and offline mode make it easy to build multi-user, collaborative applications on mobile, web, and IoT devices, including workloads consisting of live asset tracking, activity tracking, real-time analytics, media and product catalogs, communications, social user profiles, and gaming leaderboards. Third is a powerful query engine: Firestore allows you to run sophisticated ACID transactions against your document data, which gives you more flexibility in the way you structure your data. Fourth is libraries for popular languages: focus on your
167:30 - 168:00 application development using Firestore client-side development libraries for web, iOS, Android, Flutter, C++, and Unity; Firestore also supports traditional server-side development libraries using Node.js, Java, Go, Ruby, and PHP. Fifth is security: Firestore seamlessly integrates with Firebase Authentication and Identity Platform to enable customizable, identity-based security access controls, and it enables data validation via a configuration language. The last one is Datastore mode: Firestore supports the Datastore API, so you won't need to
168:00 - 168:30 make any changes to your existing Datastore apps, and you can expect the same performance characteristics and pricing, with the added benefit of strong consistency; existing Cloud Datastore databases will be automatically upgraded to Firestore, expected in 2022. So now the question is when to choose Firestore: when your focus lies on app development and you need live synchronization and offline support.
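For illustration (not from the video), here is a minimal sketch of writing and reading a document with the Firestore Python client; the project, collection, and document names are hypothetical:

```python
# A hedged sketch: assumes the google-cloud-firestore package is installed
# and application default credentials are configured.
from google.cloud import firestore

db = firestore.Client(project="my-project")

doc_ref = db.collection("users").document("alice")
doc_ref.set({"name": "Alice", "visits": 1})   # create or overwrite a document

snapshot = doc_ref.get()
print(snapshot.to_dict())                     # {'name': 'Alice', 'visits': 1}
```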
168:30 - 169:00 Now coming to the Firebase Realtime Database. Over the last few years, Firebase has grown to become Google's app development platform; it now has 16 products to build and grow your app. If you have used Firebase before, you know it already offers a database, the Firebase Realtime Database. The Firebase Realtime Database, with its client SDKs and real-time capabilities, is all about making app development faster and easier; since its launch it has been adopted by hundreds of thousands of developers, and as its adoption grew, so did its usage patterns. Let's discuss some of its key features. First of all, it provides real-time synchronization for JSON data:
169:00 - 169:30 developers began using the Realtime Database for more complex data and to build bigger apps, pushing the limits of the JSON data model and the performance of the database at scale. The Firebase Realtime Database is a cloud-hosted NoSQL database that lets you store and synchronize data between your users in real time; its newer counterpart, Cloud Firestore, enables you to store, synchronize, and query app data at global scale. Second is collaboration across devices with ease: real-time synchronization makes it easy for your
169:30 - 170:00 users to access their data from any device, web or mobile, and it helps your users collaborate with one another. Third is building serverless apps: the Realtime Database ships with mobile and web SDKs, so you can build applications without the need for servers; you can also execute backend code that responds to events triggered by your database, using Cloud Functions for Firebase. Fourth is optimization for offline use: when users go offline, the Realtime Database SDKs use a local cache on the device to serve and store
170:00 - 170:30 changes; when the device comes back online, the local data is automatically synchronized. The last one is strong user-based security: the Realtime Database integrates with Firebase Authentication to provide simple and intuitive authentication for developers; you can use Google's declarative security model to allow access based on user identity or with pattern matching on your data. The use cases for the Firebase Realtime Database involve development of applications that work across devices, advertisement optimization and personalization, and third-party payment processing.
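As a quick illustration (not from the video), here is a sketch of writing JSON data from a backend with the Firebase Admin SDK for Python; the service account file and database URL are placeholders:

```python
# A hedged sketch: assumes the firebase-admin package is installed and a
# Realtime Database instance exists at the (hypothetical) URL below.
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(
    cred, {"databaseURL": "https://demo-app-default-rtdb.firebaseio.com"}
)

ref = db.reference("scores/alice")
ref.set({"points": 120})   # a write to the JSON tree, synced to connected clients
print(ref.get())
```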
170:30 - 171:00 Now moving on to the next type of service, that is, in-memory database services, under which Google provides Memorystore. Memorystore reduces latency with a scalable, secure, and highly available in-memory service for Redis and Memcached. Memorystore automates complex tasks for open-source Redis and Memcached, like enabling high availability, failover, patching, and monitoring, so you can spend more time coding; start with the lowest and smallest size and then grow your instance with minimal impact. Memorystore
171:00 - 171:30 for Memcached can support clusters as large as 5 terabytes, supporting millions of QPS at very low latency. Memorystore for Redis instances are replicated across two zones and provide a 99.9% availability SLA; instances are monitored constantly, and with automatic failover, applications experience minimal disruption. Some of its key features are: first, a choice of engines: choose from the two most popular open-source caching engines to build your applications; Memorystore supports
171:30 - 172:00 both Redis and Memcached and is fully protocol compatible, so choose the right engine that fits your cost and availability requirements. Second is security: Memorystore is protected from the internet using Virtual Private Cloud networks and private IPs, and it comes with IAM integration, all designed to protect your data; systems are monitored 24/7, 365 days a year, ensuring your applications and data are protected. Third, it's fully managed: provisioning, replication, failover, and patching are all automated, which
172:00 - 172:30 drastically reduces the time you spend doing DevOps. Fourth is monitoring: monitor your instances and set up custom alerts with Cloud Monitoring; you can also integrate with OpenCensus to get more insights into client-side metrics. Fifth, it's highly available: standard-tier Memorystore for Redis instances provide a 99.9% availability SLA with automatic failover to ensure that your instance is highly available, and you also get the same availability SLA for Memcached instances. Last is migration:
172:30 - 173:00 Memorystore is compatible with the open-source protocols, which makes it easy to switch your applications with no code changes, and you can leverage the import and export features to migrate your Redis and Memcached instances to Google Cloud. Now the question is when to choose Memorystore: if you are using key-value datasets and your main focus is transaction latency, then you can go for Memorystore.
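Because Memorystore speaks the open-source Redis protocol, a standard client works unchanged; here is a minimal sketch (not from the video) using the Python redis package, with a placeholder private IP:

```python
# A hedged sketch: assumes the redis package is installed and this client
# runs inside the same VPC as the (hypothetical) Memorystore instance.
import redis

r = redis.Redis(host="10.0.0.3", port=6379)  # Memorystore private IP

r.set("session:42", "active", ex=3600)       # cache entry with a 1-hour TTL
print(r.get("session:42"))                   # b'active'
```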
173:00 - 173:30 Now moving on to the last type of database services, that is, additional NoSQL database services, under which we have MongoDB Atlas and Google Cloud partner services. MongoDB is a source-available, cross-platform, document-oriented database program: classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas, and MongoDB Atlas is the best way to deploy, manage, and grow MongoDB on Google Cloud. Engineered and run by the same engineers that build the database, Atlas incorporates best practices developed from supporting thousands of distributed deployments into an easy-to-use, fully
173:30 - 174:00 automated service that grants you the power and freedom to focus on what really matters: building and optimizing your applications. Its key features are: first, its database operations are automated; since database operations take time, MongoDB Atlas lowers your stress by orchestrating the moving parts, deploying clusters in minutes, modifying them on demand with zero downtime, and taking advantage of automated patches. The second is that it is highly scalable: scale up, scale out, or scale down
174:00 - 174:30 with the push of a button or a simple API call; MongoDB Atlas allows you to easily tweak the dimensions of your deployment with no impact on your applications. Third is comprehensive disaster recovery: MongoDB Atlas includes an optional fully managed backup service that provides continuous backups with point-in-time recovery; query your backup snapshots and restore granular data sets in a fraction of the time, and easily restore snapshots to different projects to rapidly spin up
174:30 - 175:00 developer or test environments. Fourth, it's highly available, backed by uptime SLAs: each cluster is distributed across the zones of a Google Cloud region, ensuring no single point of failure; should your primary fail, MongoDB Atlas will automatically trigger the election and failover process with no manual intervention required, and production databases are backed by a 99.95% uptime SLA. Fifth is global clusters: global clusters in MongoDB Atlas allow you to deploy a geographically distributed database that provides low-
175:00 - 175:30 latency, responsive reads and writes to users anywhere, with strong data placement controls to satisfy emerging regulations. And the last one is full performance visibility and optimization: MongoDB Atlas includes optimized dashboards that highlight key historical metrics; view performance in real time, customize alerts, or dig into the details with ease, and built-in tools such as the Performance Advisor highlight slow-running queries and suggest indexes to help optimize your database.
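Since Atlas exposes a standard MongoDB endpoint, the usual drivers apply; here is a minimal sketch (not from the video) using pymongo, with a placeholder connection string:

```python
# A hedged sketch: assumes the pymongo package is installed and an Atlas
# cluster exists behind the (hypothetical) SRV connection string below.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:password@cluster0.example.mongodb.net/")
orders = client["shop"]["orders"]

orders.insert_one({"item": "book", "qty": 2})   # a JSON-like document
print(orders.find_one({"item": "book"}))
```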
175:30 - 176:00 Next we have Google Cloud partner services. You can choose a Google Cloud partner service on the basis of region, specialization, expertise, initiatives, and products. But why work with a Google Cloud partner? First of all, it's flexible: Google Cloud is a global network, meaning you can work with a Google Cloud partner that best fits your organization's needs. The second is knowledge: collaborate with a partner that has the right industry background to unlock your next level of business growth. And the third and last one is experience: have the confidence to tackle your
176:00 - 176:30 toughest business challenges with the support of a partner with proven success. One more thing: Google partners are the ones you can trust; partners that have certified their teams, earned expertise, and achieved specializations have the Google-validated skills to help you achieve your goals. Partners are recognized in three forms. First is specialization: specialization is the highest technical designation a partner can earn; partners who have achieved a specialization in a solution area have an established Google Cloud services
176:30 - 177:00 practice, consistent customer success, and proven technical capabilities, vetted by Google and a third-party assessor. There are specializations in application development, cloud migration, data analytics, data management, Internet of Things, machine learning, marketing analytics, etc. Second is expertise: partners with the expertise designation have demonstrated proficiency and have exhibited customer success through a combination of experience in a specific industry, workload, or product; for example, there is expertise in an industry, and expertise in a Google Cloud
177:00 - 177:30 product or technology, and in a workload. Third is certification: partners with teams of Google Cloud-certified individuals have the validated technical knowledge and advanced skills to address your business's needs by implementing Google Cloud technologies. There are a lot of certifications, like the foundational Cloud Digital Leader, Associate Cloud Engineer, Professional Cloud Architect, Professional Cloud Developer, Professional Data Engineer, and various others. Now let's look at some solutions provided by GCP databases. The first one is database migration: database
177:30 - 178:00 migration is the process of selecting, preparing, extracting, and transforming data and permanently transferring it from one computer storage system to another. With Google Cloud you can simplify your database migration at every step of your cloud journey: migrate to Google Cloud databases to run and manage your databases at global scale while optimizing for efficiency and flexibility. There are two database migration strategies. The first one is to move to the same type of database: lift and shift databases to Google Cloud using databases like Cloud SQL for MySQL,
178:00 - 178:30 Cloud SQL for PostgreSQL, Cloud SQL for SQL Server, Cloud Memorystore for Redis, and Cloud Bigtable, along with Google's open-source partner databases like MongoDB, DataStax, Elastic, Neo4j, InfluxData, and Redis Enterprise; Database Migration Service for Cloud SQL can help to minimize downtime during migration. The second strategy is to move to a new type of database: whether you are moving from proprietary to open-source databases or modernizing from traditional databases
178:30 - 179:00 to scalable, cloud-native databases, there is a solution for you; leverage the serverless change data capture and replication service, Datastream, to synchronize data across databases, storage systems, and applications, and Google's migration assessment guides and partners can help you get started. Now moving on to the next solution, that is, database modernization. Database modernization is primarily about changing applications to work around functional discrepancies between old and new databases. With Google Cloud you can modernize the underlying operational
179:00 - 179:30 databases to make your apps more secure, reliable, scalable, and easier to manage; fully managed solutions reduce complexity and increase agility so you can focus on innovation. You can upgrade the databases your applications are built on by, first, being prepared for growth with quick, seamless scaling, which means scaling Google Cloud databases seamlessly and building cloud-native apps that are prepared to handle seasonal surges or unpredictable growth. Second, move faster and focus on business value, which means enabling developers to
179:30 - 180:00 ship faster and perform less maintenance, with database features like serverless management, autoscaling, and deep integrations. And the third one is to build more powerful applications with Google Cloud to transform your business, with a robust ecosystem of services like Google Kubernetes Engine, and to easily access data for analytics and AI/ML with BigQuery and Google Cloud AI. The third database solution is open-source databases. An open-source database allows users to create a system based on their unique requirements and business needs;
180:00 - 180:30 it is free and can also be shared, and the source code can be modified to match any user preference. Open-source databases address the need to analyze data from a growing number of new applications at lower cost. With Google Cloud, fully managed open-source databases promote innovation without vendor lock-in or high licensing fees; Google Cloud and its partners help you deploy secure open-source databases at scale without managing infrastructure. Make the most of Google Cloud's commitment to open source: first, first-line support, as Google
180:30 - 181:00 provides first-line support for open-source databases, so you can manage and log support tickets from a single window. Second is simple billing: whether you are using NoSQL or relational databases, you will only see one bill, from Google Cloud. Third is a single console: you can provision and manage partner open-source database services straight from your Google Cloud console. Now the last major GCP database solution is SQL Server on Google Cloud. SQL Server is a relational database management system
181:00 - 181:30 developed by Microsoft. As a database server, it is a software product with the primary function of storing and retrieving data as requested by other software applications, which may run either on the same computer or on another computer across a network. Its key features on Google Cloud are: first, lift and shift SQL Server: migrate your existing workloads to Cloud SQL or to SQL Server running on Compute Engine with full compatibility; SQL Server running on Google Cloud works with all of your familiar tools, like SSMS and Visual Studio, so connect your
181:30 - 182:00 existing workloads to the best of what Google Cloud has to offer. Second is reduced operational overhead: Cloud SQL for SQL Server is a fully managed database service with a 99.95% SLA; being fully managed includes upgrades, patching, maintenance, backups, and tuning. Regional availability and various virtual machine shapes, with memory from 3.75 GB up to 416 GB and storage up to 30 TB for all workloads, provide flexible scaling options to
182:00 - 182:30 eliminate the need to pre-provision or plan capacity before you get started. The last one is live migration for the underlying virtual machines: when you run SQL Server on Compute Engine, the virtual machines can migrate between host systems without rebooting, which keeps your applications running even when host systems require maintenance. Now let's understand the methods of deploying databases on Google Cloud Platform. GCP predominantly offers three types of reference architecture models for Google data
182:30 - 183:00 distribution. The first one is single-cloud deployment, the simplest of all deployment models: one can deploy databases by creating new cloud databases on Google and/or by lifting and shifting pre-existing workloads. The second is hybrid-cloud deployment: these deployments are useful when one has applications in the cloud that need to access on-premises databases, or vice versa. There are three primary factors to be considered when deploying a hybrid model, with some data on the cloud platform and some on premises
183:00 - 183:30 (which is why it is shown as a public cloud and a private cloud). Among the primary factors, the first one is the master database: first and foremost, you need to decide whether your master database is stored on-premises or in the cloud; if you choose the cloud, GCP resources act as a data hub for on-premises resources, whereas if you choose on-premises, your in-house resources synchronize data to the cloud for remote use or backup. Second is managed services, available
183:30 - 184:00 for resources in the cloud: these services comprise scalability, redundancy, and automated backups (you do, however, have the option of using third-party managed services). Third is portability: based on the type of data store you choose, the portability of your data is affected too; to ensure reliable and consistent transfer of data, you need to consider a cross-platform data store, such as MySQL. The third kind of deployment is multi-cloud deployment: these deployments can help you effectively
184:00 - 184:30 distribute your database and create multiple failsafes, as they enable you to combine databases deployed on Google Cloud with database services from other cloud providers, thereby giving you the advantage of a wider array of proprietary cloud features. There are two primary factors to be considered when deploying this model. First is integration: ensure that client systems can seamlessly access databases regardless of the cloud they are deployed on, for instance through the use of open-source client libraries that make databases smoothly
184:30 - 185:00 available across clouds. Second is migration: since there are multiple cloud providers, one may need to migrate data between clouds with the help of database replication tools or export/import processes; Google's Storage Transfer Service is one such tool to help you with database migration. Now that you have a theoretical understanding of GCP database services, let's deploy a database service on Google Cloud Platform. You can just search for Google Cloud
185:00 - 185:30 Platform, open the link, and go to the console. You can also go to the documentation to understand the database services in more detail, but because we are going to demonstrate this practically, let's go to the console. This is how the Google Cloud Platform console looks. If you don't have an account on Google Cloud Platform, then create one; Google Cloud Platform is a very nice platform to have an account on, as you get a free tier with it. You can see here my free trial has
185:30 - 186:00 4,711 credits and 23 days left on it. What happens is that you get a free trial of 90 days in which you get 300 US dollars of credit, so you can use that credit for doing the exercise I'm going to explain. Also, when you create the account, it will just debit one rupee, which gets refunded to you quite soon; you just have to give your credit or debit card details for it. Okay, so let's start. Go to the project selector; 'demo' is just the name of my project, and you have to create your own project: just go to 'New
186:00 - 186:30 Project', create your project, give it a name, and that's it. I have this 'demo' and several other projects, so I will go with 'demo' only; the project name, project ID, and project number are shown there. What I'm going to explain here today is the Cloud SQL database service, the Google Cloud SQL service, where I will be creating a SQL instance, and I will also be creating a database in it, along with a user account. So you can just go here and go to
186:30 - 187:00 'Databases', and from there you can go to SQL, or you can search for 'SQL' in the search bar. So here we are; let's go to SQL. This is one of the instances which I have already created, which is why it's showing like this; when you open it for the first time, it asks what kind of instance you want to create, like MySQL or PostgreSQL or another, but because I have already created one, it's not asking me. I will just go and create an instance from here; yeah, you will get this page, okay,
187:00 - 187:30 that's what I was expecting: SQL Server, PostgreSQL, or MySQL. I will be creating a MySQL instance. So give it a name; one more thing, it should be unique, as you cannot reuse the name of an instance. I'll give 'database-service-demo', and there should also be a number, so 'database-service-demo1'. Also give a password if you want to; right now this is just a demo, so I will
187:30 - 188:00 choose no password. Also choose the database version; right now I'm going with MySQL 5.7 only. Also select the region nearest to you: the region nearest to me is asia-south1. Also select your zone. Then you can do further customizations: let's see the machine type and select whichever is suitable for you. Right now I don't require a very high-end one, so I will go for a standard one, but here, for example, in high memory we have four
188:00 - 188:30 virtual CPUs with 26 GB memory, 52 GB memory with eight virtual CPUs, and then 16 virtual CPUs with 104 GB memory; you can also customize it yourself and select whatever configuration you want. Right now I will go with the standard one, which is more than enough, because the higher the specification you choose, the more it will cost, so remember that. I will just take standard: one virtual CPU, 3.75 GB. Then you can choose SSD
188:30 - 189:00 or HDD, whichever you want; I'll just choose 10 GB, which is more than enough for me right now, since this is just for demo purposes. Then we can also go to connections: public or private IP, whichever you want, select that. You can then go to backups and choose at what time you want your backup taken; you can select its region too, but my region is okay. Then for maintenance purposes you can choose on which day the instance gets serviced. So that's it, and also you
189:00 - 189:30 can select your flags and labels, whatever you want to choose. So everything is done now; let's create the instance. Okay, it says to use lowercase letters; so remember to use lowercase letters, numbers, and hyphens, and to start with a letter (the allowed characters are given, keep that in mind). So let's create the instance
189:30 - 190:00 now. As you can see here, the instance has been created; here it is, and here you can see its overview: the public IP is given, and the CPU utilization is shown (right now it's not being utilized, so it's not showing anything), along with the configuration, the suggested service account, and everything else. But one cool thing I'll let you know about these instances:
190:00 - 190:30 you can connect to them through Cloud Shell. Just click here and it gets connected with Cloud Shell; it pre-fills a 'gcloud sql connect' command for this instance, so just press Enter and authorize it. It will take some time, a few minutes, not more than that. You can see it's asking for a password, but I haven't created a password, so I will just press Enter; if you created a password, enter the password and
190:30 - 191:00 go to the next step. Now you can see the databases: just type 'show databases;' and these are the databases. If you want to create a database in it, you can just type 'create database' with a name like 'demo' or 'test'; okay, just don't use a hyphen or anything like that. So just say 'create database
191:00 - 191:30 test;'. Yeah, so it got created; you can run 'show databases;' again, and you can see here that 'test' has been created. So now you can just exit from here: type 'exit' (you have to exit twice), and now you are out of Cloud Shell. Now you can come back to Databases; under 'database-service-demo1', that is the instance name of our SQL database, you can see,
191:30 - 192:00 I mean, this is the name of our SQL instance, database-service-demo1, and here you can see that 'test' has also been created. You can also come to Users, where a default one, 'root', is there. You can create a database from here as well: just click 'Create database', give it a name like 'test1', and create it from here; similarly, 'test1' is created.
192:00 - 192:30 Similarly, you can go to Users and create a user account with a name like 'group66'; give a password as well (it's optional, but we can just give one), and click Create, and the user account will be created. That's how users and databases are created; I hope you have understood how to create them through Cloud Shell and the console, and that's how instances are created through Cloud SQL.
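To round off the demo, here is a minimal sketch, not shown in the video, of querying the new instance from Python over its public IP with pymysql; the IP address is a placeholder, and the instance's authorized networks must allow your client:

```python
# A hedged sketch: assumes the pymysql package is installed and the
# (hypothetical) public IP is reachable from this machine.
import pymysql

conn = pymysql.connect(
    host="34.93.0.10",    # public IP of database-service-demo1 (placeholder)
    user="root",
    password="",          # the demo instance was created without a password
    database="test",
)
with conn.cursor() as cur:
    cur.execute("SHOW DATABASES")
    for (name,) in cur.fetchall():
        print(name)
conn.close()
```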
192:30 - 193:00 Now, what actually is Bigtable? Let's understand that. First of all, Google Bigtable is a key-value database: a key-value database is a data storage paradigm designed for storing, retrieving, and managing associative arrays, a data structure more commonly known today as a dictionary or hash table. Google provides its key-value database service in the form of Bigtable, a distributed storage system for structured data; it is a compressed,
193:00 - 193:30 high-performance, proprietary data storage system built on Google File System, Chubby lock service, SSTable, and a few other Google technologies. You can see here how many of the world's leading companies are choosing Google Cloud to help them innovate faster, make smarter decisions, and collaborate from anywhere. Now, moving ahead, let's understand some of the key features of Google Bigtable. First of all, it has high throughput at low latency: Bigtable is ideal for storing very large amounts of data in a key-value store, and it supports high read
193:30 - 194:00 and write throughput at low latency for fast access to large amounts of data; throughput scales linearly, and you can increase QPS (queries per second) by adding Bigtable nodes. Bigtable is built with proven infrastructure that powers Google products used by billions, such as Search and Maps. Second is cluster resizing without downtime: scale seamlessly from thousands to millions of reads and writes per second; Bigtable throughput can be dynamically adjusted by adding or removing cluster nodes without restarting, meaning you can
194:00 - 194:30 increase the size of a Bigtable cluster for a few hours to handle a large load, then reduce the cluster size again, all without any downtime. Third is flexible and automated replication to optimize any workload: write data once and automatically replicate it where needed, with eventual consistency, giving you control for high availability and isolation of read and write workloads; no manual steps are needed to ensure consistency, repair data, or synchronize writes and deletes. Benefit from a high-availability SLA of 99.999% for instances
194:30 - 195:00 with multi-cluster routing across three or more regions. Now let's understand the architecture of Cloud Bigtable; we will go through it step by step. First of all, you can see here how client requests go through a frontend server, and then nodes are organized into a Cloud Bigtable cluster of a Cloud Bigtable instance; each node in the cluster handles a subset of the requests to the cluster, and nodes are added to increase the number of simultaneous requests handled and the maximum throughput. The
195:00 - 195:30 table is sharded into blocks of contiguous rows, called tablets (similar to HBase regions); tablets are stored on Colossus, Google's file system, in SSTable format. An SSTable is an ordered, immutable map from keys to values, where both are byte strings. Each tablet is associated with a specific node; writes are stored in Colossus's shared log as soon as they are acknowledged, and data is never stored on the nodes themselves: nodes have pointers to a set of tablets stored on
195:30 - 196:00 Colossus. Rebalancing tablets from one node to another is very fast, and recovery from the failure of a node is very fast too; when a Cloud Bigtable node fails, no data is lost. I hope you have understood the architecture; let's now have a look at the data model of Google Bigtable. A Bigtable is a sparse, distributed, persistent, multi-dimensional sorted map; the map is indexed by a row key, a column key, and a timestamp, and each value in the map is an uninterpreted array of bytes. Google settled
196:00 - 196:30 on this data model after examining a variety of potential uses of a Bigtable-like system. As one concrete example that drove some of the design decisions, suppose we want to keep a copy of a large collection of web pages and related information that could be used by many different projects; let us call this particular table the Webtable. In the Webtable, we would use URLs as row keys, various aspects of web pages as column names, and store the contents of the web pages in the 'contents' column under the
196:30 - 197:00 timestamps at which they were fetched, as illustrated in the figure. In this figure you can see a slice of an example table that stores web pages: the row name is a reversed URL, the 'contents' column family contains the page contents, and the 'anchor' column family contains the text of any anchors that reference the page. CNN's homepage is referenced by both the Sports Illustrated and MY-look homepages, so the row contains columns named 'anchor:cnnsi.com' and 'anchor:my.look.ca'. Each anchor cell has one
197:00 - 197:30 version; the 'contents' column has three versions, at timestamps t3, t5, and t6. Now let's understand the keys in the data model. First is the row key: the row keys in a table are arbitrary strings, currently up to 64 KB in size, although 10 to 100 bytes is the typical size for most users. Every read or write of data under a single row key is atomic, regardless of the number of different columns being read or written in the row, a design decision that makes it easier for clients to reason about the system's
197:30 - 198:00 behavior in the presence of concurrent updates to the same row. The table maintains data in lexicographic order by row key, and the row range for a table is dynamically partitioned; each row range is called a tablet, which is the unit of distribution and load balancing. As a result, reads of short row ranges are efficient and typically require communication with only a small number of machines; clients can exploit this property by selecting their row keys so that they get good locality for their
198:00 - 198:30 data accesses. The second key is the column key: column keys are grouped into sets called column families, which form the basic unit of access control. All data stored in a column family is usually of the same type (data in the same column family is compressed together). A column family must be created before data can be stored under any column key in the family; after a family has been created, any column key within the family can be used. The intent is that the number of
198:30 - 199:00 distinct column families in a table be small (in the hundreds at most), and that families rarely change during operation; in contrast, a table may have an unbounded number of columns. A column key is named using the syntax family:qualifier; column family names must be printable, but qualifiers may be arbitrary strings. An example column family for the Webtable is 'language', which stores the language in which a web page was written; we use only one column
199:00 - 199:30 key in the 'language' family, and it stores each web page's language ID. Another useful column family for this table is 'anchor': each column key in this family represents a single anchor, as shown in the figure; the qualifier is the name of the referring site, and the cell contents is the link text. Access control, and both disk and memory accounting, are performed at the column-family level; in our Webtable example, these controls let us manage several different types of applications: some that add new
199:30 - 200:00 base data, some that read the base data and create derived column families, and some that are only allowed to view existing data (and possibly not even to view all of the existing families, for privacy reasons). Now the third key is the timestamp: each cell in a Bigtable can contain multiple versions of the same data, and these versions are indexed by timestamp. Bigtable timestamps are 64-bit integers; they can be assigned by Bigtable, in which case they represent real time in microseconds, or be explicitly
200:00 - 200:30 assigned by client applications; applications that need to avoid collisions must generate unique timestamps themselves. Different versions of a cell are stored in decreasing timestamp order, so that the most recent versions can be read first. To make the management of versioned data less onerous, two per-column-family settings are supported that tell Bigtable to garbage-collect cell versions automatically: the client can specify either that only the last N versions of a cell be kept, or that only new-enough versions be kept (for example, only
200:30 - 201:00 keep values that were written in the last seven days). In the Webtable example, we set the timestamps of the crawled pages stored in the 'contents' column to the times at which those page versions were actually crawled; the garbage-collection mechanism described above lets us keep only the most recent three versions of every page. Now let's understand the use cases for Cloud Bigtable: you can use Google Cloud Bigtable to store and query all of the following types of data. The first is marketing data, such as
201:00 - 201:30 purchase histories and customer preferences; second is financial data, such as transaction histories, stock prices, and currency exchange rates; then Internet of Things data, such as usage reports from energy meters and home appliances; also time-series data, such as CPU and memory usage over time for multiple servers; and last, graph data, such as information about how users are connected to one another.
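To tie the data model together, here is a minimal sketch (not from the video) of writing and reading a Webtable-style row with the Cloud Bigtable Python client; the instance and table names are hypothetical, and the table is assumed to already have a 'contents' column family:

```python
# A hedged sketch: assumes the google-cloud-bigtable package is installed
# and an instance/table with a 'contents' column family already exists.
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
table = client.instance("demo-instance").table("webtable")

# Reversed-URL row key, as in the Webtable example above.
row = table.direct_row(b"com.cnn.www")
row.set_cell("contents", b"html", b"<html>...</html>")  # server-assigned timestamp
row.commit()

# Reads of a single row key are atomic.
result = table.read_row(b"com.cnn.www")
print(result.cells["contents"][b"html"][0].value)
```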
201:30 - 202:00 Now let's move on to BigQuery. First, let's understand why Google released BigQuery and why you would use it instead of a more established solution. Google BigQuery was designed as a cloud-native data warehouse; it was built to address the needs of data-driven organizations in a cloud-first world. BigQuery is GCP's serverless, highly scalable, and cost-effective cloud data warehouse. It provides both batch and streaming modes to load data, and it also allows importing data directly from certain software-as-a-service applications using the BigQuery Data
202:00 - 202:30 Transfer Service. Second is ease of implementation: building your own data warehouse is expensive, time-consuming, and difficult to scale; with BigQuery, you just load data first and pay only for what you use. Third is its speed: it processes billions of rows in seconds and handles real-time analysis of streaming data. Now that you have understood why we need BigQuery, let's understand what it actually is. BigQuery is a fully managed, serverless data warehouse that enables
202:30 - 203:00 scalable analysis over petabytes of data. It is a platform-as-a-service that supports querying using ANSI SQL, and it also has built-in machine learning capabilities, so you can build and operationalize machine learning solutions with simple SQL. You can easily and securely share insights within your organization and beyond, as datasets, queries, spreadsheets, and reports. BigQuery allows organizations to capture and analyze data in real time using its powerful streaming ingestion capability, so that your insights are
203:00 - 203:30 always current. And it's free for up to 1 terabyte of data analyzed each month and 10 GB of data stored. The BigQuery service manages the underlying software as well as the infrastructure, including scalability and high availability. Now let's look at some of the key features of BigQuery. First of all, it's serverless: serverless data warehousing gives you the resources you need when you need them; with BigQuery, you can focus on your data and analysis rather than on operating and sizing computing resources. Second is
203:30 - 204:00 petabyte scale: BigQuery is fast and easy to use on data of any size; with BigQuery, you will get great performance on your data, while knowing you can scale seamlessly to store and analyze petabytes more without having to buy more capacity. Third is real-time analytics: BigQuery's high-speed streaming ingestion API provides a powerful foundation for real-time analytics; BigQuery allows you to analyze what's happening now by making your latest business data immediately
204:00 - 204:30 available for analysis. Fourth is flexible pricing models: BigQuery enables you to choose the pricing model that best suits you; on-demand pricing lets you pay only for the storage and computation that you use, while flat-rate pricing enables high-volume users or enterprises to choose a stable monthly cost for analysis. Fifth is automatic high availability: free data and compute replication in multiple locations means your data is available for queries even in the case of extreme failure modes;
204:30 - 205:00 BigQuery transparently and automatically provides durable, replicated storage and high availability with no extra charge and no additional setup. The sixth one is data encryption and security: you have full control over who has access to the data stored in BigQuery; BigQuery makes it easy to maintain strong security with fine-grained identity and access management through Cloud Identity and Access Management, and your data is always encrypted, at rest and in transit. Seventh is standard SQL: BigQuery supports a standard SQL
205:00 - 205:30 dialect, which is ANSI 2011 compliant, reducing the need for code rewrites and allowing you to take advantage of advanced SQL features; BigQuery provides free ODBC and JDBC drivers to ensure your current applications can interact with BigQuery's powerful engine. (This is a savior for less knowledgeable people like me who hate learning new stuff every day.) Eighth is a foundation for AI: BigQuery provides a flexible, powerful foundation for machine learning and
205:30 - 206:00 artificial intelligence. Besides bringing ML to your data with BigQuery ML, integrations with Cloud ML Engine and TensorFlow enable you to train powerful models on structured data; moreover, BigQuery's ability to transform and analyze data helps you get your data in shape for machine learning. Ninth, and last, is a foundation for BI, that is, business intelligence: BigQuery forms the data warehousing backbone for modern BI solutions and enables seamless data integration, transformation, analysis, visualization, and reporting with tools
206:00 - 206:30 from Google and its technology partners. Let's now deep dive into BigQuery and get an architectural understanding of it. Under the hood, BigQuery employs a vast set of multi-tenant services driven by low-level Google infrastructure technologies like Dremel, Colossus, Jupiter, and Borg: it takes more than just a lot of hardware to make your queries run fast. Query requests are powered by the Dremel query engine, which orchestrates your query by breaking it up into pieces and reassembling the
206:30 - 207:00 results. Dremel turns your SQL query into an execution tree: the leaves of the tree, which it calls slots, do the heavy lifting of reading the data from Colossus and doing any computation necessary, while the branches of the tree are mixers, which perform the aggregation. In between is Shuffle, which takes advantage of Google's Jupiter network to move data extremely rapidly from one place to another. The mixers and slots are all run by Borg, which doles out hardware resources.
207:00 - 207:30 Dremel dynamically apportions slots to queries on an as-needed basis, maintaining fairness among multiple users who are all querying at once; a single user can get thousands of slots to run their queries. Dremel is widely used at Google, from Search to Ads, from YouTube to Gmail, so there's great emphasis on continuously making Dremel better; BigQuery users get the benefit of continuous improvements in performance, durability, efficiency, and scalability,
207:30 - 208:00 without the downtime and upgrades associated with traditional technologies. BigQuery also relies on Colossus, Google's latest-generation distributed file system: each Google data center has its own Colossus cluster, and each Colossus cluster has enough disks to give every BigQuery user thousands of dedicated disks at a time. Colossus also handles replication, recovery, and distributed management. Colossus is fast enough to allow BigQuery to provide similar
208:00 - 208:30 performance to many in-memory databases, while leveraging much cheaper yet highly parallelized, scalable, durable, and performant infrastructure. BigQuery leverages the ColumnIO columnar storage format and compression algorithm to store data in Colossus in the most optimal way for reading large amounts of structured data. Colossus allows BigQuery users to scale to dozens of petabytes of storage seamlessly, without paying the penalty of attaching much more expensive compute
208:30 - 209:00 resources, as is typical with most traditional databases. To give you thousands of CPU cores dedicated to processing your task, BigQuery takes advantage of Borg, Google's large-scale cluster management system. Borg clusters run on dozens of thousands of machines and hundreds of thousands of cores, so a query that used 3,300 CPUs only used a fraction of the capacity reserved for BigQuery, and BigQuery's capacity is only a fraction of the capacity of a Borg cluster. Borg assigns server resources to
209:00 - 209:30 jobs; the job in this case is the Dremel cluster. Machines crash, power supplies fail, network switches die, and a myriad of other problems can occur while running a large production data center; Borg routes around these problems, and the software layer is abstracted. At Google scale, thousands of servers will fail every single day, and Borg protects us from these failures: someone could unplug a rack in the data center in the middle of running your query, and you would never notice the difference. Besides the obvious needs for resource
209:30 - 210:00 coordination and compute resources, big data workloads are often throttled by networking throughput. Google's Jupiter network can deliver one petabit per second of total bisection bandwidth, allowing us to efficiently and quickly distribute large workloads. Jupiter networking infrastructure might be the single biggest differentiator in Google Cloud Platform: it provides enough bandwidth to allow 100,000 machines to communicate with any other machine at 10 Gbps. The
210:00 - 210:30 network bandwidth needed to run our example query would use less than 0.1% of the total capacity of the system. This full-duplex bandwidth means that locality within the cluster is not important: if every machine can talk to every other machine at 10 Gbps, racks don't matter. Traditional approaches to the separation of storage and compute include keeping data in an object store like Google Cloud Storage or AWS S3 and loading the data on demand to virtual machines; this
210:30 - 211:00 approach is often more efficient than co-tenant architectures like HDFS, but it is subject to local virtual machine and object-storage throughput limits. Jupiter allows us to bypass this process entirely and read terabytes of data in seconds directly from storage, for every SQL query. In the end, these low-level infrastructure components are combined with several dozen high-level technologies, APIs, and services, like Bigtable, Spanner, and Stubby, to make one
211:00 - 211:30 transparent, powerful analytics database: BigQuery. The ultimate value of BigQuery is not in the fact that it gives you incredible computing scale; it's that you are able to leverage this scale for your everyday SQL queries, without ever so much as thinking about software, virtual machines, networks, and disks. BigQuery is truly a serverless database, and BigQuery's simplicity allows customers with dozens of petabytes to have a nearly identical experience to its free-tier customers.
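As a concrete taste of that simplicity (not from the video), here is a minimal sketch of running SQL from the BigQuery Python client against a public dataset; only the project ID is a placeholder:

```python
# A hedged sketch: assumes the google-cloud-bigquery package is installed
# and application default credentials are configured.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    WHERE state = 'TX'
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query).result():   # Dremel does the heavy lifting
    print(row.name, row.total)
```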
211:30 - 212:00 tire customers now that we have reviewed the high level architecture of big query we now need to look at the big query storage organizations and format let's dive right into it in this first let's understand big query resource model big query organizes data tables into units called data sets these data sets are scoped to your GCB project these multiple Scopes project data set and table helps you structure your information logically you can use multiple data sets to Separate Tables
212:00 - 212:30 pertaining to different analytical domains and you can use project level scoping to isolate data sets from each other according to your business needs so in this the first one are project projects are root name spaces for objects which contain multiple data sets jobs Access Control list and IM IM roles it also control billing users and user privileges second is data sets which is a collection of related tables of views together with labels and descriptions it allows access control at data set level
212:30 - 213:00 and Define location of data that is multi-regional or Regional that is like if it is multi-regional it can have like us or European union or any other multi regions and if it is just or like Regional so it can be like Asia North East it specifically goes into region right so that's how then there are tables so which are collections of columns and rows stored in managed storage and defined by a schema with strongly typed Columns of values and also allow Access Control at table level
213:00 - 213:30 and column level the next is views so which are virtual tables defined by SQL carry and it allows access control at view level and the last one is jobs these are actions run by big query on your behalf like load data export data copy data or query data jobs are like executed a synchronously the next in this we have is storage management now let's review how bigquery manages the storage that holds your data traditional relational databases like MySQL store data row by row means record oriented
213:30 - 214:00 storage this makes them good at transaction updates and olp means online transaction processing use cases then we have is big pery bigquery on the other hand uses columnar storage where each column is stored in a separate file block this makes bigquery an idle solution for oap means online analytical processing use cases you can stream data easily to Big query tables and update or delete existing values B query supports
214:00 - 214:30 mutations Without Limits mutations like insert update merge or delete bqu uses variations and advancements on columnar storage internally bqu to data in a propiety column format called capacitor which has a number of benefits for data warehouse workloads B query uses a prop fre format because the storage engine can evolve in tandem with a query engine which takes advantage of deep knowledge of the data layout to optimize query execution each column in the table is is
214:30 - 215:00 stored in a separate file block and all the columns are stored in a single capacitor file which are compressed and encrypted on disk theery uses query access patterns to determine the optimal number of physical charts and how data is encoded the actual persistence layer is provided by Google's distributed file system Colossus where data is automatically compressed encrypted replicated and distributed Colossus ensures durability using Erasure
215:00 - 215:30 This is all accomplished without impacting the computing power available for your queries. Separating storage from compute allows you to scale to petabytes of storage seamlessly, without requiring additional, expensive compute resources. There are a number of other benefits of decoupling compute and storage: you can also take advantage of long-term storage, and you can load data into BigQuery at no
215:30 - 216:00 cost, because BigQuery storage costs are based on the amount of data stored (the first 10 GB is free each month) and on whether storage is considered active or long-term. If you have a table or partition that was modified in the last 90 days, it is considered active storage and incurs a monthly charge for data storage at BigQuery storage rates. If you have a table or partition that is not modified for 90 consecutive days, it is considered long-term storage, and the price of storage for that table automatically
216:00 - 216:30 drops by 50%, to the same cost as Cloud Storage Nearline. The discount is applied on a per-table, per-partition basis; if you modify the data in the table, the 90-day counter resets. A best practice when optimizing cost is to keep your data in BigQuery rather than exporting older data to another storage option such as Cloud Storage: take advantage of BigQuery's long-term storage pricing. This means not having to delete old data or architect a data archival process; since the data remains in BigQuery, you
216:30 - 217:00 can also query older data using the same interface, at the same cost levels, and with the same performance characteristics. Let's now learn how to load, or ingest, data into BigQuery and analyze it. There are multiple ways to load data into BigQuery depending on data sources, data formats, load methods, and use cases: the first is batch ingestion, the second is streaming ingestion, and the third is the Data Transfer Service. So the first one is batch ingestion. Batch ingestion involves loading
217:00 - 217:30 large, bounded datasets that don't have to be processed in real time. They are typically ingested at specific, regular frequencies, and all the data arrives at once or not at all. The ingested data is then queried for creating reports or combined with other sources, including real-time data. BigQuery batch load jobs are free: you only pay for storing and querying the data, not for loading it. For batch use cases, Cloud Storage is the recommended place to land incoming data; it is a durable, highly available, and cost-effective object
217:30 - 218:00 storage service. Loading from Cloud Storage into BigQuery supports multiple file formats, like CSV, JSON, Avro, Parquet, and ORC.
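A minimal sketch of such a batch load from the bq command-line tool; the bucket, file, dataset, and schema here are hypothetical:

    bq load --source_format=CSV --skip_leading_rows=1 \
        demo_dataset.sales gs://my-bucket/sales_2021.csv \
        order_id:INTEGER,amount:FLOAT,ts:TIMESTAMP

Because this runs as a batch load job, the loading itself is free; you pay only for the stored bytes and for the queries you later run.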
218:00 - 218:30 The second one is streaming ingestion. Streaming ingestion supports use cases that require analyzing high volumes of continuously arriving data with near-real-time dashboards and queries. Tracking mobile app events is one example of this pattern: the app itself, or the servers supporting its backend, could record user interactions to an event ingestion system such as Cloud Pub/Sub and stream them into BigQuery using data pipeline tools such as Cloud Dataflow, or you can go serverless with Cloud Functions for low-volume events. You can then analyze this data to determine overall trends, such as areas of high interaction or problems, and monitor error conditions in real time. BigQuery streaming ingestion allows you to stream your data into BigQuery one record at a time by using the tabledata.insertAll method. The
218:30 - 219:00 API allows uncoordinated inserts from multiple producers. Ingested data is immediately available to query from the streaming buffer within a few seconds of the first streaming insertion; it might, however, take up to 90 minutes for the data to become available for copy and export operations.
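A minimal sketch of a streaming insert from the bq command-line tool; the dataset, table, and row are hypothetical:

    echo '{"user_id": 42, "event": "page_view"}' > rows.json   # newline-delimited JSON, one record per line
    bq insert demo_dataset.events rows.json                    # streams the rows via the insertAll API

Each line of the file becomes one streamed record, immediately queryable from the streaming buffer as described above.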
219:00 - 219:30 However, one of the common patterns to ingest real-time data on Google Cloud Platform is to read messages from a Cloud Pub/Sub topic using a Cloud Dataflow pipeline that runs in streaming mode and writes to BigQuery tables after the required processing is done. The best part with a Cloud Dataflow pipeline is that you can also reuse the same code for both streaming and batch processing, and Google will manage the work of starting, running, and stopping compute resources to process your pipeline in parallel. This reference
219:30 - 220:00 architecture, which you can see here, covers this use case. I hope you have understood by now what Cloud Dataflow, Cloud Pub/Sub, and Cloud Storage are, and how they work together to make a pipeline through Cloud Dataflow. Please note that you have options beyond Cloud Dataflow to stream data into BigQuery: for example, you can write streaming pipelines in Apache Spark and run them on a Hadoop cluster such as Cloud Dataproc, using the Apache Spark BigQuery connector. You can also call the
220:00 - 220:30 streaming API in any client library to stream data into BigQuery. Then the third one is the Data Transfer Service. The BigQuery Data Transfer Service (DTS) is a fully managed service to ingest data from Google software-as-a-service apps such as Google Ads, from external cloud storage providers such as Amazon S3, and from data warehouse technologies such as Teradata and Amazon Redshift. DTS automates data movement into BigQuery on a scheduled and managed basis.
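A sketch of scheduling such a transfer from the bq command-line tool; the Amazon S3 source, dataset, display name, and params below are hypothetical, so check the DTS documentation for the exact params of your source:

    bq mk --transfer_config \
        --data_source=amazon_s3 \
        --target_dataset=demo_dataset \
        --display_name="Daily S3 import" \
        --params='{"data_path": "s3://my-bucket/exports/*.csv", "destination_table_name_template": "s3_import", "file_format": "CSV"}'

Once created, the transfer runs on the schedule you configure, with no pipeline code to maintain.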
220:30 - 221:00 DTS can be used for data backfills to recover from any outages or gaps. Think of the Data Transfer Service as an effortless data delivery service to import data from applications into BigQuery. Let's now look at the pricing criteria and models of Google's BigQuery. BigQuery pricing has two main components. The first is analysis pricing, which is the cost to process queries, including SQL queries, user-defined functions, scripts, and certain data manipulation language (DML) and data definition language (DDL) statements that scan tables. The second is storage pricing, which
221:00 - 221:30 is the cost to store data that you load into BigQuery. Each project that you create has a billing account attached to it; any charges incurred by BigQuery jobs run in the project are billed to the attached billing account, and the BigQuery storage charges are also billed to the attached billing account. You can view BigQuery costs and trends by using the Cloud Billing reports page in the Cloud Console. So let's discuss the first one, that is, analysis pricing models. BigQuery
221:30 - 222:00 offers a choice of two pricing sub-models for running queries. The first one is on-demand pricing: with this pricing model, you are charged for the number of bytes processed by each query, and the first 1 TB of query data processed per month is free. The second is flat-rate pricing: with this pricing model, you purchase slots, which are virtual CPUs. When you buy slots, you are buying dedicated processing capacity that you can use to run queries. Slots are
222:00 - 222:30 available in the following commitment plans: Flex Slots, where you commit to an initial 60 seconds; the monthly plan, where you commit to an initial 30 days; and the annual plan, where you commit to 365 days. With monthly and annual plans you receive a lower price in exchange for a longer-term capacity commitment. You can combine both models to fit your needs. With on-demand pricing you pay for what you use; however, your queries run using a shared pool of slots,
222:30 - 223:00 so performance can vary. With flat-rate pricing, you purchase guaranteed capacity, with a discounted price for a longer-term commitment. Let's briefly understand the first one, that is, on-demand analysis pricing. By default, queries are billed using the on-demand pricing model. With on-demand pricing, BigQuery charges for the number of bytes processed; you are charged for the number of bytes processed whether the data is stored in BigQuery or in an external data source such as Cloud Storage, Drive, or Cloud Bigtable.
223:00 - 223:30 On-demand pricing is based solely on usage. This is how the on-demand pricing structure looks: you can see on-demand queries are charged $5 per TB, with the first 1 TB per month free. If you pay in a currency other than US dollars, the price listed in your currency on Cloud Platform SKUs applies; here you can see the SKUs, how you choose your currency, and how you can filter by product, SKU name, SKU ID, service region, or service. Okay, now let's discuss
223:30 - 224:00 the second one briefly, that is, flat-rate pricing, which is available for high-volume customers who prefer a stable monthly cost. BigQuery offers flat-rate pricing for customers who prefer a stable cost for queries rather than paying the on-demand price per terabyte of data processed. To enable flat-rate pricing, use BigQuery Reservations. When you enroll in flat-rate pricing, you purchase dedicated query processing capacity, measured in BigQuery slots. Your queries
224:00 - 224:30 consume that capacity, and you are not billed for bytes processed. If your capacity demands exceed your committed capacity, BigQuery will queue up work for your slots, and you will not be charged additional fees. So there are Flex Slots, which are short-term commitments: Flex Slots are a special commitment type whose commitment duration is only 60 seconds. You can cancel Flex Slots anytime thereafter, and you are charged only for the seconds your commitment was deployed. Flex Slots are subject to capacity availability: when you attempt
224:30 - 225:00 to purchase Flex Slots, the success of the purchase is not guaranteed; however, once your commitment purchase is successful, your capacity is guaranteed until you cancel it. The following table shows the cost of a Flex Slot commitment: you can see the hourly cost is $4 for 100 slots, around $2,920 US in monthly charges, based on an average of 730 hours per month. Then there are monthly flat-rate commitments; the following table shows the cost of a monthly slot commitment, where you can see
225:00 - 225:30 $2,000 for 100 slots. Then there are also annual flat-rate commitments; the table here shows the cost of an annual slot commitment, where the monthly cost drops to $1,700 for 100 slots (all of these prices are in US dollars, for the US multi-region). Okay, let's now move on to the second major pricing category, that is, storage pricing. Storage pricing is the cost to store data that you load into BigQuery; you pay for active storage and long-term
225:30 - 226:00 storage. Active storage includes any table or table partition that has been modified in the last 90 days, and long-term storage includes any table or table partition that has not been modified for 90 consecutive days; the price of storage for that table automatically drops by approximately 50%. There is no difference in performance, durability, or availability between active and long-term storage. Here you can see that the first 10 GB of storage per month is free, and you can select whether the rate shown is for active storage or
226:00 - 226:30 long-term storage. Pricing is then given per GB: for active storage it is $0.020 per GB, and for long-term storage it is $0.010 per GB, so you can see how much long-term storage saves; in both cases, the first 10 GB is free each month.
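As a quick illustrative calculation at those listed rates, ignoring the free 10 GB: a 1,000 GB table costs about 1,000 × $0.020 = $20 per month while it is active; once it goes untouched for 90 consecutive days, the same table automatically drops to about 1,000 × $0.010 = $10 per month, with no change in performance.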
226:30 - 227:00 Now that you have a theoretical and architectural understanding of the Google BigQuery service, let's see a practical implementation by trying our hands at running it on Google Cloud Platform. You can just go to the Google Cloud Platform console; we are at the console now. I already have a Google Cloud Platform account; if you don't have one, create one, as it's a very good platform to have an account on. You have to give your credit or debit card details; it will debit one rupee, which will be refunded instantly, and you will get $300 of free credit for 90 days. You can use that credit for the demo I'm showing you, and for using various other services on
227:00 - 227:30 Google Cloud Platform. You can see my free trial is over, but the demo I'm going to show you doesn't cost anything anyway. So let's start: we can go to BigQuery. You will find Google BigQuery under the big data services, so let's see: yeah, here's Big Data, and this is BigQuery; let's open it. What you have to do first is create a dataset, so you can just click here and create a dataset
227:30 - 228:00 from here. That's the project name, you could say, and under it we are creating a dataset. You give the dataset a name, a dataset ID, so you can give it something like demo_bigquery, and click Create dataset. Then create a table in it: yeah, here's the option, click on the dataset you created, then Create table by clicking here. Everything is given: the project name, the dataset name. Yeah, this project name is different, okay,
228:00 - 228:30 I will show you, just a second. So the first step is that you have to create a project from here, and that's a Google Cloud Platform project; I have a project named demo, and I have various other projects too. You can create a new project from here; only then can you start with BigQuery. Okay, so you can create a table under the dataset: the project name is demo, the dataset name is demo_bigquery (BQ meaning BigQuery), and then we give the table a name based on what we are going to do today. We are going to import a public dataset named Stack Overflow. I hope you might be
228:30 - 229:00 knowing about the Stack Overflow website, which covers all technologies: questions are posted and answers are given, and people engage with them. What we are going to find out is which technology has been posted about the most in the last decade, meaning which technology the most questions relate to. Okay, so we can give the table a name: since we're going to find the top 10 technologies, we can name it top_10_tech and create the table. So yeah,
229:00 - 229:30 the table is being created, and here it is: inside the project we have demo, and in demo we have top_10_tech. Okay, now what we have to do is add data, so we can go here to Add data. We can pin a project, or you can insert a dataset if you have an external dataset, very small or big, it depends on you and what kind of dataset you want to work with. But right now we are going with a public dataset, so we can just import one; there are various
229:30 - 230:00 datasets provided here, as you can see. The latest one is the COVID-19 public dataset, where you have all the information about the patients, how the cases are increasing or going down, and whatever statistics and analytics come with the COVID-19 data, state-wise or country-wise; all of that is provided here. Okay, but we are going with Stack Overflow, so we will just type Stack Overflow here. Yeah, here it
230:00 - 230:30 is. You can see an overview is given, like what Stack Overflow is: the largest online community for programmers to learn, share their knowledge, and advance their careers; we know all these things. Okay, then there are samples, like example queries you can run against the data; these are just the samples given, covering the questions posted, the answers, and all those things. Now what we have to do is view the dataset. It takes us to another BigQuery window, back to the query page. So yeah, we are here now; the dataset has been
230:30 - 231:00 imported; Stack Overflow is here. We can just go into Stack Overflow, and if you open it, there are certain tables there, like badges, comments, post_links; there are multiple tables, but we have to go for the questions, as in how many times questions have been posted, so we will choose the table named posts_questions. As you open this, let's extend it a little. Yeah, so if you see this, first of all the schema is shown for the table: the fields are given, the column names you could say, so all the
231:00 - 231:30 fields are given along with their data types and whether they are nullable or not; everything is given. Okay, then there are the details of the dataset: you can see the size of the table is really huge, not small in any sense; 33.4 GB is a lot, and you can see the number of rows, around 20 million. Then you can see the dates for it, like the date it was created on, last modified, and the data location; everything is given here. And then you can see the preview too, showing what these columns actually
231:30 - 232:00 contain. Okay, so just a second... you can see everything is given: body and everything, like accepted_answer_id and comments; these are all the fields, the column names. But the most important one is tags: we have this tags column, and in the preview you can see, for example, a row tagged data-explorer with a view count of two. Using that column, we have to write a query to find out the top 10 technologies. Okay, so let's start. What we have to do is query the table first. Now what we have to do
232:00 - 232:30 is write the SELECT. Let's split the tags first: we are working on this tags column, so we have to split the tags. The default query stub has a LIMIT 1000 in it, which we can remove from here, and then we specify what to filter on. Since there is a date column, and because we are looking at the last decade, we filter on the creation date. Okay, there is this
232:30 - 233:00 creation_date column, so from it we are going to extract only the year; we are just choosing the year here, extracting it from creation_date. You have to select the flattened tags as well, so we select the flattened tags; you can choose it from here: SELECT
233:00 - 233:30 the flattened tags, and then a tag count, tag_count, and then we will join it below. Okay, we have to supply the creation_date filter; the problem now is that the tags won't flatten by themselves, so we can just cross join. For the cross join you can use UNNEST, joining the table with its own flattened tags, which works out like a self join, you could say. Then we group by the flattened tags, so we have
233:30 - 234:00 to GROUP BY, and then we have to ORDER BY tag_count descending, because we need the keyword, the technology name, with the maximum count to come first, which happens if we order in descending order. Then we limit it to 10, because we want the top 10, right? So LIMIT 10, and now it can be run. The validator shows the query will process 73.9 MB.
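Reconstructed from the narration, the final query looks roughly like this; the year cutoff is an assumption for "the last decade", and tags in this table are pipe-separated:

    SELECT flattened_tags, COUNT(*) AS tag_count
    FROM `bigquery-public-data.stackoverflow.posts_questions`
    CROSS JOIN UNNEST(SPLIT(tags, '|')) AS flattened_tags
    WHERE EXTRACT(YEAR FROM creation_date) >= 2010
    GROUP BY flattened_tags
    ORDER BY tag_count DESC
    LIMIT 10

UNNEST turns the split tag array into rows, so each tag can be grouped and counted on its own.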
234:00 - 234:30 So we can just run the query. Now you can see all the results are here, with all the technologies listed. You can see JavaScript is topping the list, then there's Java, then there's Python; you can see counts like 2.1 million for the top tags, with CSS at 10 and C++ at 9. For the last decade you can see their tag counts, that is, how many times they have been asked about. So we were successful here, and now we can see the job information; everything is given. What we
234:30 - 235:00 can do is save this: Save results. You can save this into a table like the one we created, and you can also save it as a CSV or JSON file, or to Google Sheets, but we will save it as a BigQuery table. The table name was top_10_tech; save it as a table. Now in the job history you can see one running; the job history shows the table has been created, meaning the result has been saved
235:00 - 235:30 as a table, so the save was successful. Now, if you go here to the project, you have the dataset, and then we have this top_10_tech table. You can see the details for it: flattened_tags and tag_count are the columns in the table. Then we can see details like the 148 bytes it takes in total table size, and a preview, if you want to see how the table was saved. Okay, before saving the view, let's understand the other options: you can also share this table, copy the table, delete the table, or export it, for example
235:30 - 236:00 exporting with Data Studio; you can also export to Google Cloud Storage. All these things you can do. Then you can also save a view: if you save this view, it shows how the table works and what output it produces. You can also schedule the query to run weekly, daily, or monthly; however you want to schedule it, you can schedule it that way. Okay, and then there are the query settings, where you can choose the query engine
236:00 - 236:30 for it, like the Cloud Dataflow engine. Also, if you run a query and forget to save the result, it gets saved in a temporary table, and you can create that temporary table from here. You can see all the other settings after this: job priority, destination table write preference, all the things you can choose. One more cool feature I want to show you, which I can only show if I run the query again, so let's run this query again and come down to Execution details. What you can see here
236:30 - 237:00 is how your query is further split into multiple stages to get the results. These are like the worker nodes, which is a different concept, related to the Hadoop file system or the Google File System, to discuss elsewhere, but that's how the query is divided into multiple stages. Here you can see first how the time is divided between wait, read, compute, and write, and then you can see the first step's input was around 20 million rows, and then when it comes to sorting,
237:00 - 237:30 it decreases from 20 million down to 2 million rows, and finally, when the output comes, you can see it has been reduced down to 13. In the final stage, when you have ordered by descending order and limited it to 10, it comes down to the top 10 technologies, right? So this is how it's done; I hope you have understood it. You can see how it provides everything right on the dashboard, and no operation takes long; that's the thing about BigQuery, it's very fast. Let's now understand some of the use cases for
237:30 - 238:00 BigQuery. The first one is migrating data warehouses to BigQuery: you can solve for today's analytics demands and seamlessly scale your business by moving to Google Cloud's modern data warehouse. You can streamline your migration path from Netezza, Oracle, Redshift, Teradata, or Snowflake to BigQuery
238:00 - 238:30 and accelerate your time to insights. The second use case is predictive analytics. Predictive analytics helps you predict future outcomes more accurately and discover opportunities in your business. Google BigQuery's smart analytics reference patterns are designed to reduce time to value for common analytics use cases, with sample code and technical reference guides. You can learn how BigQuery and BigQuery Machine Learning can help you build an e-commerce recommendation system,
238:30 - 239:00 predict customer lifetime value, and design propensity-to-purchase solutions. Also, you can bring any data into BigQuery: make analytics easier by bringing together data from multiple sources in BigQuery for seamless analysis. You can upload data files from local sources, Google Drive, or Cloud Storage buckets, take advantage of the BigQuery Data Transfer Service and Data Fusion plugins, or leverage Google's industry-leading data integration partnerships. You have ultimate flexibility in how you bring data into your data warehouse.
239:00 - 239:30 Let's now see a case study for Google BigQuery; that's the final topic we are discussing, and then we will wrap up. Safari Books Online used BigQuery to solve a few key challenges: building detailed business dashboards to spot trends and manage abuse, improving sales team effectiveness through sales intelligence, and enabling ad hoc querying to answer specific business questions. We will understand all these things step by step. What Safari did is choose BigQuery over other technologies because of
239:30 - 240:00 the speed of querying using a familiar SQL-like language and the lack of required maintenance. Safari Books Online has a large and diverse customer base, constantly searching for and accessing a growing library of over 30,000 books and videos from an increasing array of desktop and mobile devices. This activity stream contains powerful knowledge which they could use to improve their services and increase profitability, locked up in the mountains of usage data: trends such as top users, top titles, and connecting the dots for
240:00 - 240:30 sales inquiries. Safari's usage data was much too massive to query online in its entirety with the previous toolset. Analysis could be done with third-party web analytics tools such as Omniture, but those tools lacked the ability to query and explore record-level data in real time; in addition, they didn't have a great backend for developing visualizations. SQL-like querying had to be done on smaller chunks of the data and was labor-intensive and slow. They were impatient waiting for MySQL queries to finish, and were often
240:30 - 241:00 in doubt as to whether they would finish at all. Once you reach database client timeouts of 10 minutes, you are in the realm of not being able to do ad hoc analysis. Answering operational questions can be time-sensitive, so speed is important, of course, very important. Safari also played with Hadoop, but it proved to take a lot of resources to maintain, so they ended up putting it on the back burner for future projects. Then
241:00 - 241:30 they learned about Google BigQuery through a video from Google I/O and determined it seemed perfect for their use case, so they decided to try it out. The first step, as you can see in this diagram, is how Safari gets the data into BigQuery. No matter what database platform you use, the extract-transform-load step can be a challenge depending on your data sources, and BigQuery is no exception, I would say. This diagram shows an overview of how the data flows through the landing and ETL servers leading up to BigQuery.
241:30 - 242:00 The usage data comes from content delivery networks (CDNs) and web application server logs. This data was packaged into time-based batched chunks and automatically copied into a landing directory. It needed some transformation to load into BigQuery in a format where they could get the most use out of it. Here are the basic steps Safari went through: get a list of all source files previously loaded into BigQuery, then validate the list of source files waiting to be loaded into BigQuery, then transform the files and copy
242:00 - 242:30 the files to Google Cloud Storage, then load the files into BigQuery, and finally use the data once it is in BigQuery. BigQuery proved to be a good platform for dashboard data, and it offers the capability of drilling further into the data in the dashboard with the BigQuery browser-based UI for ad hoc analysis. Here you can see the dashboard used internally by Safari's operations team to monitor top users; if anything raises any questions, you can get more information about a user, a title, or an IP
242:30 - 243:00 address by querying with the BigQuery browser tool. Now you can see the dashboard Safari uses internally to keep an eye on trends in top titles; again, if anything raises any questions in the dashboard, BigQuery can answer them for you. The data is fed to these dashboards by an intermediate scheduled job which is executed via cron: this script queries BigQuery with the BigQuery command-line tool, gets the result in JSON, and then contacts some other web services to get more information which
243:00 - 243:30 is not stored in BigQuery, such as the user data and book thumbnail images. It is all then matched up and presented in the dashboard. This is a good pattern, as you wouldn't want to query BigQuery every time the dashboard is loaded, since that would get expensive; it is best to do the analytical heavy lifting with BigQuery, then store the results in something like a cheap LAMP stack for mass consumption. The second advantage that Safari got through BigQuery is sales intelligence, which is another use case.
243:30 - 244:00 BigQuery was great for mining Safari's web logs for leads which came into their sales department. This allowed them to relate a lead to other accounts in their system, or gauge the level of interest someone may have in Safari by the number of previous books they had read in the past month on the site. A lead is created in Safari's CRM system, and BigQuery asynchronously searches through its logs, like this; you can see the query written here. So
244:00 - 244:30 the result is returned quickly and reattached to the lead record, allowing Safari's sales department to gain intelligence and contact the lead without having to wait around for the information. This gives them a nice summary of what the lead's IP address has done; while IP addresses are not perfect identifiers, they are much better than nothing. In the usage history shown here, you can see that someone from the lead's IP address has anonymously (user ID 0) viewed 232 pages of their books, and some other users
244:30 - 245:00 who already have accounts with Safari are active on that IP. Rather than use the BigQuery command-line interface, this use case was better suited for server-to-server OAuth, which is the authorization protocol Google uses. With the Google API client library for PHP, this enabled them to create a web service which matches up data from different sources and returns the result to their CRM, one of the sources being BigQuery. One amazing testament to the speed of BigQuery is
245:00 - 245:30 that they didn't even have to implement the web service asynchronously: it returns the results from BigQuery within the timeout period of the web service that the CRM requests. Another use case Safari took advantage of through BigQuery is pure ad hoc querying, which isn't driven by dashboards but rather by specific business questions. When Safari released their Android apps, it was easy to run a query on Safari's usage data to see the Android usage grouped by day. In the Google BigQuery browser tool it
245:30 - 246:00 was easy to export the query results as a CSV file and load them into Google Sheets to make a chart; you can see here how this shows when they released their Android apps, this is the graph. And then here's an example of looking at their top users by number of searches during a particular hour of the day. There really is no limit to the kinds of insights you can gain from investigating your data from an ad hoc perspective; there is a huge sense of power in being able to answer any question about your billions of rows of data in
246:00 - 246:30 seconds. For example, you can find out if that strange traffic on your site is a forum spam bot or a malicious user. As recommended by Google, they divided their data up into years, as you can see in the table. Normally it would be a bit of an inconvenience to have to do a union statement between the tables, but BigQuery has a nice shortcut for this: you can give this shortcut and it will create a union between the tables.
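In today's Standard SQL, the equivalent shortcut is a wildcard table; a sketch with hypothetical table names, where usage_2012, usage_2013, and so on are the yearly shards:

    SELECT user_id, COUNT(*) AS views
    FROM `my-project.logs.usage_*`
    WHERE _TABLE_SUFFIX BETWEEN '2012' AND '2014'
    GROUP BY user_id

The _TABLE_SUFFIX pseudo-column selects which yearly shards get unioned into the scan.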
246:30 - 247:00 So they just take in as many years back as they need to go into their time-sharded tables and union them together, since they have made their schemas the same. At the end of each year they archive the active table and start a new one. This is the best of both worlds, as it provides the accessibility of having all your data in one big table with the performance of having them separate; as they grow, it will make sense to further shard the data into monthly increments instead of yearly. Now, the summary of this case study: Safari generated meaningful knowledge out of massive amounts of data in a timely manner, which
247:00 - 247:30 is a challenge that Safari Books Online, like many businesses, needed to solve. Being able to keep on top of trends as they happen, while you can still do something about them, instead of months after the fact, is very valuable. This leads to lower abuse problems and top-line revenue increases by gathering marketing intelligence on trending topics and collecting lead intelligence to close sales more effectively. That was the summary of the case [Music]
247:30 - 248:00 study. What is Google Kubernetes Engine? Google Kubernetes Engine provides a managed environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machines, specifically Compute Engine instances, which are grouped together to form a cluster. A cluster is the foundation of Google Kubernetes Engine: the Kubernetes objects that represent your containerized applications all run on top of your
248:00 - 248:30 cluster. Now, to understand GKE better, let us understand its architecture and its working. As you know, all Kubernetes objects that represent your containerized applications run on top of a cluster; it is the foundation of GKE. A cluster consists of at least one control plane and multiple worker machines called nodes. These control plane and node machines run the Kubernetes cluster orchestration system. The control plane runs the control plane processes, including the Kubernetes API server, scheduler, and core resource controllers.
248:30 - 249:00 The control plane is responsible for deciding what runs on each and every node. This can include scheduling workloads, like containerized applications, and managing the workloads' life cycles, scaling, and upgrades. The control plane also manages network and storage resources for those workloads. All interaction with the cluster is done via Kubernetes API calls, and the control plane runs the Kubernetes API server process to handle those requests. You can make Kubernetes API calls directly via HTTP/
249:00 - 249:30 gRPC, or indirectly by running commands from the Kubernetes command-line client, that is, kubectl, or by interacting with the UI in the Cloud Console. Now, coming to the nodes: nodes are the worker machines that run your containerized applications, that is, your containers and other workloads. They run the services that are necessary to support the containers running in your workloads. Each node is managed from the control plane, which receives updates on each node's self-reported status. Next we have
249:30 - 250:00 something called the Pod. Pods are the most basic and smallest deployable objects in Kubernetes; a Pod can contain one or more containers. When the Pod runs multiple containers, the containers are managed as a single entity and share the Pod's resources, such as networking and storage. Pods connect to various GCP services such as VPC networking, persistent disks, load balancers, and other cloud operations. This was the architecture of GKE.
250:00 - 250:30 Now let us understand its working. GKE works with containerized applications. These containers, whether for applications or jobs, are collectively called workloads, and before you deploy these workloads on GKE, you must first package them into a container. To create a continuous integration and continuous delivery pipeline, you can use Cloud Code to write your application, then send the code to a repository, which launches a build process in Cloud Build. This build process builds container images from a variety of source code
250:30 - 251:00 repositories. These container images are stored in Container Registry and are ready to be deployed on Google Kubernetes Engine. You can then create a GKE cluster using the Cloud Console UI, the gcloud command-line interface, or the APIs; after this, with a few clicks, you can deploy your application on GKE.
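The same flow can be driven end to end from the command line; a minimal sketch, where the cluster name, zone, and image are hypothetical:

    gcloud container clusters create demo-cluster --zone us-west1-a
    gcloud container clusters get-credentials demo-cluster --zone us-west1-a
    kubectl create deployment hello --image=gcr.io/my-project/hello-app:v1
    kubectl expose deployment hello --type=LoadBalancer --port 80 --target-port 8080

The first two commands create the cluster and point kubectl at it; the last two schedule the workload and put a load balancer in front of it.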
251:00 - 251:30 Now I guess you have some idea about Google Kubernetes Engine, so let us move on to the next topic and see some of the advantages of Google Kubernetes Engine. With GKE you gain the benefits of advanced cluster management features that Google Cloud provides. First is load balancing for Compute Engine instances. Google Cloud offers server-side load balancing, so you can distribute incoming traffic across multiple virtual machine instances. A load balancer can scale your application, support heavy traffic, detect and automatically remove unhealthy virtual machine instances using health checks, and route traffic to the closest virtual machine. This is a managed service, which means its components are redundant and highly available: if a load-balancing
251:30 - 252:00 component fails, it is restarted or replaced automatically and immediately. The next benefit is autoscaling. GKE's cluster autoscaler automatically resizes the number of nodes in a given node pool based on the demands of your workloads. You don't need to manually add or remove nodes or over-provision your node pools; instead, you specify a minimum and maximum size for the node pool, and the rest is automatic. The next benefit is auto-upgrading. Auto-upgrades help
252:00 - 252:30 you keep the nodes in your cluster up to date with the cluster control plane version when your control plane is updated. With auto-upgrade, you don't have to manually track and update your nodes when the control plane is upgraded on your behalf. It also provides better security by automatically ensuring that security updates are applied and kept up to date with the latest Kubernetes features. The next benefit is monitoring and logging. Google Kubernetes Engine includes native integration with Cloud Monitoring and Cloud Logging: when you
252:30 - 253:00 create a GKE cluster running on Google Cloud, Cloud Operations for GKE is enabled by default and provides a monitoring dashboard specifically tailored for Kubernetes. With Cloud Operations for GKE, you can control whether or not Cloud Logging collects application logs; you also have the option to disable the Cloud Monitoring and Cloud Logging integration altogether. These were some of the benefits of Google Kubernetes Engine. Now let us move on to the demo part and see how to deploy a containerized application on
253:00 - 253:30 Google Cloud Platform. In our demo, we're going to package a sample web application into a Docker container image and then run that container image on a GKE cluster. For that, I've logged in to a GCP account. It is very simple to create a new GCP account: all you have to do is enter your debit card or credit card details and give your address. You might be charged maybe a rupee for it, but even that will be refunded later. As you sign in to a new account, GCP will provide you $300 of free credit; you can use this
253:30 - 254:00 amount to explore Google Cloud services. You won't be charged until you choose to upgrade, and it will be valid for 90 days. First, let us create a new project: we'll just go here, select New Project, and we can name the project anything we want, so we'll just name it kubernetes. Let the rest be the same, and we'll just create it. Now the project is created, and we'll just
254:00 - 254:30 select it. For our demo we're going to use Cloud Shell. Cloud Shell is an online development and operations environment that can be accessed from anywhere with a browser. The reason we're using Cloud Shell is that we do not have to install any command-line tools for the demo. We'll just click on Activate Cloud Shell over here, and then you can see a Cloud Shell terminal being created at the bottom of the screen. The first step, as I've
254:30 - 255:00 mentioned before, is to build a container image. For this tutorial, we're going to deploy a sample web application called hello-app, which is a web server written in the Go programming language; it responds to all requests with the message "Hello, World!" on port 8080. But before we deploy hello-app to GKE, we must package the hello-app source code as a Docker image, and to do that we need the source code and the Dockerfile. The Dockerfile contains information about how the image is built. So first, let us download the source code
255:00 - 255:30 and Dockerfile for the hello-app application: git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples. The hello-app application is available in this repository, so we've just mentioned the repository URL over here; just
255:30 - 256:00 press Enter. It says that the directory already exists, so we get a message saying the directory already exists, and we'll just change into the directory. Next, we have to set the PROJECT_ID environment variable to the Google Cloud project ID. You can find the project ID over here, but I guess my project ID is already in the PROJECT_ID environment variable, so let's just confirm it. For that we'll just
256:00 - 256:30 type echo with a dollar sign. Next, you have to set the PROJECT_ID environment variable to your Google Cloud project ID; for that you have to type export PROJECT_ID= followed by your project ID, which you can find over here: you can just copy it from here and paste it over here. The PROJECT_ID variable associates the container image with your project's Container
256:30 - 257:00 Registry. Now, to confirm the project ID, we'll just type echo $PROJECT_ID, and here you can see this is the project ID. After confirming the project ID, it is time to build the Docker image for hello-app. For that we'll type docker build, space, hyphen t, space, gcr.io/
257:00 - 257:30 $PROJECT_ID/hello-app:v1. This is the name given to the Docker image, and gcr.io refers to the Container Registry. Oh, we just forgot one more thing over here, that is, a space and a dot at the end. Now it will ask you to authorize Cloud Shell; you can just select Authorize over
257:30 - 258:00 here. You can see the Docker image is being created; we'll just extend this. Now you can see the Docker image was successfully built. To confirm that the Docker image was built, we'll just type docker images, and now you can see our Docker image, when it was created (a minute ago), and its size. After this, we have to push our Docker image to the Container Registry so GKE clusters can download it
258:00 - 258:30 and run the container image. For this, we're going to enable the Container Registry API: we'll just type gcloud services enable containerregistry.googleapis.com. Next, we have to authenticate to Container Registry by configuring the Docker command-line tool. For that we're
258:30 - 259:00 going to type gcloud auth configure-docker. Now, this is just a warning message saying that we have a lot of credentials and it might be slow. So now we have authenticated to our Container Registry, and we have to push the Docker image to it; for that we're going to type docker push followed by the name of our Docker
259:00 - 259:30 image. Now the Docker image is being uploaded to the Container Registry... and now you can see the Docker image has been pushed to our Container Registry. The next step is to create a Google Kubernetes Engine cluster. For this, we're just going to minimize this and go to Kubernetes: you can either select Kubernetes Engine from here, or you can just type kubernetes over here,
259:30 - 260:00 Kubernetes Engine. We just click on Enable over here. This might take a couple of seconds; you can scroll down and read an overview about it, about Google and the various other products they offer. Now, this is what the Google Kubernetes Engine console looks like. It will ask us to first create a Kubernetes cluster, so we'll just go ahead and create one. We have two options over here: one is Standard and the other one is Autopilot. In Standard, you manage your cluster's underlying
260:00 - 260:30 infrastructure, which gives you node configuration flexibility, and in Autopilot, GKE provisions and manages the cluster's underlying infrastructure. We'll just go with Standard over here and click on Configure. We can name our cluster whatever we want; we'll just name it cluster-hello-app. You can select any zone from here; these are the available zones, so
260:30 - 261:00 we'll just select maybe us-west1. You can select the zone which is nearest to you. You can select the version; we'll just go with the static version and create the cluster. Creating a cluster might take a couple of minutes, so we'll wait for some time and refresh to see whether it is created or not. You can see there is a green tick
261:00 - 261:30 over here, which means the cluster was successfully created. We've already pushed the Docker image to the Container Registry, so the next step is deploying the sample application to Google Kubernetes Engine. For this step, you have to create a Kubernetes Deployment to run the application on the cluster, and also create something called the Horizontal Pod Autoscaler, which scales the number of pods, anywhere from 1 to 5, based on the CPU load. For that, we'll go to Workloads and click on Deploy. We have an existing container
261:30 - 262:00 image, so we'll just select it: go to Select, this is the project ID, and this is the name of the Docker image, so we'll just expand it and select it. Our Docker image is selected, and we click on Continue. Now we have to configure it. We can name our application anything we want, so we'll just name it demo-application, and we let the namespace be default. Next we have labels, which are basically for identification, so let the
262:00 - 262:30 key be app and the value be demo-application; we won't make any changes to that. And this is our cluster, on which the demo application is going to be deployed. Now when we view the YAML... okay, we have an error here: we cannot use uppercase, so we'll just make it lowercase, demo-application. Now when we see the YAML file, you can see two Kubernetes API resources about to be deployed into your cluster: a Deployment and a Horizontal Pod Autoscaler.
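The console builds these two objects for you; from the command line, an equivalent autoscaler could be attached with a one-liner like this, where the deployment name and CPU threshold are hypothetical:

    kubectl autoscale deployment demo-application --min=1 --max=5 --cpu-percent=80

This creates a HorizontalPodAutoscaler that keeps between 1 and 5 replicas based on CPU load, matching what the YAML describes.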
262:30 - 263:00 Next we'll just click on Deploy, and our deployment is being created; this might take only a couple of seconds. Here you can see the CPU usage, memory usage, and disk usage, but as of now there is no data available. If you go to Details, you'll see the cluster name, the namespace which I mentioned, when it was created and at what time, the
263:00 - 263:30 labels, and how many replicas were made; there were three replicas made. After the deployment, we're going to expose the sample application to the internet. For that, we have an option here called Expose, so we'll just click on this. In the target port we'll select 8080; this is the port the hello-app container listens on, as I mentioned at the start of the demo. For the service type, let it be LoadBalancer itself, and we can name our service anything we want; let it be demo-application-service itself. We're just going to click on
263:30 - 264:00 Expose over here. Now a new service has been created and is waiting for a load balancer with an external IP to be created; this might also take a couple of seconds. Now the service has been created, so when we go to Details you can see the cluster name, the namespace, when it was created, the labels, the ports, the type (LoadBalancer), and the external endpoint. We'll just click on this, or you can just copy it and paste it in a new tab. So
264:00 - 264:30 when we click on this, we are directed to the sample application. Here is the sample web application: you see, the sample application is exposed to the internet through a Kubernetes Service, and here is the hello world message with the version and the host [Music] name. Let us now see the reasons why one should consider Terraform. Terraform lets you define infrastructure in configuration code and will enable you
264:30 - 265:00 to rebuild and track changes to infrastructure with ease. Terraform provides a high-level description of infrastructure, which means it is infrastructure-descriptive. Second, it has a lively community and is open source: there is a massive community developing around this tool, many people are already using it, and it's easier to find people who know how to use it, as well as plugins, extensions, professional support, etc. This also means Terraform is evolving at a much faster rate; they do releases very often. Third is speedy operations:
265:00 - 265:30 Terraform's speed and operations are exceptional. One cool thing about Terraform is that its plan command lets you see what changes you are about to apply before you apply them, and with its code reuse features, Terraform tends to make most changes faster than similar tools like CloudFormation.
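The core workflow is just three commands; a minimal sketch:

    terraform init    # download the providers the configuration needs
    terraform plan    # preview exactly what would change, without changing anything
    terraform apply   # apply the planned changes

The plan step is the safety net mentioned above: nothing is touched until you approve the apply.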
265:30 - 266:00 Also, it is the right tool for infrastructure management: many other tools have a severe impedance mismatch from trying to wrangle an API designed for configuration management into controlling an infrastructure environment, whereas Terraform matches correctly with what you want to do; the API aligns with the way you think about infrastructure. Terraform is also the only sophisticated tool that is completely platform-agnostic and supports other services; there are a few alternatives, but they are focused on a single cloud provider. Then there's declarative code: Terraform enables you to implement all kinds of coding principles, like having your code in source control, the ability to write automated tests, etc. Now that we have
266:00 - 266:30 understood why one must choose Terraform, let's briefly understand what Terraform actually is. Terraform is a configuration orchestration tool designed to provision the servers themselves; orchestration refers to the arrangement and coordination of automated tasks, resulting in a consolidated workflow. It is also an open-source software tool, created by HashiCorp: HashiCorp created Terraform to manage present-day popular services along with custom in-house solutions. Terraform lets you
266:30 - 267:00 provision Google Cloud resources with declarative configuration files, resources such as virtual machines, containers, storage, and networking. Terraform's infrastructure-as-code (IaC) approach supports DevOps best practices for change management, which lets you manage Terraform configuration files in source control to maintain an ideal provisioning state for testing and production environments. Terraform manages external resources with providers, external resources like public cloud infrastructure, private cloud infrastructure, network appliances, software as a service, and platform as a
267:00 - 267:30 service. HashiCorp maintains an extensive list of official providers, and Terraform can also integrate with community-developed providers. Users can interact with Terraform providers by declaring resources or by calling data sources. Rather than using imperative commands to provision resources, Terraform uses declarative configuration to describe the desired final state.
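To make that concrete, here is a minimal sketch of such a declarative configuration for a single VM; the project ID, names, and zone are hypothetical:

    provider "google" {
      project = "my-project"      # hypothetical project ID
      region  = "us-central1"
    }

    resource "google_compute_instance" "demo_vm" {
      name         = "terraform-demo"
      machine_type = "e2-micro"
      zone         = "us-central1-a"

      boot_disk {
        initialize_params {
          image = "debian-cloud/debian-11"
        }
      }

      network_interface {
        network = "default"
      }
    }

You describe only the final state; Terraform works out the create, update, or delete calls needed to reach it.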
267:30 - 268:00 Now let's look at some tools for using Terraform with Google Cloud; we will briefly understand these tools, as there is a variety of tools you can use to optimize your Terraform experience. First, let's explore the Cloud Foundation Toolkit, which provides a series of reference modules for Terraform. The modules reflect Google Cloud best practices, and using these modules helps you get started with Terraform more quickly. The modules are documented in the Terraform Registry and open-sourced on GitHub. These are some of the features of the Cloud Foundation Toolkit. First is ready-made templates: the Cloud Foundation Toolkit provides a series of reference templates for Deployment Manager and Terraform which reflect Google Cloud best practices.
268:00 - 268:30 These templates can be used off the shelf to quickly build a repeatable, enterprise-ready foundation in Google Cloud. This frees you to focus on deploying your applications in this baseline, secure environment, and with infrastructure as code you can easily update the foundation as your needs change. Second, you can treat your infrastructure like software: through the open-source templates you can automate repeatable tasks and provision entire environments in a consistent fashion; plus, your teams can collaborate on the infrastructure by
268:30 - 269:00 participating in code reviews and suggesting source code changes. Third, it's built for the enterprise: the Cloud Foundation Toolkit is designed especially to meet the compliance and security needs of enterprises. By creating a foundational environment using these templates, you can be confident that best practices are implemented out of the box, including key security and governance controls. Also, you can maintain consistency easily: by adopting the toolkit, you can be confident that different teams are
269:00 - 269:30 deploying their applications and environments using a consistent set of tools and patterns. This reduces the potential for misconfigurations and inconsistencies while allowing easier collaboration across different teams. Also, you can choose your adoption strategy: each template from the Cloud Foundation Toolkit can be used independently, and you can choose which patterns make sense for your organization and add new ones as your environment evolves. The open-source templates can easily be forked and
269:30 - 270:00 modified to suit your organization's needs. Lastly, you can save time and resources with prebuilt templates: with the Cloud Foundation Toolkit, you don't need to spend time developing your own templates and patterns for Google Cloud; instead, you can build on the open-source templates and focus only on the customizations required for your company and workloads. Developers can move faster, and migrations are less time-consuming because of it. The second tool is Terraform Validator: leverage Terraform Validator to enforce
270:00 - 270:30 policies on Terraform configurations for Google Cloud. Terraform Validator is a tool for validating compliance with organizational policies prior to applying a Terraform plan. It can be used either as a standalone tool or in conjunction with Forseti or other policy enforcement tooling. Terraform Validator relies on policies that are compatible with Config Validator; note that using Terraform Validator does not require an active installation of Forseti. Terraform
270:30 - 271:00 Validator is a self-contained binary. You can see the Forseti Config Validator shown here, which is the newest addition to the Forseti security toolkit. Config Validator helps cloud admins put guardrails in place to protect against misconfigurations in Google Cloud Platform environments. This allows developers to move quickly, and gives security and governance teams the ability to enforce security at scale. So how does Terraform Validator work? Cloud admins write security and governance constraints as YAML files once,
271:00 - 271:30 and store them within their company's dedicated Git repo as a central source of truth. Then Forseti ingests the constraints and uses them as a new scanner to monitor for violations. Then Terraform Validator reads the same constraints to check for violations before provisioning, in order to help prevent misconfigurations from happening. That's how the whole process of Terraform Validator works. The next tool we have is Terraformer: importing existing Google Cloud
271:30 - 272:00 resources into Terraform with Terraformer. It is a command-line interface tool that generates .tf (or JSON) and .tfstate files from the resources that already exist in your environment, performing the reverse of what Terraform is designed to do; this tool can be thought of as infrastructure-to-code. The last tool is Cloud Shell. Terraform is integrated with Cloud Shell, and Cloud Shell automatically authenticates Terraform, letting you get started with less setup. Using Cloud Shell, you can manage your infrastructure and develop your
272:00 - 272:30 applications from any browser. Cloud Shell is an online development and operations environment, accessible anywhere with your browser. You can manage your resources with its online terminal, preloaded with utilities such as the gcloud command-line tool, and you can also develop, build, debug, and deploy your cloud-based apps using the online Cloud Shell Editor. Some of the features of Cloud Shell: first, full power access from anywhere. You can manage your Google Cloud resources with the flexibility of a Linux shell; Cloud Shell provides command-line access to a virtual machine instance in a terminal
272:30 - 273:00 window. Then, it has a developer-ready environment, so you can develop your apps directly from your browser. The Cloud Shell Editor is streamlined to increase your productivity, with features such as Go, Java, Node.js, Python, and C# language support, an integrated debugger, source control, refactoring, and a customizable interface. Run your app on the Cloud Shell virtual machine or in the Minikube Kubernetes emulator, preview it directly in your browser, then commit changes back to your repo from
273:00 - 273:30 your Git client. The next feature is your favorite tools, pre-installed and up to date: many of your favorite command-line tools, from bash and sh to Emacs and Vim, are already pre-installed and kept up to date. With Cloud Shell, admin and development tools such as the gcloud command-line tool, MySQL, Kubernetes, Docker, and Minikube are configured and ready to use; no more hunting around for how to install the latest version and all its dependencies, just connect to Cloud Shell and go. The last feature
273:30 - 274:00 is Cloud Code, tools for maximizing development productivity. With this you can easily develop cloud-based applications with the tools provided by the Cloud Code extensions, allowing you to develop and deploy your Kubernetes and Cloud Run applications, manage your clusters, and integrate Google Cloud APIs into your project, all directly from the Cloud Shell Editor. Now let's understand the support that Terraform provides for GCP. The core Terraform command-line interface is developed by HashiCorp; use the following
274:00 - 274:30 resources for support like for provider related issues open and issue on GitHub for questions about terraform in general and common patterns check the hashiko community portal for General travel shooting advice see terraforms debugging documentation and uh join in the Google Cloud Community SL terraform Channel if you haven't already you can like register for the Google Cloud Community slack now that you have a theoretical understanding of terraform with gcp let's demonstrate it practically with an
274:30 - 275:00 example of launching a virtual machine instance in GCP through Terraform. The first step is to download the Google Cloud SDK installer, so just search for Google Cloud SDK and go directly to the install section here. If you have a Linux OS you can go to Linux and follow the steps to download, or you can go for macOS, but because I have Windows I will start the
275:00 - 275:30 download directly from here. So I got it downloaded; it takes a few seconds to open. I already have the SDK installed so I don't have to continue, but if you don't have the SDK installed, do continue and get the installation completed. The next step is to download Terraform, so you can go directly to releases.hashicorp.com/terraform — HashiCorp, as I've told you, is the company that created Terraform. From here all
275:30 - 276:00 the versions are given; a lot of versions are there, so you can just go and download one. If you have a Linux OS you can just copy the address from here and unzip it. If you are a Windows user you can just download it from here. It got downloaded; I downloaded it twice, so I can delete this one and just extract it from
276:00 - 276:30 here. Got it — just copy it and paste it somewhere separate, where you are going to place the other files, because I'm going to provide the path, so keep that in mind and place it separately, in the demo section for Terraform. I have one from before, so I will delete that one and paste the new one,
276:30 - 277:00 yeah and I will name it as terraform only so the next step is to go to the build infrastructure so here go to the Hashi cop website from here you will go to the tutorial section of Google Cloud okay the pre requests are given like you can create you have to create the account on Google Cloud platform so if you don't have an account create one it's a like it's a very good platform to have your account on and it provides a free trial like it's given there free trial so free trial account is provided for 90 days with $300 credits and when
277:00 - 277:30 you're going going to register it plus ask for your debit or credit card details and it will just deit one rupe and that will also be refunded very soon also like $300 credit is provided so that is like a lot of credits even means the exercise which I'm going to provide you it's like it doesn't cost anything it will only cut around $15 $20 not more than that or maybe maximum $50 that's it so that's how you can use it so next Cloud shell Cloud shell I've already explained you how it works and what are its features then is set up the gcp so
277:30 - 278:00 what you have to do in the gcp is you have to create a project then you have to create a service account and you have to generate a key okay so let's go to the gcp console so this is how the gcp console like Google Cloud platforms console looks like okay from here you go here and create a project so you can create a project from here but I already like created the project my project name is demo so I will use this project only so now what I have to do is I have to make a service account so I will type service account and yeah this is the default one
278:00 - 278:30 and this is the one I created recently. I will create a new one with the name tf-demo-sa, for tf demo service account. So create and continue. Now for access purposes you have to give a role; different access options are provided for different services, so you can provide access for a specific service, like App Engine, Notebooks, AutoML, CA
278:30 - 279:00 Service, Cloud Asset, Cloud Data Labeling — for whatever specific purpose you want, you can give access specifically. But for this service account I want complete access, so I will just go to project, and under project I will give owner to get complete access. So continue, and the service account is created for tf-demo-sa. For the next step you can go to the Terraform registry, which I have already opened; in this you have to go to getting started with the Google Cloud provider, where you will get the code —
279:00 - 279:30 the basic code for launching a virtual machine instance, this simple one. It's a basic piece of Terraform code; if you were going to use AWS, then instead of Google you would just use the AWS provider, that's it — all the other things would stay the same. Similarly, the provider code is provided below, so copy this code also and make a provider file, and for the VM create a VM file like I have created. So this is how I have copied it here; this is the VM with the basic
279:30 - 280:00 code, and the provider code is here. You can see credentials, project, and region are given in the provider configuration, and for the VM resource the name of the virtual machine instance that will be created: it will be named flask-vm plus whatever ID is there, so with that name the virtual machine will be created; similarly the machine type and the zone are given.
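For reference, the two files from this demo look roughly like this. The structure follows the registry's getting-started example; the key file name, project ID, machine type, and zone are the values used or assumed in this demo.

```hcl
# provider.tf - Google provider configuration (values follow this demo)
provider "google" {
  credentials = file("TF_demo_authorization.json")  # service account key created in the next step
  project     = "demo"                              # placeholder project ID
  region      = "asia-south1"
}

# vm.tf - the Compute Engine instance the demo launches
resource "google_compute_instance" "flask_vm" {
  name         = "flask-vm"
  machine_type = "f1-micro"              # small machine type; assumed for the demo
  zone         = "asia-south1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"   # any public image family works here
    }
  }

  network_interface {
    network = "default"
    access_config {}                     # ephemeral external IP so the VM is reachable
  }
}
```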
280:00 - 280:30 Also, one more thing: I have to generate a key. Remember I told you the third step is to create a service account key, so go to the account we have created, and in that we have to create a key: manage keys, then add key, create new. A JSON file will be created from here; it got created, so we can just go to downloads — it is here — and we just copy it and paste it where we have all the Terraform files, and name it tf underscore
280:30 - 281:00 demo underscore authorization: TF_demo_authorization.json. This is what I have named it in the code also, that's why I'm naming it this, so it's the same. Coming back, all the code and everything is done, and the key is active now, as you can see. So what we have to do now is open the command prompt; you just have to provide the path
281:00 - 281:30 first, so the path is cd demo/terraform — the path is here. Then we have to revoke the existing authorization, with gcloud auth revoke. One more thing, I will show you the JSON file: if you see, everything is given — the project ID is given, the private key ID is also given, then the private
281:30 - 282:00 key is given, then the client email, client ID, and authorization token URIs — everything is given in this. Okay, so we got this; the authorization credentials are here. After that we have to check the configuration, so gcloud config configurations list. Everything is right in my account, but if you want to change something, what you can do is initialize it. So first clear this one; you can
282:00 - 282:30 initialize it to change anything in your configuration list: gcloud init. For initialize, give choice number one; it's done. Then: would you like to log in? Yes, I want to log in; just log in from here and allow. It's authenticated, it will be done now, so
282:30 - 283:00 just come back. You can choose the project from here; your project name, if you remember, is chrome sensor 31344, so it is chrome sensor, the second one. And then the zone and region we have to select; my region is South Asia, so I will select option 37 for asia-south1. So everything is done now, and we can just configure again, and
283:00 - 283:30 everything is in place, the account as well. Last time I was saying everything was right, but it wasn't, because in the place of account my account wasn't selected, so it wouldn't have worked — do check it properly. Then just clear this, and let's start with the Terraform initialization: terraform init. It's initializing, and it got initialized, and we go with plan — so terraform,
283:30 - 284:00 sorry, I had already initialized — so terraform plan now. It will show what will be created; you can see a google_compute_instance will be created now. So what we have to do is just apply; here the command is terraform apply, and only yes will be accepted, so
284:00 - 284:30 yes. It got created, so let's check: go to the virtual machine instances under Compute Engine. The instance will be created with the name flask-vm plus some ID. I hoped it would get created — and of course it got created, flask-vm with the private ID. So that's how it got created, and if you want to delete it you can just give the destroy command. You can just come here
284:30 - 285:00 and give terraform destroy; only yes is accepted, so yes. It takes a few seconds, and it got destroyed. Let's check: it was here, so if we just refresh, you will find that it got deleted from here. So it got [Music] deleted. Let us move on to the next topic and
285:00 - 285:30 understand what exactly GCP Security Command Center is. Security Command Center is a security and risk management platform provided by Google Cloud. It is an intelligent risk dashboard and analytics system for surfacing, understanding, and remediating Google Cloud security and data risks across an organization; in simple words, it is an established security and risk database for Google Cloud. Security Command Center helps security teams gather data, identify threats, and act on
285:30 - 286:00 them before they result in business damage or loss. It offers deep insight into application and data risk, so you can quickly reduce the threats to your cloud resources across your organization and evaluate the overall health. Security Command Center provides a single centralized dashboard so you can view and monitor an inventory of your cloud assets. Assets are nothing but your resources, like organizations, projects, instances, and applications. It can also help in scanning storage systems for
286:00 - 286:30 sensitive data, and it can be used to detect common web vulnerabilities and anomalous behavior. With Security Command Center you can also review the access rights to the critical or important resources in your organization, and along with that you can follow the recommended actions to resolve the vulnerabilities present. Now I guess you have some idea about what exactly Security Command Center is, so let us see how Security Command Center works, in order to understand it better. Security Command
286:30 - 287:00 Center enables you to generate curated insights, which provide a unique view of incoming threats and attacks to Google Cloud resources, which are called assets. Assets are nothing but resources like organizations, projects, instances, and applications. The Security Command Center asset discovery runs at least once a day; you can also manually rescan on demand from the Security Command Center assets display. Then Security Command Center displays the possible security risks that are associated with each asset. A possible security risk is
287:00 - 287:30 called a finding. These findings come from security sources that include Security Command Center's built-in services, third-party partners, and your own security detectors and finding sources. That was the working of Security Command Center; now let us take a look at some of its prominent features. The first one is that you can gain centralized visibility and control: Security Command Center gives you centralized visibility of the number of projects you're using and what resources are deployed, and you can also manage which
287:30 - 288:00 service accounts have been added or removed. The second feature is that you can fix misconfigurations and compliance violations: with Security Command Center you can identify security misconfigurations and compliance violations in your Google Cloud assets, and after you've identified these vulnerabilities you can resolve them by following the actionable recommendations provided by Google Cloud platform. The third prominent feature is threat detection: you can detect threats using the logs running into Google Cloud at
288:00 - 288:30 scale. You can detect some of the most common container attacks, including suspicious binaries, suspicious libraries, and reverse shells, and you can also identify threats like cryptocurrency mining, anomalous reboots, and suspicious network traffic with built-in anomaly detection technologies developed by Google itself. Next, after threat detection, we have threat prevention: with Security Command Center you understand the security state of your Google Cloud assets. You can uncover common web application vulnerabilities such as
288:30 - 289:00 cross-site scripting or outdated libraries in web applications that are running on either Google App Engine, Google Kubernetes Engine, or Google Compute Engine; then you can quickly resolve these misconfigurations by clicking directly on the impacted resource and following the prescribed steps on how to fix it. The last feature is sensitive data identification: with Security Command Center you can find out which storage buckets contain sensitive and regulated data using Cloud DLP. You can also prevent unintended exposure of
289:00 - 289:30 the storage buckets and let only authorized persons access them. So these were some of the features of Cloud Security Command Center. Let us move on to the next topic and see what Cloud Armor is. Google Cloud Armor protects your applications and websites against denial of service and other web attacks. You can use Google Cloud Armor security policies to protect your application running behind a load balancer from distributed denial of service, or DDoS, and other web-based attacks, and your application could be deployed anywhere — whether on
289:30 - 290:00 Google Cloud, in a hybrid deployment, or in a multi-cloud architecture. That was the definition of Cloud Armor; now let us understand how Cloud Armor works. Google Cloud Armor DDoS protection is always on, inline, scaling to the capacity of Google's global network, so it is able to instantly detect and reduce network attacks in order to allow only well-formed requests through the load balancing proxies. With Google Cloud Armor security policies you can also allow or deny access to your external
290:00 - 290:30 HTTPS load balancer at the Google Cloud edge, which is as close as possible to the source of incoming traffic; this helps to prevent unwanted traffic from consuming resources or entering your VPC network. Now let us take a look at some of the features of Google Cloud Armor. The first feature is IP-based and geo-based access control: you can filter your incoming traffic based on IPv4 and IPv6 addresses or CIDRs, and you can also enforce geography-based access control to allow or deny
290:30 - 291:00 traffic based on source geographic location using Google's geo-IP mapping. The next feature is adaptive protection: Cloud Armor automatically detects and helps reduce high-volume DDoS attacks with a machine learning system trained locally on your applications. The last feature is pre-configured web application firewall rules: Cloud Armor comes with an out-of-the-box rule set based on industry standards to reduce common web application vulnerabilities and help provide protection from various web attacks.
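Tying this back to the earlier Terraform section, a Cloud Armor security policy can also be declared as code. A minimal sketch, with placeholder rule values:

```hcl
# Minimal sketch: a Cloud Armor security policy with a geo-based rule, an
# IP-based deny rule, and the required default rule. Values are placeholders.
resource "google_compute_security_policy" "demo_policy" {
  name = "demo-armor-policy"

  rule {
    action      = "deny(403)"
    priority    = 900
    description = "Geo-based access control via a CEL expression"
    match {
      expr {
        expression = "origin.region_code == 'XX'"  # replace XX with a real region code
      }
    }
  }

  rule {
    action      = "deny(403)"
    priority    = 1000
    description = "Block a specific IP range"
    match {
      versioned_expr = "SRC_IPS_V1"
      config {
        src_ip_ranges = ["192.0.2.0/24"]
      }
    }
  }

  rule {
    action      = "allow"
    priority    = 2147483647
    description = "Default rule: allow all remaining traffic"
    match {
      versioned_expr = "SRC_IPS_V1"
      config {
        src_ip_ranges = ["*"]
      }
    }
  }
}
```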
291:00 - 291:30 So this was all about Cloud [Music] Armor. First let's understand what Google Cloud IAM is. IAM, which means identity and access management, lets you grant granular access to specific Google Cloud resources and helps prevent access to other resources. IAM lets you adopt the security principle of least privilege, which states that nobody should have more permissions than they actually need.
291:30 - 292:00 Let's now understand how Google Cloud IAM works. First let's take a real-life scenario: suppose you enter a company and you got a visitor card, so you have very limited access through it — you can access the reception, the cafeteria, and a lobby where you can rest. Now suppose you got selected in that company, as a fresher in the position of junior analyst. Now you have
292:00 - 292:30 much more access than the cafeteria, reception, and lobby: you have access to certain databases, maybe a little access to certain cloud resources also — whatever is required for your analysis work — and you have access to various analytical tools too, but you don't have complete access to all of them. Suppose now you are working under a senior research analyst; that senior analyst has much more access than you, because you are a junior analyst. The senior analyst has access to more cloud resources, he has
292:30 - 293:00 access to many more spreadsheets, and various other analytical tools also. Now suppose there's another person who isn't in the analyst team — he's in the cloud team, say he is a cloud engineer. He might have access to a lot of cloud resources compared to you, but he won't have access to certain database services which you have access to, or certain analytical tools which you have access to. So that's how
293:00 - 293:30 certain rights and accesses are provided to different teams, depending on their work; this was just a real-life example of how identity and access management works. So now let's understand how IAM works in Google Cloud. With IAM you manage access control by defining who (identity) has what access (role) for which resource. For example, Compute Engine virtual machine instances, Google Kubernetes Engine clusters, and Cloud Storage buckets are
293:30 - 294:00 all Google Cloud resources; the organizations, folders, and projects that you use to organize your resources are also resources. In IAM, permission to access a resource isn't granted directly to the end user; instead, permissions are grouped into roles, and roles are granted to authenticated members. An IAM policy defines and enforces what roles are granted to which members, and this policy is attached to a resource. When an authenticated member attempts to access
294:00 - 294:30 a resource, IAM checks the resource's policy to determine whether the action is permitted. The following diagram illustrates permission management in IAM. This model for access management has three main parts. The first part is member: a member can be a Google account (for end users), a service account (for apps and virtual machines), or a Google group, or a Google Workspace or Cloud Identity domain that can access a resource. The identity of a member is an email address associated
294:30 - 295:00 with the user, service account, or Google group, or a domain name associated with Google Workspace or Cloud Identity domains. Then there is role: a role is a collection of permissions, and permissions determine what operations are allowed on a resource; when you grant a role to a member, you grant all the permissions that the role contains. And then there is policy: the IAM policy is a collection of role bindings that bind one or more members to individual roles. When you want to define
295:00 - 295:30 who has what type of access (meaning role) on a resource, you create a policy and attach it to the resource. Now let's understand the concepts in IAM — first the concepts related to identity. In IAM you grant access to members, and members can be of the following types. It can be a Google account, which represents a developer, an administrator, or any other person who interacts with Google Cloud; any email address that's associated with a Google account can be an identity, including gmail.com or other domains. New users can sign up for a
295:30 - 296:00 Google account by going to the Google account sign-up page. Then there is service account: a service account is an account for an application instead of an individual end user. When you run code that's hosted on Google Cloud, the code runs as the account you specify, and you can create as many service accounts as needed to represent the different logical components of your application. Then there is Google group: a Google group is a named collection of Google accounts and service accounts, and every
296:00 - 296:30 Google group has a unique email address that's associated with the group; you can find the email address that's associated with a Google group by clicking About on the homepage of any Google group. Google groups are a convenient way to apply an access policy to a collection of users: you can grant access controls for a whole group at once instead of granting or changing access controls one at a time for individual users or service accounts, and you can also easily add and remove members from a
296:30 - 297:00 Google group instead of updating an IAM policy to add or remove users. Google groups don't have login credentials, and you cannot use Google groups to establish identity to make a request to access a resource. Then there is Google Workspace domain: a Google Workspace domain represents a virtual group of all the Google accounts that have been created in an organization's Google Workspace account. Google Workspace domains represent your organization's internet domain name, such as example.com, and
297:00 - 297:30 when you add a user to your Google Workspace domain, a new Google account is created for the user inside the virtual group, such as username@example.com. Like Google groups, Google Workspace domains cannot be used to establish identity, but they enable convenient permission management. Then the fifth one is Cloud Identity domain: a Cloud Identity domain is like a Google Workspace domain because it represents a virtual group of all Google accounts in an organization; however, Cloud Identity domain users
297:30 - 298:00 don't have access to Google Workspace applications and features. Then the next one is all authenticated users: the value allAuthenticatedUsers is a special identifier that represents all service accounts and all users on the internet who have authenticated with a Google account. This identifier includes accounts that aren't connected to a Google Workspace or Cloud Identity domain, such as personal Gmail accounts; users who aren't authenticated, such as anonymous visitors, aren't included. And the last one is all users: the value
298:00 - 298:30 allUsers is a special identifier that represents anyone who is on the internet, including authenticated and unauthenticated users. Now moving ahead to the concepts related to access management. When an authenticated member attempts to access a resource, IAM checks the resource's IAM policy to determine whether the action is allowed or not. The first one in this is resource: if a user needs access to a specific Google Cloud resource, you can grant the user a role for that resource; some examples of resources are projects,
298:30 - 299:00 Compute Engine instances, and Cloud Storage buckets. Some services support granting IAM permissions at a granularity finer than the project level; for example, you can grant the storage admin role to a user for a particular Cloud Storage bucket, or you can grant the compute instance admin role to a user for a specific Compute Engine instance. In other cases you can grant IAM permissions at the project level; the permissions are then inherited by all resources within that project. For
299:00 - 299:30 example, to grant access to all Cloud Storage buckets in a project, grant access to the project instead of each individual bucket; or to grant access to all Compute Engine instances in a project, grant access to the project rather than each individual instance. The next one is permissions: permissions determine what operations are allowed on a resource. In the IAM world, permissions are represented in the form service.resource.verb; for example, if you are using Pub/Sub, then pubsub.subscriptions.
299:30 - 300:00 consume follows that service.resource.verb form. Permissions often correspond one-to-one with REST API methods; that is, each Google Cloud service has an associated set of permissions for each REST API method that it exposes, and the caller of that method needs those permissions to call that method. For example, if you use Pub/Sub and you need to call the topics.publish method, you must have the pubsub.topics.publish permission for that topic. You don't grant permissions to users directly;
300:00 - 300:30 instead, you identify roles that contain the appropriate permissions and then grant those roles to the user. The third one is roles: a role is a collection of permissions. You cannot grant a permission to the user directly; instead you grant them a role, and when you grant a role to a user, you grant them all the permissions that the role contains. There are several kinds of roles in IAM. There are basic roles, the roles historically available in the Google Cloud console; these are owner, editor, and viewer. Cautiously
300:30 - 301:00 remember that basic roles include thousands of permissions across all Google Cloud services; in production environments do not grant basic roles unless there is no alternative — instead, grant the most limited predefined roles or custom roles that meet your needs. Then there are predefined roles, roles that give finer-grained access control than the basic roles; for example, the predefined role Pub/Sub Publisher provides access to only publish messages to a Pub/Sub topic. Then there are custom roles, roles that you create to
301:00 - 301:30 tailor permissions to the needs of your organization when predefined roles don't meet your needs. You can see here an example where the role compute.instanceAdmin is assigned; under this role you have permissions like compute.instances.delete or compute.instances.get — everything under that role you can access.
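Since the demo earlier used Terraform, here is a minimal sketch of defining such a custom role that way; the role ID and permission list are illustrative, not a recommended set.

```hcl
# Minimal sketch: a custom role bundling specific service.resource.verb
# permissions. Role ID and permissions are illustrative.
resource "google_project_iam_custom_role" "publisher_lite" {
  project = "demo"                      # placeholder project ID
  role_id = "pubsubPublisherLite"
  title   = "Pub/Sub Publisher (custom)"
  permissions = [
    "pubsub.topics.publish",
    "pubsub.topics.get",
  ]
}
```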
301:30 - 302:00 Now the next one is IAM policy. You can grant roles to users by creating an IAM policy, which is a collection of statements that define who has what type of access; a policy is attached to a resource and is used to enforce access control whenever that resource is accessed, as you can see here. An IAM policy is represented by the IAM policy object, and an IAM policy object consists of a list of role bindings; a role binding binds a list of members to a role. IAM provides a set of methods that you can use to create and manage access control policies on Google Cloud resources; these methods are
302:00 - 302:30 exposed by the services that support IAM — for example, the IAM methods are exposed by the Resource Manager, Pub/Sub, and Cloud Life Sciences APIs, just to name a few. The IAM methods are setIamPolicy, which sets policies on your resources; getIamPolicy, which gets a policy that was previously set; and testIamPermissions, which tests whether the caller has the specified permissions for a resource or not.
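The role-binding model maps directly onto Terraform as well. A minimal sketch, reusing the junior-analyst idea from the earlier analogy with placeholder emails:

```hcl
# Minimal sketch: one role binding - a role bound to a list of members -
# matching the policy model described above. Emails are placeholders.
resource "google_project_iam_binding" "project_viewers" {
  project = "demo"
  role    = "roles/viewer"
  members = [
    "user:junior-analyst@example.com",
    "group:analysts@example.com",
  ]
}
```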
302:30 - 303:00 Now moving on to the next and last concept, resource hierarchy. Google Cloud resources are organized hierarchically: the organization is the root node in the hierarchy, then there are folders, which are children of the organization, then projects, which are children of the organization or of a folder, and lastly the resources for each service, which are descendants of projects. Each resource has exactly one parent. The following diagram is an example of a Google Cloud resource hierarchy. You can set an IAM policy at any level in the resource hierarchy: the
303:00 - 303:30 organization level, the folder level, the project level, or the resource level. Resources inherit the policies of all their parent resources; the effective policy for a resource is the union of the policy set on that resource and the policies inherited from higher up in the hierarchy. This policy inheritance is transitive; in other words, resources inherit policies from the project, which inherits policies from folders, which inherit policies from the organization, and therefore organization-level policies also apply at the resource level. For example,
303:30 - 304:00 in this diagram topic_a is a Pub/Sub resource that lives under the project example-prod. Suppose there is an account named micah@example.com: if you grant the editor role to micah@example.com for example-prod and grant the publisher role to song@example.com for topic_a, you effectively grant the editor role for topic_a to micah@example.com and the publisher role to song@example.com. The policies for child resources inherit
304:00 - 304:30 from the policies for their parent resources; for example, if you grant the editor role to a user for a project and grant the viewer role to the same user for a child resource, then the user still has the editor role granted for the child resource. If you change the resource hierarchy, the policy inheritance changes as well; for example, moving a project into an organization causes the project to inherit from the organization's IAM policy. Now let's understand the practical, actionable settings you can modify in IAM which will greatly
304:30 - 305:00 improve the security of your project. The first one is that you can enforce multi-factor authentication, or MFA, which is a method where not only is one piece of information used to authenticate a user (for example a password), but there's also at least one additional source of proof needed to establish that the right person is accessing a system. On Google Cloud platform, users authenticate themselves using Google accounts; these can be individual email addresses registered as a Google account or, more commonly, accounts of a Google Workspace domain. On the Google Cloud
305:00 - 305:30 platform side you cannot enforce that the Google accounts that have access to your project must have multi-factor authentication enabled, but if you only grant access to users from your Google Workspace domain, then the Google Workspace domain administrator can set up MFA on the domain in a way that forces everyone to use it. If you need to give access to people without accounts in your Google Workspace domain, then you can create accounts for them in your Google Workspace domain for the
305:30 - 306:00 sole purpose of accessing your project; this way you can enforce settings on their accounts. If you combine both these rules, then you can be certain that every user who has access to the Google Cloud platform project needs to validate themselves using MFA — multi-factor authentication. This makes it much harder to compromise your project even if the password for an email address leaks from another source. The second thing is that you can set up a password policy for users. The password policy settings are technically not inside the Google Cloud platform but
306:00 - 306:30 at the discretion of the Google Workspace domain administrator; if you only allow users from your domain and the domain is set up with the right password policy, then these two things combined will result in the password policy being enforced on all your Google Cloud platform users. The third one is to give the necessary but least possible privileges. It is a good practice in general to only give the minimum necessary privileges to all of your users; if all of the previously discussed account protection methods fail, your
306:30 - 307:00 attackers will still have fewer services to break into or steal information from. The actual implementation of this principle will vary based on your usage patterns; for example, if your database administrators only need to do Google Cloud SQL administration tasks, don't give them the project editor role — give them a Cloud SQL admin role instead. Also, what you can do is set up quotas. Default quotas are set for every newly created project on Google Cloud platform; this is a last-resort security control to avoid unexpected runaway spending. For
307:00 - 307:30 example, if you have a faulty script creating resources in a recursive manner, it will only be able to create them up to the quota limits; it can also protect against a compromised account creating a lot of new resources for the attacker's purposes. The quotas can be changed on the Quotas page, but that requires the serviceusage.quotas.update permission, which is only included in the following predefined roles: owner, editor, and Quota Administrator. So if a compromised
307:30 - 308:00 account or faulty script does not have the permission, then the spending can only increase up to the quota limits. The last thing is to check and rotate service account keys. There is another type of account on Google Cloud platform besides the user accounts: service accounts. Service accounts are meant for programmatic use cases; they cannot be used to access the Google Cloud console because they are only valid for Google Cloud API access. The most frequent use case is to run applications or instances inheriting the rights of a
308:00 - 308:30 specific service account, so they can access other cloud services without extra authentication. Service accounts use keys for authentication, and one service account can have multiple keys associated with it. It is a good practice to regularly rotate the keys of a service account; this can be achieved by creating a new key for the service account, then overwriting the current key with the new one everywhere it was saved, and then deleting the old key associated with the service account. This way, even if an
308:30 - 309:00 application where the key was stored is compromised without your knowledge, the attacker will only have a limited time window to use the key.
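If you manage service accounts with Terraform, as in the earlier demo, the key itself can be a managed resource, so rotation amounts to recreating one resource. A minimal sketch with placeholder names:

```hcl
# Minimal sketch: a service account and one of its keys as Terraform
# resources; recreating the key resource effectively rotates the key.
resource "google_service_account" "app_sa" {
  account_id   = "app-runner"                # placeholder account ID
  display_name = "Application service account"
}

resource "google_service_account_key" "app_sa_key" {
  service_account_id = google_service_account.app_sa.name
}
```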
309:00 - 309:30 Now that you have a theoretical understanding of the working and concepts of Google Cloud identity and access management, let's see a practical demonstration of it on Google Cloud platform. We are at the Google Cloud console; this is how the dashboard of the Google Cloud platform console looks. What we are going to do is look at identity and access management, so let's move on to IAM: just click here and go to IAM and admin, and here you can see all the permissions the accounts have. We are going to provide new access; you can also see the project here — it's demo billing — so from this project I will be giving the access. So let's add an account; you can add it from here. I'm going to add the other account of mine, and it will be
309:30 - 310:00 at the project level. You're going to give a role now, so select it, go to project, and give editor, as I have explained how we give the editor role. Now you have to give another role at the resource level; that's what is given in the policy of access management that I explained to you. Choose a resource for it: we are going to choose Cloud Storage,
310:00 - 310:30 and we are just going to choose the viewer option, then save it. But remember, at the project level we have the editor option. So it's been added now.
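For comparison, the same two grants made by hand in this demo would look roughly like this in Terraform; the member email and bucket name are placeholders.

```hcl
# Minimal sketch of the demo's grants: editor at the project level plus
# viewer on a single bucket. Email and bucket name are placeholders.
resource "google_project_iam_member" "secondary_editor" {
  project = "demo"
  role    = "roles/editor"
  member  = "user:secondary-account@gmail.com"
}

resource "google_storage_bucket_iam_member" "bucket_viewer" {
  bucket = "demo-store-cloud"               # bucket shown in the demo; adjust
  role   = "roles/storage.objectViewer"
  member = "user:secondary-account@gmail.com"
}
```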
310:30 - 311:00 Now, because you have the editor permissions at the project level, you can simply check things — we can go to virtual machines also, so here you can go and see if there is any virtual machine there. Just a second: right now I'm logged into the same account, so what I'm going to do is go to the other account, the one to which I've given permission. So now I'm logging into my other account, the p2595 gmail.com one, as you can see. In this account I have gone to the VM instances, and you can see there's no VM created now, but I can create an instance from here; and similarly, if you go to Cloud Storage, you can access other services also through this account, because you have the editor option at the project level.
311:00 - 311:30 Just go to Cloud Storage and you can see this demo store cloud bucket is here, so you can even create a new bucket, or you can just go into this one and upload any file into it. I'll show you how uploading is done — just upload this. You can see the upload has been done, because you have all the rights of an editor. You can delete it also: just select this one and you can delete it from here, and it will get deleted.
311:30 - 312:00 Now go back; you can even delete this bucket also — just select it and delete it from here. Now let's move back to our own IAM permissions; this is the main account I'm opening, and from here I will go to IAM, identity and access management. What I can do here is edit the roles, so I can delete one from here. I will not delete it at the project level, no —
312:00 - 312:30 I will delete the editor, but the resource-level role will remain; it won't be completely deleted. So I'm just deleting this one: just select remove role here and save. Now if you refresh, you will see that you won't have access anymore, now that we have deleted the editor role. What we can do is switch to the other account, the secondary account we gave access to, and
312:30 - 313:00 then we will go to Cloud Storage, and you can see the permissions have been limited: you don't have sufficient permissions to view this in the console. There is, however, another way to view this, because you still have the viewer option — not at the project level but at the resource level — so you can view it from Cloud Shell. There are certain commands given in the documentation for identity and access management in Google Cloud; you can see the commands and you can
313:00 - 313:30 just type them at the command line interface of Google Cloud, that is Cloud Shell, which is this — you can activate it from here, and you can view the files from there, but not in the console; you can only see them from the command line interface, that is Cloud Shell. Now let's move back to my main account. I can even delete these roles, the roles I provided; I can delete this role completely, so I will just remove it from here and confirm. So that's how the role is
313:30 - 314:00 deleted. I hope you have understood how identity and access management in Google Cloud works. Now let's take a look at some building blocks of AI in Google Cloud. There are four major building blocks: the first one is sight, the second is language, then conversation, and structured data. Under sight we have multiple products; the first one is the Vision API, whose machine learning models help you in reading images —
314:00 - 314:30 suppose I upload an image to the Vision API: just by reading that image, Google Cloud platform can help you identify the data points related to that particular image, so it's very efficient for image data processing. The next one is Cloud Video Intelligence, which has pre-trained machine learning models that automatically recognize a vast number of objects, places, and actions in stored and streaming video; it offers exceptional quality out of the box, is highly efficient for common use cases, and improves over time as new
314:30 - 315:00 concepts are introduced. Next is AutoML Vision, which enables you to train machine learning models to classify your images according to your own defined labels. Then AutoML Video Intelligence, which has a graphical interface that makes it easy to train your own custom models to classify and track objects within videos, even if you have minimal machine learning experience; it's ideal for projects that require custom labels that aren't covered by the pre-trained Video Intelligence API.
315:00 - 315:30 The second category is language. Under language the first product is Cloud Translation, which helps you convert one particular language to another: suppose you have something written in Japanese, you can feed that into Cloud Translation and it can get converted into English or any other language of your choice. The next one is Cloud Natural Language, which is a very famous product for doing sentiment analysis. Then we have AutoML Translation: with AutoML Translation, developers, translators, and
315:30 - 316:00 localization experts with limited machine learning expertise can quickly create high-quality, production-ready models — just upload translated language pairs and AutoML Translation will train a custom model that you can scale and adapt to meet your domain-specific needs. Last is AutoML Natural Language, which enables you to build and deploy custom machine learning models that analyze documents, categorize them, identify entities within them, or assess attitudes within them. Now the third main category is conversation. Under conversation we
316:00 - 316:30 have the first product, Dialogflow, which is a natural language understanding platform that makes it easy to design and integrate a conversational user interface into your mobile app, web application, device, bot, interactive voice response system, and so on. Using Dialogflow you can provide new and engaging ways for users to interact with your product, and Dialogflow can analyze multiple types of input from your customers, including text or audio inputs like from a phone or voice recording. It can also respond to your
316:30 - 317:00 customers in a couple of ways, either through text or speech. Then under conversation we have the Speech-to-Text API, which helps in converting speech: suppose someone is speaking, and at the same time you want that particular speech to be converted into text — in that case this particular API and machine learning model comes in handy and helps you do that analysis. Similarly, we have the Text-to-Speech API, which helps in converting given text to speech at the same time. Now we come to the last building block, structured data. The first product in structured data is AutoML Tables, which enables your entire team of
317:00 - 317:30 data scientists, analysts, and developers to automatically build and deploy state-of-the-art machine learning models on structured data at massively increased speed and scale, which in turn transforms your business by leveraging your enterprise data to tackle mission-critical tasks like supply chain management, fraud detection, lead conversion optimization, and increasing customer lifetime value. Now coming to the Cloud Inference API: time series analysis is essential for the day-to-day operation of many companies, with the most
317:30 - 318:00 popular use cases including analyzing foot traffic and conversion for retailers, detecting data anomalies, identifying correlations in real time over sensor data, and generating high-quality recommendations. With the Cloud Inference API you can gather insights in real time from your typed time series datasets. Moving on to the last product, Recommendations AI, which draws on Google's experience and expertise in machine learning to deliver personalized recommendations that suit each customer's tastes and preferences across all your touch points. Now finally, let's
318:00 - 318:30 look at some of the AI solutions provided by GCP. The first one is Contact Center AI: with this you can lower costs and increase customer satisfaction with the best of Google AI technology. Customer service is improved with AI that understands, interacts, and talks; with Contact Center AI you can create agents that are superheroes for your customers, enable natural interactions with virtual agents, and empower your teams with actionable insights. Moving on to the next one, AI Platform Unified: AI Platform
318:30 - 319:00 Unified brings AutoML and AI Platform classic together into a unified API, client library, and user interface. With AI Platform Unified both AutoML training and custom training are available options, and whichever option you choose for training, you can save models, deploy models, and request predictions with AI Platform Unified. Now the last major solution is Document AI: with Document AI you can automate data capture at scale to reduce document processing costs. It is built on decades of AI innovation at Google, bringing powerful and useful solutions to these
319:00 - 319:30 challenges; under the hood are Google's industry-leading technologies like computer vision and natural language processing, which create pre-trained models for high-volume documents. Document AI has already processed tens of billions of pages of documents across lending, insurance, government, and other industries. So now let's get a better understanding of the Google Cloud AI platform by trying our hands on it. If you don't have an account on Google Cloud platform, then create one; GCP is a really good platform to have your account on. It will ask for
319:30 - 320:00 your credit or debit card details just for verification purposes, and you will get a free trial for 90 days with $300 credited to your GCP account, which you can use to practice the exercises I'm showing you. So this is the dashboard of Google Cloud platform, and we have to go to AI Platform — you can just type AI Platform here. Basically, AI Platform is used to train your machine learning model and then scale it, and then you can take your trained model
320:00 - 320:30 and deploy it on the platform. This is the dashboard: if you upload a model, it will show all the predictions for it — the prediction traffic, the error rate, and the prediction latency. Right now there's no model uploaded, so it's just showing the basic headings. The first feature here is Notebooks, which generates a notebook instance, just like a Jupyter notebook: you
320:30 - 321:00 can just go to new instance here, and you can go to TensorFlow — these kinds of options are there. You can upload your simple Python notebook which uses scikit-learn or pandas or NumPy, where simple visualization and data analysis are done, or you can go for the heavy notebooks like TensorFlow or PyTorch and others, and you can use different kinds of GPUs with them. Here we are going
321:00 - 321:30 with TensorFlow Enterprise 2.1 without GPUs; you also have the option of NVIDIA Tesla GPUs, but right now we are going without GPUs. Here you can give your instance name and everything, or you can also go to the advanced options. Here you can choose the default configuration — the standard one has four virtual CPUs and 15 GB RAM — and if you want more you can select more RAM, or if you want less you can select a smaller one, and then you can
321:30 - 322:00 also select whichever GPU you want, like a Tesla K80 or P100 — various options are there — and then create it. It takes time, so what I did is I already created one, so we can just go back, launch it from here, and open the Jupyter notebook.
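The instance created by hand here can also be declared in Terraform. A minimal sketch, where the machine type matches the demo's four-vCPU choice and the image family is an assumption to verify against the current image list:

```hcl
# Minimal sketch: an AI Platform Notebooks instance as a Terraform resource.
# Image family and machine type approximate the demo's choices.
resource "google_notebooks_instance" "demo_notebook" {
  name         = "demo-notebook"
  location     = "asia-south1-a"         # placeholder zone
  machine_type = "n1-standard-4"         # 4 vCPUs, 15 GB RAM

  vm_image {
    project      = "deeplearning-platform-release"
    image_family = "tf2-ent-2-1-cpu"     # TensorFlow Enterprise 2.1, no GPU; verify the family name
  }
}
```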
322:00 - 322:30 Naturally it comes with all the built-in packages of TensorFlow and everything, but if you want to install some specific package, you can just go to the terminal here and type pip install xgboost to install it — okay, it's downloading and installing. We have multiple options here: launching a notebook or a simple Python console. We can just go to notebook right now; there are multiple languages you can select, and right now I've chosen Python. So we can check the version of XGBoost we have installed:
322:30 - 323:00 we can type import xgboost as xgb, then for checking the version, xgb.__version__. Then we can also launch a simple Python console; what you can do here is check the TensorFlow version as well, so just import tensorflow
323:00 - 323:30 as tf, and then for checking the version, tf.__version__ — this is the 2.1.3 version of TensorFlow. This is how you can play with this Jupyter notebook, just like you play with a local Jupyter notebook. Now coming to the next service, AI Hub. What AI Hub generally does is this: the Google Cloud AI team and Google Cloud team post content here, the latest content or
323:30 - 324:00 whatever kind of content is useful for you, and you can take it — if you want to understand any model you can get it from here, or if you want to run an RNN or something you can get research content from here. That's one thing. The second thing AI Hub does is that if you're working in an organization with separate teams, you can share your model here, and you can put limitations on it so it will be completely private to your team, or to multiple teams if you want to share your model with them. That's a very good thing with AI Hub: you can get your
324:00 - 324:30 model uploaded here and share it with the people you want, with your customizations. So that is AI Hub. Then we have data labeling: in data labeling, what happens is you put in your dataset, be it images or videos, and it gets annotated by human annotators. You can just create it here: insert your dataset, whether image, video, or text, and then give your parameters or descriptions for the way you want your data to get annotated. If you give a better
324:30 - 325:00 description, it will give you better results; the more accurate your description, the more accurate the annotations will be. That's how it can be done. Then we can go back to the other service, jobs. Here you can create a new training job; we have the option of built-in algorithm training and custom code training. With custom code training you can write your model, create your model, and upload it here in whichever way you want — that is custom code. But
325:00 - 325:30 the plus point of built-in algorithm training is that we get the option to select an algorithm; all these algorithms are given — XGBoost, distributed XGBoost, linear learner, wide and deep learner, image classification, image object detection, and the new ones which Google Cloud introduced, like TabNet and NCF. You choose your algorithm, then you give your training data, then you can give your algorithm arguments and provide the split ratio, and you can give other parameters
325:30 - 326:00 also; on the basis of those parameters the model will be analyzed and prepared. That's a big plus point, as it saves a lot of time: instead of making the whole model, it just directly creates a model for you, and you don't have to code everything. Then we have pipelines. What pipelines do is address your whole MLOps life cycle, starting from acquiring data, preparing data, analyzing data, training the model, analyzing the model, deploying the model, and then tracking your model artifacts
326:00 - 326:30 and evaluating the model. It is built on Kubeflow Pipelines and TensorFlow Extended modules, so it basically allows you to build an end-to-end pipeline and at the same time deploy that end-to-end pipeline. This is what AI Platform Pipelines is. Then we have models: models is for when you want to work codewise, with command line programming — you make, analyze, or deploy a model from the command line; that's where models is used. We can discuss that later; that's actually a big process, and I just wanted to show you the demo for
326:30 - 327:00 this so this is how the Google Cloud AR platform works so this is the general description for [Music] it so let's start with having a brief overview of a resource in Google Cloud so the first question that arise while having an overview of a resource in Google cloud is what is actually a resource in the context of Google Cloud a resource can refer to the service level resources that are used to process your workloads virtual machines
327:00 - 327:30 [Music] So let's start with a brief overview of resources in Google Cloud. The first question that arises is: what actually is a resource in the context of Google Cloud? A resource can refer to the service-level resources that are used to process your workloads (virtual machines, databases, and so on) as well as to the account-level resources that sit above the services, such as projects, folders, and the organization. The next thing to understand under the resource overview is resource management. Resource management is focused on how you should configure and grant access to the various cloud resources for your company or team, specifically the setup and organization of the account-level resources that sit above the service-level resources. Account-level resources are the resources involved in setting up
327:30 - 328:00 and administering your Google Cloud account. The third and major thing is the resource hierarchy. Google Cloud resources are organized hierarchically. This hierarchy allows you to map your organization's operational structure onto Google Cloud and to manage access control and permissions for groups of related resources. The resource hierarchy provides logical attach points for access management policies, like Identity and Access Management (IAM), and organization policies. Both IAM
328:00 - 328:30 policies and organization policies are inherited through the hierarchy, and the effective policy at each node of the hierarchy is the result of the policies directly applied to the node plus the policies inherited from its ancestors.
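The inheritance rule is easy to picture in code. Below is a toy Python sketch, purely conceptual and not a Google API call, of how the effective policy at a node is the union of what is applied directly to it and what it inherits from its ancestors.

```python
# Toy illustration of policy inheritance down the resource hierarchy.
# Conceptual only -- this is not the IAM API.
hierarchy = {                  # child -> parent
    "org": None,
    "folder-a": "org",
    "project-1": "folder-a",
}
direct_policies = {            # roles granted directly at each node
    "org": {"roles/viewer"},
    "folder-a": {"roles/editor"},
    "project-1": {"roles/bigquery.user"},
}

def effective_policy(node):
    """Union of the node's own policy and everything inherited above it."""
    roles = set()
    while node is not None:
        roles |= direct_policies.get(node, set())
        node = hierarchy[node]
    return roles

print(effective_policy("project-1"))
# {'roles/viewer', 'roles/editor', 'roles/bigquery.user'}
```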
328:30 - 329:00 The following diagram shows an example resource hierarchy, illustrating the core account-level resources involved in administering your Google Cloud account. The first element is the domain. Your company domain is the primary identity of your organization and establishes your company's identity with Google services, including Google Cloud. You use the domain to manage the users in your organization: at the domain level you define which users should be associated with your organization when using Google Cloud. The domain is also where you can universally administer policy for your users and devices, for example enabling two-factor authentication or resetting passwords for any user in your organization. The domain is linked to either a Google Workspace or Cloud Identity account, and that Google Workspace or Cloud Identity account is associated with exactly one
329:00 - 329:30 organization. You manage the domain-level functionality using the Google Admin console, at admin.google.com. The second element is the organization. An organization is the root node of the Google Cloud hierarchy of resources. All Google Cloud resources that belong to an organization are grouped under the organization node, allowing you to define settings, permissions, and policies for all the projects, folders, resources, and Cloud Billing accounts it parents. An organization is associated with
329:30 - 330:00 exactly one domain (established with either a Google Workspace or Cloud Identity account) and is created automatically when you set up your domain in Google Cloud. Using an organization, you can centrally manage your Google Cloud resources and your users' access to those resources. This includes proactive management and reactive management. Proactive management means reorganizing resources as needed; for example, restructuring or spinning up a new division may require new projects and folders.
330:00 - 330:30 Reactive management means the organization resource provides a safety net to regain access to lost resources, for example if one of your team members loses their access or leaves the company. The various roles and resources related to Google Cloud, including the organization, projects, folders, resources, and Cloud Billing accounts, are managed within the Google Cloud console. The third element is folders. Folders are a grouping mechanism and can contain projects, other folders, or a combination of both. To use folders you must have an
330:30 - 331:00 organization node; folders and projects are all mapped under the organization node. Folders can be used to group resources that share common IAM policies. While a folder can contain multiple folders or resources, a given folder or resource can have exactly one parent. The fourth element is projects. Projects are required in order to use service-level resources such as Compute Engine virtual machines, Pub/Sub topics, Cloud Storage buckets, and so on. All service-level resources are parented
331:00 - 331:30 by projects, the base-level organizing entity in Google Cloud. You can use projects to represent logical projects, teams, environments, or other collections that map to a business function or structure. Projects form the basis for enabling services, APIs, and IAM permissions, and any given resource can only exist in one project. Then we have resources. Google Cloud service-level resources are the fundamental components that make up all Google Cloud services, such as Compute Engine virtual
331:30 - 332:00 machines, Pub/Sub topics, Cloud Storage buckets, and so on. For billing and access-control purposes, resources exist at the lowest level of a hierarchy that also includes projects and an organization. And the last element is labels. Labels help you categorize your Google Cloud resources, such as Compute Engine instances. A label is a key-value pair; you can attach labels to each resource and then filter resources based on their labels. Labels are great for cost tracking at a granular level, because
332:00 - 332:30 information about labels is forwarded to the billing system, so you can analyze your charges by label.
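As a small illustration of labels in practice, here is a hedged sketch using the google-cloud-storage client to attach labels to a bucket so its charges can later be filtered by label; the bucket name and label values are hypothetical.

```python
# Sketch: attach labels to a Cloud Storage bucket so its charges can later
# be filtered by label in billing reports. Bucket name is hypothetical.
# Requires `pip install google-cloud-storage` and default credentials.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-demo-bucket")
bucket.labels = {"team": "analytics", "env": "dev", "cost-center": "cc-101"}
bucket.patch()  # persists the label changes on the bucket
print(bucket.labels)
```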
332:30 - 333:00 Now let's move on to understand Cloud Billing accounts and the Google payments profile; we will look at them side by side. A Cloud Billing account is set up in Google Cloud and is used to define who pays for a given set of Google Cloud resources and Google Maps Platform APIs. Access control to a Cloud Billing account is established by IAM rules. A Cloud Billing account is connected to a Google payments profile, and your Google payments profile includes a payment instrument to which costs are charged. So a Cloud Billing account is a cloud-level resource managed in the Cloud console, whereas a Google payments profile is a Google-level resource managed at payments.google.com. A Cloud Billing account tracks all the costs incurred by your Google Cloud usage; it can be linked to one or more projects, and project usage is charged to the linked Cloud Billing account. A Google
333:00 - 333:30 payments profile, on the other hand, connects to all of your Google services, such as Google Ads, Google Cloud, and Fi phone services. A Cloud Billing account results in a single invoice per billing account, whereas a Google payments profile processes payments for all Google services, not just Google Cloud. A Cloud Billing account operates in a single currency and defines who pays for a given set of resources; it is connected to a Google payments profile, which includes a payment instrument defining how you pay for your charges. The Google payments profile, for its part, stores
333:30 - 334:00 information like the name, address, and tax ID (where legally required) of whoever is responsible for the profile. It also stores your various payment instruments: the credit cards, debit cards, bank accounts, and other payment methods you have used to buy through Google in the past. The payments profile also functions as a document center, where you can view invoices, payment history, and so on. A Cloud Billing account has billing-specific roles and permissions, established by IAM roles, to control access to and modification of billing-related functions, whereas
334:00 - 334:30 the Google payments profile controls who can view and receive invoices for your various Cloud Billing accounts and products. Now let's look at the types of Cloud Billing accounts and profiles. There are two types of Cloud Billing accounts. The first is the self-serve account, where the payment instrument is a credit or debit card or ACH direct debit, depending on availability in each country or region, and the costs are charged automatically to the payment instrument connected to the Cloud Billing account. You can sign up for
334:30 - 335:00 self-serve accounts online. The documents generated for self-serve accounts include statements, payment receipts, and tax invoices, and they are accessible in the Cloud console. The second is the invoiced account, or offline account, where the payment instruments can be checks or wire transfers. Invoices are sent by mail or electronically, and invoices are also accessible in the Cloud console, as are payment receipts. You must be eligible for invoiced billing; to learn more about invoiced billing eligibility, you can
335:00 - 335:30 refer to the GCP billing documentation on the Google Cloud Platform documentation site. Now coming to the types of Google payments profiles: when you create your payments profile, you will be asked to specify the profile type. This information must be accurate for tax and identity verification, and the setting can't be changed later, so when setting up your payments profile, make sure to choose the type that best fits how you plan to use it. There are two types of payments profile, individual and business. With an individual profile you are using your account
335:30 - 336:00 for your own personal payments. If you register your payments profile as an individual, then only you can manage the profile; you won't be able to add or remove users or change permissions on the profile. With a business payments profile you are paying on behalf of a business, organization, partnership, or educational institution, and you use the Google payments center to pay for apps and games and for Google services like Google Ads, Google Cloud, and Fi phone services. A business profile allows you to add other users to the Google payments profile you manage, so that more than one person
336:00 - 336:30 can access or manage the payments profile; all users added to a business profile can see the payment information on it. Now let's understand the charging cycle in the GCP billing context. The charging cycle on your Cloud Billing account determines how and when you pay for your Google Cloud services and your use of Google Maps Platform APIs. For self-serve Cloud Billing accounts, your Google Cloud costs are charged automatically in one of two ways: monthly billing, where costs are charged on a regular monthly cycle, and
336:30 - 337:00 threshold billing, where the costs are charged when your account has accrued a specific amount. For self-serve Cloud Billing accounts, your charging cycle is assigned automatically when you create the account; you do not get to choose your charging cycle and you cannot change it. For invoiced Cloud Billing accounts, you typically receive one invoice per month, and the amount of time you have to pay your invoice (your payment terms) is
337:00 - 337:30 determined by the agreement you made with Google. Now let's understand billing contacts. A Cloud Billing account includes one or more contacts that are defined on the Google payments profile connected to the account. These contacts are people designated to receive billing information specific to the payment instrument on file, for example when a credit card needs to be updated. To access and manage this list of contacts, you can use the payments console or the Cloud console.
337:30 - 338:00 Now let's understand sub-accounts under Cloud Billing accounts. Sub-accounts are intended for resellers: if you are a reseller, you can use sub-accounts to represent your customers' charges for the purpose of chargebacks. Cloud Billing sub-accounts allow you to group charges from projects together in a separate section of your invoice. A billing sub-account is a Cloud Billing account that is owned by a reseller's parent Cloud Billing account, and the usage charges for all billing sub-accounts are paid for by the
338:00 - 338:30 reseller's parent Cloud Billing account; note that the parent Cloud Billing account must be an invoiced account. A sub-account behaves like a Cloud Billing account in most ways: it can have projects linked to it, Cloud Billing data exports can be configured on it, and it can have IAM roles defined on it. Any charges made to projects linked to the sub-account are grouped and subtotaled on the invoice, and the effect on resource management is that access-control policy can be
338:30 - 339:00 entirely segregated on the sub-account, to allow for customer separation and management. The Cloud Billing Account API provides the ability to create and manage sub-accounts; use the API to connect to your existing systems and provision new customers or chargeback groups programmatically.
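As a rough sketch of that API flow, the snippet below creates a reseller sub-account with the Cloud Billing API via the Google API client; the display name and parent billing-account ID are placeholders.

```python
# Sketch: create a billing sub-account under a reseller's parent billing
# account with the Cloud Billing API. Account IDs are placeholders.
from googleapiclient import discovery

billing = discovery.build("cloudbilling", "v1")
subaccount = billing.billingAccounts().create(
    body={
        "displayName": "customer-acme",
        # The parent must be an invoiced (resold) billing account.
        "masterBillingAccount": "billingAccounts/012345-567890-ABCDEF",
    }
).execute()
print(subaccount["name"])  # e.g. billingAccounts/XXXXXX-XXXXXX-XXXXXX
```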
339:00 - 339:30 Now let's understand the relationships between organizations, projects, Cloud Billing accounts, and payments profiles. There are two types of relationships that govern the interactions between an organization, Cloud Billing accounts, and projects: ownership and payment linkage. Ownership refers to IAM permission inheritance, while payment linkage defines which Cloud Billing account pays for a given project. Ownership of a Cloud Billing account is limited to a single organization, but payment linkage of a project to a Cloud Billing account is not limited by organization ownership: it is possible for a Cloud Billing account to pay for projects that belong to an organization different from the one that owns the billing account. The following diagram shows the relationship of ownership and
339:30 - 340:00 payment linkage for a sample organization. In the diagram, the organization has ownership over projects 1, 2, and 3, meaning that it is the IAM permissions parent of the three projects. The Cloud Billing account is linked to projects 1, 2, and 3, meaning it pays for the costs incurred by those three projects. Note that although you can link Cloud Billing accounts to projects, Cloud Billing accounts are not parents of projects in an IAM sense, and therefore projects don't inherit permissions from the Cloud
340:00 - 340:30 Billing account they are linked to. The Cloud Billing account is also linked to a Google payments profile, which stores information like name, address, and payment methods. In this example, many users who are granted IAM billing roles in the organization also have those roles on the Cloud Billing account or the projects. Now that you have a theoretical understanding of billing in Google Cloud Platform, let's see how it's done practically by trying our hands at it. You can go directly to the Google Cloud console.
340:30 - 341:00 I already have my account logged in here. If you don't have an account on Google Cloud Platform, create one; it's a very good platform to have an account on. It will ask you for a little bit of information, including debit or credit card details, and it will deduct a very small amount of money, one or two rupees, which will be refunded soon. What you get by creating a Google Cloud
341:00 - 341:30 Platform account is a 90-day free trial with $300 of free credit, and you can use that for demos of BigQuery or Bigtable, or for launching Compute Engine virtual machine instances; you can try any kind of service using those free credits. So this is the Projects page: you can go here and create a project. I already have a few projects, but I will create one more new project so that I can show you how billing gets
341:30 - 342:00 linked to a project. The project is being created; you can see the project name. This is the older one, so we will open the new one, "demo billing". Now you can see the project name, the project ID, and the project number here, but right now no billing account is linked to it, so we can go here and open Billing.
342:00 - 342:30 From here we can manage billing; you can also create a new billing account. In my case the account got linked automatically: there is a limit on how many projects can be linked to one billing account (three, in my case), and before making this project I had only one project linked, so when I created another it got linked by itself. But if it doesn't get linked automatically, you can open
342:30 - 343:00 the Billing section and link the account yourself: go to "Manage billing accounts", and if you don't have a billing account you can create one there, and then the billing account will be attached to your project. You can see there is one active billing account; if we include closed ones, there are two accounts: the free tier account (my free trial is over, so it's closed now) and "My Billing Account 1", which is the one we are accessing right now. So that's how a billing account gets linked to a project.
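For reference, the same project-to-billing-account link can be made programmatically. Here is a minimal sketch using the Cloud Billing API's updateBillingInfo method; the project and billing-account IDs are placeholders.

```python
# Sketch: link a project to a billing account programmatically -- the same
# operation done above through the console. IDs are placeholders.
from googleapiclient import discovery

billing = discovery.build("cloudbilling", "v1")
info = billing.projects().updateBillingInfo(
    name="projects/demo-billing",
    body={"billingAccountName": "billingAccounts/012345-567890-ABCDEF"},
).execute()
print(info["billingEnabled"])  # True once the link is active
```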
343:00 - 343:30 You can see it's showing the billing account "My Billing Account 1" that we are accessing right now. Now what we can do is set budgets and alerts for it, so we go to "Budgets & alerts". There is an older budget there, so we will delete that one and create a new budget. All you have to do is give it a name, say "custom", and then select what it applies to:
343:30 - 344:00 there are two projects here, but we have to choose "demo billing", so we select that one (we could also choose all of them; it's up to us). Then for services we can choose a specific service or all services; right now I'm choosing all of them. Then I'm unchecking discounts and promotions, because we want to see the price without discounts and promotions applied. Then we go to Next.
344:00 - 344:30 There are two options for the amount: you can base it on last month's spend, so whatever you spent last month is applied here, or you can choose a specified amount. I will choose a specified amount, say 1,000, as the maximum. Then you can put thresholds on it, and Google will notify you, by mail or otherwise, so that you get an alert when 50% of
344:30 - 345:00 your budget has been spent, or 90%, or 100%. You can also choose between actual and forecasted: actual triggers when the amount actually gets spent, whereas forecasted notifies you when it projects how soon the budget will be spent. Right now we are choosing actual. You can create more thresholds if you want; I will add one at 1%, so
345:00 - 345:30 at 1%, which is ₹10, it will notify us that this much of the budget has been spent. You can also enable email alerts, so that you get an email at each threshold, and then just finish it. So the "custom" budget has been created, and zero credits have been used out of the 1,000.
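The budget we just clicked together can also be expressed in code. Below is a hedged sketch using the Cloud Billing Budgets client library (`pip install google-cloud-billing-budgets`); the billing-account ID, project number, and currency are assumptions standing in for the demo's values.

```python
# Sketch: the same budget created in the console, via the Budgets API.
from google.cloud.billing import budgets_v1

client = budgets_v1.BudgetServiceClient()
budget = budgets_v1.Budget(
    display_name="custom",
    budget_filter=budgets_v1.Filter(projects=["projects/123456789"]),
    amount=budgets_v1.BudgetAmount(
        specified_amount={"currency_code": "INR", "units": 1000}
    ),
    threshold_rules=[
        budgets_v1.ThresholdRule(threshold_percent=0.01),  # alert at 1%
        budgets_v1.ThresholdRule(threshold_percent=0.5),
        budgets_v1.ThresholdRule(threshold_percent=0.9),
        budgets_v1.ThresholdRule(threshold_percent=1.0),
    ],
)
created = client.create_budget(
    parent="billingAccounts/012345-567890-ABCDEF", budget=budget
)
print(created.name)
```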
345:30 - 346:00 Next is billing export: you can export your billing data to BigQuery, so your billing amounts and all the information regarding billing will be exported to a BigQuery dataset. It's actually enabled already, but for the "demo" project, not for our "demo billing" project, so we click "Edit settings" to set it up for demo billing. Right now we don't have a dataset, so we create a new one; it is created in BigQuery itself. We'll call it demo_bill_bq. For the data location we
346:00 - 346:30 can leave the default, or whichever location you want to choose. Then there is "Enable table expiration", which is a very useful option: for the data being saved in tables under this dataset, you set how many days it should be retained. For example, if you put 10 days here, then after 10 days all the data in the table, and the table itself, will be deleted.
346:30 - 347:00 So you can give a number of days here, or just remove the setting so the data never expires. Then we create the dataset; the name must contain only letters, numbers, and underscores, so demo_bill_bq is fine. The dataset has been created, so we can choose demo_bill_bq and save. The BigQuery export is being created. Then we can also enable the export for pricing: we go to
347:00 - 347:30 "Edit settings" again and choose our project, demo billing, and the demo_bill_bq dataset. For this we have to enable the Data Transfer API in BigQuery, so we enable it, go back to Billing, then back to Billing export, and now we can proceed, as the Data Transfer API has been activated. Now
347:30 - 348:00 we select demo billing, choose demo_bill_bq, and save. So the pricing export has been enabled for demo billing, and the daily cost detail export is also enabled; that data will be saved in a table under your BigQuery dataset. You can see it in BigQuery too: just go to
348:00 - 348:30 BigQuery and you can see the demo_bill_bq dataset. The table isn't created yet; it will be created when the data is first exported, and the data will be saved there.
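Once the export table starts filling, the charges can be analyzed with a query like the sketch below, run through the BigQuery Python client. The table name is hypothetical; exported billing tables are typically named like gcp_billing_export_v1_<billing_account_id>.

```python
# Sketch: analyze exported billing data per service with the BigQuery
# client (`pip install google-cloud-bigquery`). Table name is hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT service.description AS service,
           ROUND(SUM(cost), 2)  AS total_cost
    FROM `my-demo-project.demo_bill_bq.gcp_billing_export_v1_XXXXXX`
    WHERE usage_start_time >= TIMESTAMP('2021-07-01')
    GROUP BY service
    ORDER BY total_cost DESC
"""
for row in client.query(query).result():
    print(row.service, row.total_cost)
```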
348:30 - 349:00 Now the last feature, and the main one, is the cost table. The cost table shows you the cost of the resources you have used. Right now I haven't created any resources under this billing account, so let's go to the free tier account I told you about: we switch to that closed account, "My Billing Account", and open its cost table. Here I have invoices, because I used resources under it. We choose the demo project, and all the costs
349:00 - 349:30 charged to this project are shown here: cost, credits, and everything. Because this was a free tier account, whatever was charged was reimbursed at the same time. Every resource is broken out here; for example, under the demo project there is Compute Engine, and within Compute Engine you can see the N1 predefined instance core running in Americas for the instance I launched, along with
349:30 - 350:00 its prices: the virtual machine instance is showing a cost of 4,297. But we can cross-check it with the pricing calculator too: open the pricing calculator from here and check how much a Compute Engine instance costs. I might have run that instance for around
350:00 - 350:30 20 days or a month. Under Compute Engine instances we enter the number of instances, say one, leave the other settings, choose the standard machine type, which is what I had used, and click "Estimate". The estimated cost it shows is $48.92 per month, which comes to around ₹4,000-5,000, so that matches how I was charged. Similarly you can see
350:30 - 351:00 the N1 predefined instance core running in Americas charged 163 for another instance, and the RAM running with it cost around 2,147; that too you can check in the pricing calculator under instances, since an instance's cores and RAM are estimated together. Then you can see an E2 instance core that cost 756 rupees; the cost is given for every resource. Similarly you can look at
351:00 - 351:30 App Engine: minimize Compute Engine, and under App Engine you see its prices, the flex instance core hours in the Mumbai region cost 2,089 and the flex instance RAM in Mumbai cost 277. Similarly you can see it for Cloud SQL. And if you think a price is really high,
351:30 - 352:00 you can check here what the average price is for the resource you are going to launch. So now you have understood how your usage is costed, and if it isn't similar to the cost shown in the pricing calculator, you can reconfigure your resources and make them more cost-efficient. That's how GCP billing and pricing works. [Music]
352:00 - 352:30 So first let's understand the best practices under data management. First, ensure total visibility of your data: without a holistic view of data and its resources, it can be difficult to know what data you have, where the data originated from, and what data is in the public domain that shouldn't be. Second, design data loss prevention policies in G Suite. Data loss prevention in G Suite is a set of policies, processes, and tools put in place to ensure your sensitive information won't be lost during a fire, a natural disaster, or
352:30 - 353:00 a break-in; you never know when tragedy will strike, so you should invest in prevention policies before it's too late. Third, have a logging policy in place: it is important to create a comprehensive logging policy within your cloud platform to help with auditing and compliance. Access logging should be enabled on storage buckets so that you have an easily accessible log of object access. Admin activity audit logs are created by default, but you should enable data access logs for data reads and writes in all
353:00 - 353:30 services. Also, use display names in Dataflow pipelines: always use the name field to assign a useful, at-a-glance name to each transform. The field's value is reflected in the Cloud Dataflow monitoring UI and can be incredibly useful to anyone looking at the pipeline; it is often possible to identify performance issues using only the monitoring UI and well-named transforms, without having to look at the code. Moving on to the second category, cost optimization. One of the best practices for cost optimization is
353:30 - 354:00 to automate it, automating tasks and removing manual intervention. Automation is simplified using labels, the key-value pairs applied to various Google Cloud services that we saw earlier: you can attach a label to each resource, such as a Compute Engine instance, then filter the resources based on their labels. The second best practice under cost optimization is using preemptible virtual machines. As with most trade-offs, the biggest reason to use a preemptible virtual machine is cost: preemptible VMs can save up to 80% compared to a normal on-demand virtual machine. This is a huge saving if the workload you are trying to run consists of short-lived processes or jobs that are not urgent and can run at any time, as in the sketch below.
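A minimal sketch, assuming a placeholder project, zone, machine type, and boot image: the only difference from an on-demand VM is scheduling.preemptible set to true.

```python
# Sketch: launch a preemptible VM with the Compute Engine API.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
PROJECT, ZONE = "my-demo-project", "us-central1-a"
body = {
    "name": "preemptible-worker",
    "machineType": f"zones/{ZONE}/machineTypes/n1-standard-1",
    "scheduling": {"preemptible": True},  # cheaper, but may be reclaimed
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-11"
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}
op = compute.instances().insert(project=PROJECT, zone=ZONE, body=body).execute()
print(op["status"])  # insert is asynchronous; poll the operation to wait
```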
354:00 - 354:30 The third practice is purchase commitments. Sustained use discounts are a major differentiator for GCP: they apply automatically once your instance is online for more than 25% of the monthly billing cycle and can net you a discount of up to 30%, depending on the instance type. You can combine sustained use and committed use discounts, but not on the same resources at the same time;
354:30 - 355:00 committed use can get you a discount of up to 57% for most instance types and up to 70% for memory-optimized types. Fourth, utilize cost management tools that take action. Using third-party tools for cloud optimization helps with cost visibility, governance, and cost optimization; make sure you aren't just focusing on cost visibility and recommendations, but find a tool that takes the extra step and takes those actions for you. This automation reduces the potential for human error and saves
355:00 - 355:30 the organization time and money by allowing developers to reallocate their time to more beneficial tasks. The last best practice in cost optimization is to optimize performance and storage costs: in the cloud, where storage is billed as a separate line item, paying attention to storage utilization and configuration can result in substantial cost savings. Storage needs, like compute, are always changing, and it's possible that the storage class you picked when you first set up your environment is no longer appropriate for a given workload. Moving on to the next category,
355:30 - 356:00 networking. The first best practice under networking is to use Virtual Private Cloud (VPC) to define your network: use VPCs and subnets to map out your network and to group and isolate related resources. A VPC is a virtual version of a physical network. VPC networks provide scalable and flexible networking for Compute Engine virtual machine instances and for the services that leverage VM instances, including Google Kubernetes Engine, Dataproc, and Dataflow, among others. VPC
356:00 - 356:30 networks are global resources: a single VPC can span multiple regions without communicating over the public internet, which means you can connect and manage resources distributed across the globe from a single Google Cloud project, and you can create multiple isolated VPC networks in a single project. VPC networks themselves do not define IP address ranges; instead, each VPC network consists of one or more partitions called subnetworks, and each subnet in turn defines one or more IP address ranges. Subnets are regional resources: each subnet is explicitly associated with a single region. A minimal sketch of creating such a network follows.
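This sketch creates a custom-mode VPC and one regional subnet through the Compute Engine API. The names and IP range are placeholders, and since these inserts are asynchronous operations, real code should wait for the network operation to finish before creating the subnet.

```python
# Sketch: custom-mode VPC plus one regional subnet via the Compute API.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
PROJECT = "my-demo-project"

# 1. Custom-mode VPC: we define subnets ourselves instead of auto-creation.
compute.networks().insert(
    project=PROJECT,
    body={"name": "demo-vpc", "autoCreateSubnetworks": False},
).execute()
# (In real code, poll the returned operation until the network exists.)

# 2. A subnet is regional and carries the IP range.
compute.subnetworks().insert(
    project=PROJECT,
    region="us-central1",
    body={
        "name": "demo-subnet",
        "network": f"projects/{PROJECT}/global/networks/demo-vpc",
        "ipCidrRange": "10.0.0.0/24",
    },
).execute()
```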
356:30 - 357:00 Next, centralize network control: use a shared VPC to connect projects to a common VPC network, so that resources in those projects can communicate with each other securely and efficiently across project boundaries using internal IPs. You can manage shared network resources, such as subnets, routes, and firewalls, from a central host project, enabling you to apply and enforce consistent network policies across
357:00 - 357:30 projects. With shared VPC and IAM controls, you can separate network administration from project administration. This separation helps you implement the principle of least privilege: for example, a centralized network team can administer the network without having any permissions in the participating projects, and the project admins can likewise manage their project resources without any permissions to manipulate the shared network. Then, connect to your enterprise network: many enterprises need to connect existing on-premises infrastructure with their Google Cloud resources. Evaluate your
357:30 - 358:00 bandwidth, latency, and SLA requirements to choose the best connection option. If you need low-latency, highly available, enterprise-grade connections that let you reliably transfer data between your on-premises and VPC networks without traversing the internet, use Cloud Interconnect; if you don't require the low latency and high availability of Cloud Interconnect, or you are just starting on your cloud journey, use Cloud VPN. Now moving on to the next category of best practices, security. The first one
358:00 - 358:30 is to apply least-privilege access controls with Identity and Access Management. The principle of least privilege is a critical foundational element of GCP security: the concept of providing employees only with access to the applications and resources they need to properly do their jobs. Second, manage unrestricted traffic and firewalls: limit the IP ranges you assign to each firewall to only the networks that need access to those resources. GCP's advanced VPC features allow you to get granular
358:30 - 359:00 with traffic by assigning targets by tag and by service account. This allows you to express traffic flows logically, in a way that you can identify later, such as allowing a front-end service to communicate with virtual machines under the back-end service's service account. The third practice is to ensure your bucket names are unique across the whole platform: it is recommended to append random characters to the bucket name and not include the company name in it. This makes it harder for an attacker to locate buckets in a targeted attack, for example as below.
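A small sketch of that naming practice with the google-cloud-storage client; the "data-" prefix is an arbitrary placeholder that deliberately says nothing about the company.

```python
# Sketch: random-suffixed bucket name, with no company name in it.
import uuid
from google.cloud import storage

client = storage.Client()
bucket_name = f"data-{uuid.uuid4().hex[:12]}"  # e.g. data-9f1c2ab37d41
bucket = client.create_bucket(bucket_name)     # bucket names are globally unique
print(bucket.name)
```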
359:00 - 359:30 Fourth, set up a Google Cloud organizational structure. When you first log in to your Google Admin console, everything is grouped into a single organizational unit, and any settings you apply to that group apply to all the users and devices in the organization. Planning out how you want to organize your units and hierarchy before diving in will save you time and produce a more structured security strategy. Moving to the next category, Compute Engine region selection. The first point is when to choose your Compute Engine regions: early in the architecture
359:30 - 360:00 phase of an app, decide how many and which Compute Engine regions to use. Your choice might affect your app; for example, the architecture might change if you synchronize some data between copies, because the same users could connect through different regions at the same time. Also, price differs by region, and the process of moving an app and its data between regions is cumbersome and sometimes costly, so it should be avoided once the app is live. Second, consider the factors that matter when selecting regions. There are
360:00 - 360:30 multiple factors in deciding where to deploy your app. The first factor is latency; however, this is a complex problem, because user latency is affected by multiple aspects, such as caching and load-balancing mechanisms, and in enterprise use cases, latency to on-premises systems or latency for a certain subset of users or partners is more critical. The second factor is price: Google Cloud resource costs differ by region, and the resources available to estimate prices are Compute Engine pricing, the pricing calculator, Google Cloud SKUs, and the Cloud Billing API. If you decide to deploy in
360:30 - 361:00 multiple regions, be aware that there are network charges for data synchronization between regions. The third factor is colocation with other Google Cloud services: colocate your Compute Engine resources with other Google Cloud services wherever possible, because while most latency-sensitive services are available in every region, some services are available only in specific locations. The fourth factor is machine type availability: not all CPU platforms and machine types are available in every region; the availability of specific CPU platforms
361:00 - 361:30 or specific instance types differs by region and even by zone. The fifth factor is resource quotas: your ability to deploy Compute Engine resources is limited by regional resource quotas, so make sure you request sufficient quota for the regions you plan to deploy in. Moving on to the third best practice, evaluating latency requirements. Latency is often the key consideration in region selection, because high user latency can lead to an inferior user experience. You can affect some aspects of latency, but some are outside of your control.
361:30 - 362:00 Region selection can only affect the latency to the Compute Engine region, not the entirety of the latency. The first component is last-mile latency: the latency of this segment differs depending on the technology used to access the internet. The second component is Google front-end and edge point-of-presence (PoP) latency: depending on your deployment model, the latency to Google's network edge also matters. This is
362:00 - 362:30 where Google's load-balancing products terminate TCP and SSL sessions, and from which Cloud CDN delivers cached results; many round trips might already end here, because only part of the data needs to be retrieved the whole way. The third component is Compute Engine region latency: the user request enters Google's network at the edge PoP, and the Compute Engine region is where the Google Cloud services handling the request are
362:30 - 363:00 located. This segment is the latency between the edge PoP and the Compute Engine region, and it lies wholly within Google's global network. The fourth component is app latency: different apps have different latency requirements, and depending on the app, users are more or less forgiving of latency issues. Apps that interact asynchronously, or mobile apps with a high latency threshold (100 milliseconds or more), can be deployed in a single region without degrading the user experience; however, for apps such as real-time games, a few milliseconds of latency can have a
363:00 - 363:30 much greater effect on user experience, so deploy these types of apps in multiple regions close to their users.
363:30 - 364:00 Now moving on to the next category, AI Platform training, where we also have several best practices. The first is to choose the right machine configuration for your training characteristics. You can choose arbitrary machine types and various GPU types, and the configuration you choose depends on your data size, model size, and algorithm selection. For example, deep learning frameworks like TensorFlow and PyTorch benefit from GPU acceleration, while frameworks like scikit-learn and XGBoost don't; on the other hand, when you're training a large scikit-learn model, you need a memory-optimized machine. The second practice: don't use large machines for simple models. Simple models might not train faster with GPUs or with distributed training, because they might not be able to benefit from the increased hardware parallelism. Because the scikit-learn framework doesn't support distributed training, make sure you use only the scale-tier or custom machine-type
364:00 - 364:30 configurations that correspond to a single worker instance. The third practice is to scale up before scaling out. Scaling up instead of scaling out while experimenting can help you identify configurations that are performant and cost-effective: start by using a single worker with a single GPU, then try a more powerful GPU before you use multiple GPUs, and after that try distributed training, as discussed next. Scaling up is faster than
364:30 - 365:00 scaling out, because network latency is much slower than the GPU interconnect. The fourth practice under AI Platform training: for large datasets, use distributed training. Distributed training performs data parallelism on a cluster of nodes to reduce the time required to train a TensorFlow model. When you use a large dataset, make sure you adjust the number of iterations with respect to the distribution scale: take the total number of iterations required and divide the total by the number of GPUs multiplied by the number of worker nodes.
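That adjustment is just arithmetic; a tiny sketch with assumed numbers:

```python
# Sketch of the iteration adjustment described above: keep the same total
# work when scaling out by dividing iterations by (GPUs x worker nodes).
total_iterations = 100_000   # iterations a single-GPU worker would need
gpus_per_worker = 4
worker_nodes = 2

iterations_per_replica = total_iterations // (gpus_per_worker * worker_nodes)
print(iterations_per_replica)  # 12500 steps per replica
```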
365:00 - 365:30 I hope you have understood all the best practices under Google Cloud Platform. [Music] Now moving on to our first topic: what exactly is Google Cloud certification? A Google Cloud certification attests to a level of Google Cloud expertise that an individual obtains after passing one or more certification exams. The certification validates your cloud expertise and helps showcase your ability to help companies and businesses with Google Cloud technology.
365:30 - 366:00 Some of the reasons to get a Google Cloud certification: you will be more confident about your cloud skills (according to survey responses from the 2020 Google Cloud certification impact report, 87% of Google Cloud certified individuals are more confident about their cloud skills); the Professional Cloud Architect was the highest-paying certification of 2019 and 2020; and more than one out of four Google Cloud certified individuals took
366:00 - 366:30 on more responsibility and leadership roles at their work. Also, a company or organization is more likely to work with an individual who is certified rather than one who isn't, because the certification acts as proof that you have knowledge of Google Cloud and have worked on it before. Now let us move on to the next topic and see the types of certification. There are three levels of Google certification: first the foundational certification, next the associate-level certification, and then
366:30 - 367:00 the professional-level certification. Let us discuss them one by one. The foundational level validates broad knowledge of cloud concepts and of Google Cloud products, services, tools, features, benefits, and use cases; to sum it up, you should understand the capabilities of Google Cloud. At this level there is only one certificate, the Cloud Digital Leader. This certification is appropriate for individuals in a non-technical job
367:00 - 367:30 role who want to add value to their organization by gaining cloud knowledge, and for someone who has little or no hands-on experience working on Google Cloud. In the foundational certification, multiple-choice and multiple-select questions are asked; you will have three hours to complete the examination, and the registration fee is $99. You can write this examination in English, either online or at a test center near you. The next level of certification is the associate
367:30 - 368:00 level. This level of certification focuses on the fundamental skills of deploying, monitoring, and maintaining projects on Google Cloud. At this level, too, there is only one certificate, the Cloud Engineer. This certification is a good starting point for those who are new to cloud and can serve as a path to the professional-level certifications; it requires at least six months of experience working on Google Cloud. The types of questions asked in the associate examination are also multiple choice and multiple select, but
368:00 - 368:30 you will have only two hours to complete the exam, and the registration fee is $125 plus taxes. You can write this examination in English, Japanese, or Spanish. The last level of Google certification is the professional level. This level of certification ranges across various key technical job functions and assesses advanced skills in design, implementation, and management. These certifications are recommended for individuals with industry experience and
368:30 - 369:00 familiarity with Google Cloud products and solutions. There are eight professional cloud certifications in Google: Cloud Architect, Cloud Developer, Data Engineer, Cloud Security Engineer, Cloud Network Engineer, Cloud DevOps Engineer, Collaboration Engineer, and Machine Learning Engineer. This level requires more than three years of industry experience, including more than one year of hands-on experience working on Google Cloud. The questions asked in these examinations are again multiple choice and multiple select, but
369:00 - 369:30 you'll have two hours to complete the examination. The registration fee is $200 plus taxes, and you can write the examination in English; some of the professional certification exams can also be written in Japanese. You can write the examination either online or at a test center near you. These were the types of certification; now let us move on to the next topic and see some of the major role-based certifications. We will start with the foundational-level certification, the Cloud Digital
369:30 - 370:00 Leader. A Cloud Digital Leader should have a good understanding of Google Cloud's core products and services and how they benefit organizations. The Cloud Digital Leader should understand how the services can be applied in real situations to solve business problems, and how cloud solutions support an enterprise in becoming more efficient. This is the only Google Cloud certification that requires no previous cloud experience and no hands-on experience with Google Cloud, so if you're just starting your cloud career and do not know where to begin, preparing for this
370:00 - 370:30 certification should be your first step. The Cloud Digital Leader exam assesses your knowledge in three areas: general cloud knowledge, general Google Cloud knowledge, and Google Cloud products and services. The next certification we'll talk about is the associate-level Cloud Engineer certification. An Associate Cloud Engineer is expected to deploy applications, monitor operations, and manage enterprise solutions. An individual appearing for this certification should be able to use the Google Cloud console and the
370:30 - 371:00 command-line interface to perform common platform-based tasks and to maintain one or more deployed solutions that leverage Google-managed or self-managed services on Google Cloud. For this associate-level certification you will need more than six months of hands-on experience working with Google Cloud. The Associate Cloud Engineer exam examines your ability to set up a cloud solution environment, plan and configure a cloud solution, deploy and implement a cloud solution, ensure successful operation
371:00 - 371:30 of a cloud solution, and configure access and security. The next certification we'll talk about is the professional-level Cloud Architect certification. This certification is intended for individuals who are interested in designing and managing business solutions using Google Cloud Platform; according to Global Knowledge, Google Certified Professional Cloud Architect is the highest-paying certification. Professional Cloud Architects should be able to use cloud technologies to maximize the benefit for
371:30 - 372:00 their organization. They should have a thorough understanding of cloud architecture and Google Cloud Platform, and should be able to design, develop, and manage robust, secure, scalable, highly available, and dynamic solutions to drive business objectives. For this certification you should have more than three years of industry experience, including one or more years of experience architecting and managing solutions using GCP. The next certification is the Professional Cloud Developer, intended for individuals
372:00 - 372:30 who want to build and test applications using Google Cloud services. A Professional Cloud Developer should be able to build scalable and highly available applications using Google-recommended practices and tools, and should have hands-on experience with cloud-native application developer tools, managed services, and databases. A Professional Cloud Developer should also be skilled in at least one high-level programming language and at producing meaningful metrics and logs to debug and trace code. For this certification, too, you should have more
372:30 - 373:00 than three years of industry experience and more than one year of experience designing and managing solutions using Google Cloud. As for the rest of the professional certifications: the Data Engineer certification is intended for individuals who want to design and build data collection and processing systems and machine learning models on Google Cloud Platform. Next we have the Cloud DevOps Engineer, intended for individuals who want to work as DevOps engineers: they should be efficient in both development and operations and
373:00 - 373:30 should have good knowledge of the various DevOps tools; they build software delivery pipelines and deploy and monitor services. The next certification is the Cloud Security Engineer, intended for security engineers who have a good understanding of security best practices and current industry security requirements; they design, develop, and manage secure infrastructure using Google security services. The next professional certification is the Cloud Network Engineer, intended for individuals who want to design,
373:30 - 374:00 implement, and manage network architectures on Google Cloud Platform. Next is the Collaboration Engineer, intended for individuals who can understand an organization's mail routing and identity management infrastructure and efficiently and securely establish communication and data access; they should have at least one year of Google Workspace administration experience. The next certification is the Machine Learning Engineer, which is
374:00 - 374:30 intended for individuals who want to design, build, and productionize ML models to solve business challenges using Google Cloud technologies. These were the professional-level certifications. Now we will move on to our final topic for today, where I will give you a few tips to prepare for these certifications. The first tip is to read the exam guide for the certification. Before you start practicing, I would suggest you go through the exam guide: it contains the domains and subdomains from which the questions are asked in the
374:30 - 375:00 examination, which will give you a clear idea of what topics to focus on in order to pass. You will find the exam guide on the official Google Cloud certification website. The next and most important step is hands-on experience: if you're writing any certification examination except the Cloud Digital Leader, you should have at least six months of hands-on experience working on Google Cloud. But if you're just starting your career in Google Cloud, or want to start one, I would highly
375:00 - 375:30 recommend using the GCP free trial account. Google provides all its new customers with a free trial offering $300 in free credits; they do this because they want you to fully explore and assess Google Cloud Platform. You can use this $300 to try various Google Cloud products and learn how to use them; you won't be charged unless you choose to upgrade, and it is valid for 90 days. My next suggestion is solving the sample questions, which will
375:30 - 376:00 familiarize you with the format of the exam questions and example content that may be covered in the exam; solving sample questions will also help build your confidence. You can also refer to Google's white papers, which will give you technical knowledge about various Google Cloud concepts and services. If you want to follow a structured approach, I would highly recommend opting for an online certification training, such as Edureka's certification training, which is curated by top industry experts.
376:00 - 376:30 The certification course consists of demonstrations, assignments, MCQs, and a certification project, which will help you master the concepts. [Music] So first let's understand who a Google Cloud Architect is. A Google Cloud certified Professional Cloud Architect enables organizations to leverage Google Cloud technologies through an understanding of Google technology and cloud architecture. The individual designs, develops, and manages
376:30 - 377:00 robust, secure, scalable, highly available, and dynamic solutions to drive business objectives. The cloud architect should be proficient in all aspects of enterprise cloud strategy, solution design, and architectural best practices, and should be experienced in software development methodologies and approaches, including multi-tiered distributed applications that span multi-cloud or hybrid environments. Now that you have understood who a Google Cloud
377:00 - 377:30 Architect is, let's understand why there's a need for one. First of all, let's look at the market trends for Google Cloud Platform. The first pie chart shows data for the first quarter of 2021, where you can see the top three cloud service providers: AWS with 32% market share, then Microsoft Azure with 19%, and Google Cloud with 7%, with various other cloud service providers together constituting the remaining 42%
377:30 - 378:00 of the market share. AWS is the biggest cloud player because it launched way back in 2004, followed by Microsoft Azure, and then Google Cloud launched in 2010. Initially Google Cloud wasn't growing that much, but look at the last three years: in the second graph, covering Q1 2018 to Q4 2020, you can see that AWS had a 28% surge in the market, Azure 50%,
378:00 - 378:30 Alibaba Cloud 54%, and the highest is Google Cloud with 58%. Google Cloud is grabbing the market aggressively because of its various services, like its ML and AI offerings and unique services such as BigQuery; we will talk about those later on. Now let's talk about the need for the Google Cloud Architect certification. The Google Cloud certification validates your expertise
378:30 - 379:00 and showcases your ability to transform businesses with Google Cloud technology, enabling organizations to leverage it. As mentioned, 87% of Google Cloud certified individuals are more confident about their cloud skills, Professional Cloud Architect was the highest-paying certification of 2019 and 2020, and more than one in four Google Cloud certified individuals took on more responsibility or leadership roles at work. Also, one major factor: the Google Cloud Architect certification helps get a candidate short-
379:00 - 379:30 listed at a company looking for a cloud architect, because the first thing they look for in a resume is whether you have the certification or not; if you have it, you will very likely get shortlisted. Those are the major reasons for a Google Cloud Architect certification. Now let's look at the job trends for Google Cloud Architects. In the IT sector, 86% of enterprises have more than a quarter of their IT infrastructure running in cloud environments, and according to a study done by Gartner,
379:30 - 380:00 the public cloud services market will hit about $331.2 billion; 52% of enterprises spend more than $1.2 million annually on public cloud, and 26% of enterprises spend more than $6 million, which is really huge. Now let's look at the salary, or pay scale, of a Google Cloud Architect: according to Global Knowledge, the average pay for a Google Cloud Architect in India is 15.3 lakhs, whereas in the US it is
380:00 - 380:30 $175,000, so you can see that a very high salary is offered to Google Cloud Architects. Now let's understand why Google Cloud Platform is in great demand. We all know how big the databases of Gmail, YouTube, and Google Search are, and I don't think Google's servers have gone down in recent years; it's actually one of the biggest infrastructures in the world, so it seems an obvious choice to trust them, right? So now let's look at what really gives Google Cloud an upper hand over other vendors. First
380:30 - 381:00 of all the major advantage Google cloud has is of Google Cloud's iot core which is a fully managed service to easily and securely connect manage and ingest data from globally dispersed devices also Google cloud has a better pricing than its competitors which means it cost effective third due to serverless Computing it is highly available and fall tolerant with Cloud air Cloud functions in uh gcp is the easiest way to run your code in the cloud integrated with machine learning Technologies also
381:00 - 381:30 it is highly skillable as it uses Autos scaling to automatically adjust the number of virtual machine instances that are hosting your application so what it does is it allows your application to adapt to different varying amounts of traffic lastly overcloud smart analytics Solutions are fully managed means Big Data Antics Solutions this multicloud analytics platform empowers everyone to get insights while eliminating constraints of scale performance and cost it uses realtime insights and data apps to drive decisions and Innovation
381:30 - 382:00 Now let's look at the job description of a Google Cloud architect. You can see here the job description of a Google Cloud technical architect at the Accenture company, and the location, you can see, is Gurugram, Haryana. Now let's understand what type of work the company requires and what skills you will be exercising on a regular basis. The required work experience, first of all, is four to six years, with a minimum of four years of overall hands-on experience in programming and data structures, plus strong cloud expertise in delivering
382:00 - 382:30 data solutions, especially database services and big data services, on Google Cloud Platform. A Google Cloud architect should also be experienced in integrating cloud native services. Cloud native services are the services unique to a particular cloud service provider; the Google Cloud architect should know the services unique to Google Cloud Platform, know how to work with them, and know how to execute them and produce an output through them. So a Google Cloud architect should be integrating cloud native services into secure, efficient, and scalable
382:30 - 383:00 solutions. A Google Cloud architect should also know how to design data pipelines using cloud native data technologies; cloud native data technologies can be things like storage accounts, Cloud SQL, Bigtable, BigQuery, Dataflow, Dataproc, etc. The Google Cloud architect should also have deep knowledge of RDBMS as well as NoSQL for database programming. The other non-technical, or say professional, attributes a Google Cloud architect should carry are analytical thinking, attention to detail, good time management skills, and the ability to work to tight
383:00 - 383:30 deadlines, plus the capacity to work under pressure. These are all the things required of a Google Cloud architect, and you can see the main emphasis is on data solutions, cloud native services, data pipelines, and cloud native data technologies, and also on RDBMS and SQL. Now let's understand the roles and responsibilities a Google Cloud architect performs on a daily, weekly, or monthly basis. A Google Cloud architect should know how to design and deploy applications on Google Cloud Platform that are dynamically scalable, highly available, fault
383:30 - 384:00 tolerant, and also very reliable and secure. Also, if the Google Cloud architect is going to deploy an application, he should know what kind of Google Cloud services should be used for that kind of deployment based on the given requirements. Especially, he should also know how to migrate applications, including multi-tier applications and heavy databases, onto Google Cloud Platform. One more thing: he should devise a strategy for working on Google Cloud Platform with the most optimization and cost-saving
384:00 - 384:30 techniques, and if new changes are coming he should know how to coordinate and be easily adaptable to them. Also, he should know how to implement multiple Google Cloud services and use multiple products of Google Cloud Platform in a way that doesn't cost much to the company, or say organization; that means he should be well aware of the cost optimization techniques in Google Cloud Platform. Now let's understand the skills required of a Google Cloud architect. First of all,
384:30 - 385:00 let's see the programming skills required of a Google Cloud architect. He should have hands-on experience with SQL as well as NoSQL, which are majorly required for database programming, database migration, and querying purposes. He should also know the Python programming language, maybe not at an expert or very advanced level but at least at an intermediate level, so that he can create, analyze, and organize large chunks of data; it is also very useful for working with the ML and AI
385:00 - 385:30 products like natural language processing, natural language understanding, and the AutoML products. For these purposes Python is very much required. Also, for application development purposes he should know the Java programming language, and for web development purposes he should carry a good knowledge of HTML, CSS, and JavaScript. Now let's see the operating systems the Google Cloud architect should be well aware of and have a good knowledge of: Linux, Solaris, Ubuntu, Windows, Unix. But preferably,
385:30 - 386:00 I would say, he should have a good knowledge of Linux, as it brings features like open source, security, and customization, and is adopted by different cloud platforms as well. He should also have a very good knowledge of security fundamentals in Google Cloud Platform, like identity and access management: how to provide certain access to certain users. If an employee is working under him and he is handling a team, then he should know what kind of access he should provide to that employee, and what the roles and policies of identity and access
386:00 - 386:30 management are. He should also be well aware of cloud compliance techniques. Cloud compliance is the principle that cloud-delivered systems must be compliant with the standards their customers require, and cloud compliance ensures that cloud computing services meet those compliance requirements. Lastly, he should be well aware of data privacy: if he is uploading any data to the cloud, or extracting any data from the cloud, import or export, everything, he should be well aware of data privacy and how to protect and secure data from exploitation.
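To make the access-management point concrete, here is a hypothetical, minimal sketch of granting a teammate read-only access to a Cloud Storage bucket from Python; the bucket name and principal are illustrative assumptions:

    # Hypothetical sketch: grant read-only object access on one bucket.
    from google.cloud import storage

    client = storage.Client()  # uses Application Default Credentials
    bucket = client.bucket("example-team-bucket")  # illustrative bucket name

    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append({
        "role": "roles/storage.objectViewer",      # predefined read-only role
        "members": {"user:teammate@example.com"},  # illustrative principal
    })
    bucket.set_iam_policy(policy)

Choosing a narrow predefined role like this, rather than a broad one, is exactly the least-privilege judgment being described here.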
386:30 - 387:00 A Google Cloud architect is also required to have a good knowledge of networking, meaning networking in Google Cloud Platform: you should have a very good knowledge of Cloud CDN, which is a content delivery network for serving web and video content, and also of Cloud DNS, which is a domain name system for reliable, low-latency name lookups, as well as of the hybrid connectivity options for VPN, peering, and enterprise needs. On the whole, a Google Cloud architect should know how to design networks in ways that make sure the network is responsive to user
387:00 - 387:30 demands, by building automatic adjustment procedures. Lastly, he should be very well aware of the cloud storage techniques in Google Cloud Platform, especially usability and accessibility: how easily cloud storage can be used and accessed. Secondly, he should ensure security for storage: if he is storing any data, he should be ensuring that he can use security techniques to secure that data. And whatever services he is combining with
387:30 - 388:00 cloud storage should be cost-efficient as well; whichever cloud storage service he is using, he should be able to optimize it in a way that costs the least to the organization, and he should use it in a way that is portable, or say easily usable, so that it is convenient for the organization and for the client. Also, he should know how to automate the data he is storing in cloud storage, and the data should be synchronized across the multiple
388:00 - 388:30 devices of the organization. He should also be well aware of disaster recovery in cloud storage, so that if data gets lost it can be restored from a backup later on. Now, when we come to data storage, there are four storage classes. The first is Standard storage, which is good for hot data that is accessed frequently, including websites, streaming videos, and mobile apps. Second is Nearline storage, which is low cost and good for data that
388:30 - 389:00 can be stored for at least 30 days, including data backups and long-tail multimedia content. Then there is Coldline storage, which has a very low cost and is good for data that can be stored for at least 90 days, including disaster recovery. Then there is Archive storage, which has the lowest cost and is good for data that can be stored for at least 365 days, including regulatory archives.
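As a rough illustration of how those four classes surface in practice, here is a hypothetical sketch that creates a bucket with a chosen storage class via the Python client library; the bucket name and location are illustrative assumptions:

    # Hypothetical sketch: create a bucket in a given storage class.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("example-archive-bucket")      # illustrative name
    bucket.storage_class = "ARCHIVE"  # or "STANDARD", "NEARLINE", "COLDLINE"
    client.create_bucket(bucket, location="us-central1")  # illustrative region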
389:00 - 389:30 Now, these are the non-technical skills required of a Google Cloud architect: he should be flexible in his working style, have leadership skills to lead a team, and be business oriented, knowing how to profit the organization; if he is working on a project, he should know how to extract the maximum leads, or you can say the maximum profit, for the business. He should also have very good communication skills, so that he can lead a team, connect multiple teams, and easily handle clients as well. Now let's understand how one can become a Google Cloud architect. To become a Google Cloud architect you should know cloud native services, meaning
389:30 - 390:00 the unique services such as Firebase, Firestore, App Engine, Google Kubernetes Engine, and Cloud Run; you should have good hands-on experience, or say very deep knowledge, of these cloud native services. You should also be well aware of database programming, both SQL and NoSQL, so that you can handle SQL databases as well as NoSQL databases: there is Cloud SQL, and then there are Cloud Bigtable, Firestore, Memorystore, and Cloud Spanner; all of these are database
390:00 - 390:30 services provided in Google Cloud Platform. You should also be well aware of big data solutions, so that the big data services in Google Cloud Platform, like BigQuery, Dataproc, Cloud Pub/Sub, and Dataflow, can be easily managed. You should also carry very fine analytical skills, and be well aware of application development: you can use App Engine in Google Cloud Platform for developing applications and also for
390:30 - 391:00 deployment purposes. You should also know about the automation process on Google Cloud Platform; for example, there is a free Google Cloud Platform automation tool from ParkMyCloud, which is a tool for scheduling non-production instances on Google Cloud Platform; it helps companies reduce their cloud computing costs, improve their IT governance, increase accountability, and optimize their cloud computing resources as well. You should also be well aware of, and have a good knowledge of, the DevOps products and integrations, so that you can build and
391:00 - 391:30 deploy new cloud applications, store artifacts, and monitor app security and reliability on Google Cloud. For instance, there are Cloud Build and Artifact Registry: Cloud Build defines custom workflows for building, testing, and deploying across multiple environments, while Artifact Registry is used for storing, managing, and securing your container images and language packages. All these things you should be well aware of. You should also be flexible enough to work in a fast-paced environment, and you should be well aware of the compute services as
391:30 - 392:00 well: there is Compute Engine, where you can run virtual machines in Google's data centers, and there is Google App Engine, which is a serverless application platform for apps and backends; you should also be well aware of, and have hands-on experience with, cloud GPUs, preemptible virtual machines, and shielded virtual machines. Other than that, there are also the ML and AI services, which are quite unique to Google Cloud; Google Cloud is actually famous for its ML and AI services, so that
392:00 - 392:30 is something you should have deep knowledge of and hands-on experience with, and you should really be well aware of it. You can see there are services like AutoML and Vision AI, there is Cloud Translation for language detection and translation, there is Cloud Natural Language for sentiment analysis, and Video AI is also there for video classification and recognition, which is deep learning territory. All of this is something you should have hands-on experience with and know really well; this is what companies are targeting. Then, lastly, you should
392:30 - 393:00 know how Google Kubernetes Engine works, and you should know about the architectural workings of the Kubernetes engine. There are two modes of operation in Google Kubernetes Engine, Standard and Autopilot; there is also cluster autoscaling in it, pre-built Kubernetes applications and templates are available, and container-native networking and security are also provided, including the GKE Sandbox, the Google Kubernetes Engine sandbox.
393:00 - 393:30 So these are some of the concepts you should be well aware of. Other than that, you should also have very good communication skills to handle clients and to handle teams, multiple teams as well. Now let's also see the Google Cloud certification training provided by Edureka. As I have explained step by step, the Google Cloud certification training is designed to meet the industry benchmark; it is, of course, an industry-relevant course, and there are multiple batches for it: right now there is a July 31st batch, which is filling fast, and there is also an August 28th batch; there are monthly
393:30 - 394:00 batches you can register for. As I have explained step by step why the Google Cloud certification is needed and why you need to be a certified Google Cloud architect, you can see the course covers compute services and virtual networks, security and identity fundamentals (I have explained identity and access management, cloud compliance, and data privacy), and, other than that, data storage services. These are the fundamentals: the services you should actually know, along with where you should implement them and for what
394:00 - 394:30 requirement. This is how the industry-relevant Google Cloud Platform architect course has been constructed by Edureka, and it can surely help you in cracking the Google Cloud Platform architect exam that we are going to talk about. You can also see that DevOps and automation are covered; as I have told you, these services are very important and are the services you should target, so you can register for Edureka's Google Cloud Platform architect certification training. So this was all about how
394:30 - 395:00 to become a Google Cloud architect. Now let's understand how to crack the Google Cloud architect exam. If you don't know much about the Google Cloud architect exam: the certification is valid for two years, and it is a two-hour exam; when you have prepared for it you can apply, and you can also study the documentation on Google Cloud Platform while preparing. For cracking the Google Cloud architect exam, you should know how to design and plan a Google Cloud solution architecture, which means designing a solution infrastructure that meets business requirements; the considerations
395:00 - 395:30 for that include business use cases and product strategy, cost optimization, supporting the application design, integration with external systems, movement of data, design decision trade-offs (build, buy, modify, or deprecate), and compliance and observability as well. Other than that, it means designing a solution infrastructure that meets technical requirements, like high availability and failover design, scalability to meet growth requirements, and performance and latency; and while designing network, storage, and compute services, you should consider integration
395:30 - 396:00 with on-premises or multi-cloud environments, choosing data processing technologies, and choosing compute resources and mapping compute needs to platform products. It also means creating a migration plan, that is, documents and architectural diagrams, for which the considerations include integrating solutions with existing systems, migrating systems and data to support the solution, software license mapping, network planning, and also demand management planning,
396:00 - 396:30 as well as envisioning future solution improvements, for which the considerations include cloud and technology improvements, evolution of business needs, and evangelism and advocacy. The second thing is that you should know how to manage and provision a solution infrastructure. For that, you need to configure network topologies, and you should consider extending to on-premises environments and extending to multi-cloud environments that may include Google Cloud to Google Cloud communication; one more consideration while
396:30 - 397:00 configuring network topologies is security protection. Also, while managing and provisioning a solution infrastructure, you should configure individual storage systems, and you should consider data storage allocation, data processing, compute provisioning, security and access management, and data growth planning for that. You should also configure compute systems while managing and provisioning, and you should consider compute resource provisioning and compute volatility configuration as well. The third thing is that you should know how to design for security and compliance. While
397:00 - 397:30 designing for security, consider identity and access management, resource hierarchy, data security, separation of duties, and security controls like auditing, VPC Service Controls, etc., and also managing customer-managed encryption keys with Cloud Key Management Service, and remote access as well. The second thing under designing for security and compliance is designing for compliance; for that, consider legislation, commercial as well as industry certifications, and
397:30 - 398:00 audits as well, including logs. The fourth step is analyzing and optimizing technical and business processes. While analyzing and defining technical processes, consider the software development life cycle and continuous integration/continuous deployment, as well as troubleshooting, testing and validation of software and infrastructure, and, other than that, service catalog and provisioning, and business continuity and disaster recovery as well. The second step is to analyze and define business processes, and for that you need to consider stakeholder management,
398:00 - 398:30 change management, team assessment, decision-making processes, customer success management, and cost optimization of resources as well. The third thing under analyzing and optimizing technical and business processes is developing procedures to ensure reliability of solutions in production; you can take, for example, chaos engineering and penetration testing. The fifth step is managing implementation: that can be done by advising development and operations teams to ensure successful deployment of the solution, and for that purpose you need to consider application development, API best practices, testing
398:30 - 399:00 frameworks, and data and system migration and management tooling. Implementation can also be managed by interacting with Google Cloud programmatically, and for that you need to consider Google Cloud Shell and the Google Cloud SDK, and also the cloud emulators, meaning the emulators for Cloud Bigtable, Datastore, Spanner, Pub/Sub, Firestore, etc. The last step is ensuring solution and operations reliability, and for that you need to consider monitoring, logging, profiling, and alerting solutions, as well as deployment and release management, assisting with the support of
399:00 - 399:30 deployed solutions, and evaluating quality control measures. [Music] Now, starting with our interview questions, we are first going to see some general cloud computing and Google Cloud Platform questions, then questions on the compute and hosting domains, then storage and databases, next networking, followed by big data, machine learning, and cloud artificial intelligence. So I guess it is clear what we are going to talk about today, so let's get started. The first question we're going to discuss is:
399:30 - 400:00 what is the cloud? Now, there are various ways to answer this. One of them could be: the cloud can be referred to as a global network of servers, each with a unique function; the servers are designed to store and manage data, to run various applications, or to deliver content, and they have many more functionalities. You can also mention that the servers are located in data centers across the world, and you can also talk about the various services the cloud offers, such as compute, storage, databases, networking, and so on. Now, I
400:00 - 400:30 want you to understand the concepts and frame the answer in your own words. Moving on to our next question: what is cloud computing? Now, like the previous question, even this question can be answered in many ways. To answer it, I would say cloud computing is the on-demand availability of computer system resources; these resources could include computing power, storage, databases, and so on. With cloud computing you don't have to buy, own, or maintain physical data centers and servers; you can just rent these resources whenever
400:30 - 401:00 you need them from a cloud service provider. Here you can also mention what cloud computing is used for: it can be used for data backup, disaster recovery, virtual desktops, software development and testing, big data analytics, and customer-facing web applications. Now moving on to our third question: list the types of service models available in cloud computing. There are three types of service models available in cloud computing, namely IaaS, PaaS, and SaaS. IaaS stands for infrastructure as a service; in this
401:00 - 401:30 service model you can rent IT infrastructure such as servers and virtual machines, storage, networks, and operating systems. Let's say a user wants to use a Linux machine: he can access the Linux machine using the IaaS service model without worrying about the physical machine or the networking of the system on which the OS is installed. The next service model is platform as a service. This service model provides an on-demand environment for developing, testing, delivering, and managing your software applications; the users don't
401:30 - 402:00 have to worry about setting up or managing the underlying infrastructure of servers, storage, networks, and databases which are needed for development, as this is taken care of by the cloud service provider itself. The next service model is software as a service. In this service model the cloud provider leases applications and software, which are owned by them, to its clients; the clients can access the software on any device which is connected to the internet, using tools such as a web browser or an application. Now, to summarize this answer, just think
402:00 - 402:30 of it this way: infrastructure as a service provides you with infrastructure such as virtual machines or servers, whereas platform as a service provides you with a platform where you can develop, test, and run your applications, and software as a service provides you with the software itself. Now I guess you have some idea about the service models available in cloud computing, so let us move on to the next question. The next question is: list the types of cloud deployment models. There are four types of cloud deployment models, which are public cloud,
402:30 - 403:00 private cloud, hybrid cloud, and community cloud. In the public cloud deployment model, the resources such as applications and storage are available to the general public over the internet; these resources can be free or sold on demand, which allows users to pay only per usage for the CPU cycles, storage, or bandwidth they use. Now, when we talk about a private cloud, it is operated only for a single organization; it offers services over a private internal network which is typically hosted on premises. This
403:00 - 403:30 private cloud is costlier, but there is a high level of security. Next, the hybrid cloud deployment model can be defined as a combination of public and private cloud; it can share services between public and private clouds depending on the purpose. The fourth cloud deployment model is the community cloud. The community cloud infrastructure is shared between several organizations from a specific community with a common concern; for example, universities cooperating in the same area of interest as a research institute
403:30 - 404:00 can use the same community cloud. So these were the four types of cloud deployment models. Now let us move on to the next question and see what the benefits of cloud computing are, or why companies are increasingly adopting cloud computing. Here you can mention some of the benefits of cloud computing, such as the reduced cost of managing and maintaining IT systems and infrastructure. The next benefit is scalability of the IT resources: you can scale up or scale down your operational and storage needs according to your
404:00 - 404:30 convenience. The next benefit is that it provides better productivity and collaboration efficiency; for example, if a team is working on a project across different locations, you can use cloud computing to give your employees, contractors, and third parties access to the same files. The next benefit is data backup and storage; you can elaborate on this by saying your data is backed up, so you don't have to worry if your data is lost or deleted. The next advantage is that cloud service providers provide automatic updates; this includes
404:30 - 405:00 up-to-date versions of software as well as upgrades to servers and computer processing power. Now, these were just some of the benefits of cloud computing; you can also mention a few more. So now let us move on to the next question. The next question is: what is Eucalyptus? Eucalyptus is the abbreviation of Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems. It is an open-source software infrastructure that helps with the implementation of clusters on a cloud computing platform. It can
405:00 - 405:30 build public, hybrid, and private clouds, and it has the ability to turn a data center into a private cloud; it also helps the user utilize its functionalities across other organizations. Now, these were some of the general cloud computing questions that can be asked in your GCP interview, so let us move on to the next set of questions, on Google Cloud Platform. The first question is: what is Google Cloud Platform? We all know Google Cloud Platform is a cloud service provider, but
405:30 - 406:00 just to define it: Google Cloud Platform is a suite of cloud computing services and management tools offered by Google; it runs on the same cloud infrastructure that Google uses internally for its end-user products, such as Google Search, Gmail, Google Photos, and YouTube. Now, this is a very basic question, but even this can be asked in your interview. The next question is: what are the various services offered by GCP? The various services offered by Google Cloud Platform are compute services, storage and database services, networking
406:00 - 406:30 services, big data services, identity and security services, Internet of Things services, and then machine learning and cloud artificial intelligence services. Moving on to our next question: what is the Google Cloud SDK? The Google Cloud SDK, or Google Cloud software development kit, is a set of command-line tools used for development on Google Cloud; with these tools you can access Compute Engine, Cloud Storage, BigQuery, and other services directly from the command line.
406:30 - 407:00 So now I guess you have understood what the Google Cloud SDK is; let us move on to the next question and see what the Google Cloud APIs are. Google Cloud APIs are programmatic interfaces to Google Cloud Platform services. They are a key part of Google Cloud Platform, allowing you to easily add the power of everything from computing to networking to storage to machine-learning-based data analysis to your applications.
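As a rough illustration of what "programmatic interface" means here, this hypothetical sketch calls the Cloud Storage API through its Python client library; the project ID is an illustrative assumption:

    # Hypothetical sketch: consuming a Google Cloud API via a client library.
    from google.cloud import storage

    client = storage.Client(project="example-project")  # illustrative project ID
    for bucket in client.list_buckets():  # the library pages through API responses
        print(bucket.name)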
407:00 - 407:30 Now moving on to our next question: why would you prefer GCP over other cloud service providers? Here you are expected to state the benefits of GCP. You can start the answer by saying that each cloud service provider has its own pros and cons, but what makes Google Cloud Platform unique is that it offers a much better pricing model compared to the other cloud service providers. Next, considering hosted cloud services, GCP has overall increased performance and service. You can also mention that Google Cloud is very fast in providing updates for servers and security in a better and more efficient manner, and that the security
407:30 - 408:00 level of Google Cloud Platform is excellent: the cloud platform and the networks are secured and encrypted with various security measures. So I guess you got some idea about how to answer this question; now let us move on to the next question and see what projects in GCP are and how to create one. A project organizes all your Google Cloud resources; a project consists of a set of users, a set of APIs, and billing, authentication, and monitoring settings for those APIs. So, for example, all of
408:00 - 408:30 your Cloud Storage buckets and objects, along with the user permissions for accessing them, reside in a project. In order to create a project, you have to sign in to the Google Cloud Platform console; then, in the top left corner, you will have an option called the project selector. Select that and click on "New Project" to create a new project, or you can also select an existing project from the list. So now I guess you have some idea about projects in GCP, so let us move on to the next question. Our next question is: what
408:30 - 409:00 is Cloud Shell? If you have been using GCP, you will know what Cloud Shell is; for people who don't know: Cloud Shell is an online development and operations environment which is accessible anywhere with your browser. You can manage your resources with its online terminal, which is preloaded with utilities such as the gcloud command-line tool, kubectl, and many more, and you can also develop, build, debug, and deploy your cloud-based applications using the online Cloud Shell Editor. So this was about Cloud Shell. Our
409:00 - 409:30 next question is: what are availability zones and regions, and how many availability zones and regions are there in GCP? A region is a specific geographical location where you can host your resources, and availability zones are isolated locations within these regions from which public cloud services originate and operate. Talking about Google Cloud Platform's availability zones and regions: it has 25 regions with 76 zones, and each region has at least three or more zones. The next question, right
409:30 - 410:00 after this, could be: how would you choose an availability zone, or what parameters would you consider while selecting an availability zone? You can answer this by saying you have to select the availability zone based on the following factors. The first factor is latency: opt for the closest region for a low-latency, fast connection to the servers. This ensures better performance in terms of quick loading and transfer times, which results in an overall better user experience, so choose a region that is closest to the majority of your customer base. The next
410:00 - 410:30 factor you should consider is cost: different regions will have different costs for the resources. For example, an e2 instance virtual machine in the US central region would cost me somewhere around $48 per month, but the same virtual machine in the Mumbai region would cost me $58, so you can see there is a $10 difference per month between these two regions. These are the factors you need to keep in mind before selecting an availability zone and region. So these were some of the general Google Cloud
410:30 - 411:00 Platform questions, so let us move on to the next set of questions, on compute and hosting services in GCP. The first and most basic question they could ask is: what is Google Compute Engine? Google Compute Engine is the primary compute service in GCP, and you can explain it in very simple terms: it is a secure and customizable compute service that lets you create and run virtual machines on Google's infrastructure.
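If the interviewer digs deeper, having touched the API helps; here is a hypothetical sketch that lists Compute Engine VMs with the google-cloud-compute client library, where the project ID and zone are illustrative assumptions:

    # Hypothetical sketch: list the virtual machines in one zone.
    from google.cloud import compute_v1

    client = compute_v1.InstancesClient()
    for instance in client.list(project="example-project", zone="us-central1-a"):
        print(instance.name, instance.status)  # e.g. "demo-vm RUNNING"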
411:00 - 411:30 So now, moving on to our next question: what is Google App Engine? App Engine is a fully managed serverless platform for developing and hosting web applications at scale. It allows you to choose from several popular languages, libraries, and frameworks to develop your application, and then App Engine takes care of provisioning servers and scaling your application instances based on the demand. Now, when you answer this question, they might ask you: what is serverless computing? Serverless computing is nothing but a cloud computing execution model in which the
411:30 - 412:00 cloud provider allocates machine resources on demand, which means they take care of the servers on behalf of the customers; the customers can focus only on building their application, while the servers and all of that are taken care of by the cloud service provider. So I guess you have some idea about serverless computing; now let us move on to our next question. This question is a frequently asked one: how are Google App Engine and Google Compute Engine different from each other? You can answer this by saying Google Compute Engine and Google App Engine are
412:00 - 412:30 complementary to each other: Google Compute Engine is an infrastructure-as-a-service product, whereas Google App Engine is a platform-as-a-service product of Google. So if you want the underlying infrastructure to be more in your control, then Compute Engine is a perfect choice; for instance, you can use Compute Engine for the implementation of customized business logic, or in case you need to run your own storage system. On the other hand, you can use Google App Engine if you do not want to provision and manage your servers or scale them
412:30 - 413:00 yourself. Now I guess you have understood the difference between Google Compute Engine and Google App Engine, so let us move on to our next question, which is: how does the pricing model work in GCP? To answer this generally, you can say that while working on Google Cloud Platform the user is charged on the basis of the compute instances, the network used, and the storage used. Notice that I'm not specifically talking about a particular service here; this is just a general overview. Google Cloud charges virtual machines on a per-
413:00 - 413:30 second basis, with a minimum charge of one minute. The cost of storage is charged on the basis of the amount of data that you store, and the cost of the network is calculated as per the amount of data that has been transferred between the virtual machine instances while communicating with each other, or over the network, meaning the internet. You should prepare yourself with questions on the Google Cloud Platform pricing models, as these are among the most common Google Cloud interview questions. Moving on to our next question: what is Google Kubernetes
413:30 - 414:00 Engine? Google Kubernetes Engine, or GKE, provides a managed environment for deploying, managing, and scaling your containerized applications using Google infrastructure; basically, in simple terms, it's a platform to deploy and manage containerized applications. So this was the definition of Google Kubernetes Engine. The next question is a scenario-based question: if I want to run my application on GCP, which product should I use? You can answer this by saying it depends on the application requirements.
414:00 - 414:30 GCP basically offers four means of application deployment: Google Compute Engine, Google Kubernetes Engine, Google App Engine, and Cloud Functions. You can use Google Compute Engine if you want to run your application on a customizable virtual machine platform. Next, if you want to run a containerized application, you can use Google Kubernetes Engine. You can use Google App Engine if you do not want to manage the infrastructure and just want to deploy the application without worrying about scaling your servers. Next, with Cloud
414:30 - 415:00 Functions, your application runs as an event-driven function, which means your code is executed only after a particular event occurs. So these were the four primary means of application deployment; you can also tell the interviewer that you can use a combination of these services. Let us see the next question. The next question is: how do you migrate servers and virtual machines from on premises, or from another cloud, to Compute Engine on GCP? If the interviewer asks you this question, you can just say that Google provides a
415:00 - 415:30 migration tool known as Migrate for Compute Engine. This tool is used to migrate virtual machines from on-premises data centers, or from any other cloud service provider, into Compute Engine on the GCP platform. You can also mention that the tool is provided by Google itself and comes at no additional cost. They can also ask this question as "what is Migrate for Compute Engine?"; the answer would be the same. So now let us move on to the next question and see why you should opt for Google Cloud hosting. This question is
415:30 - 416:00 usually asked in Google Cloud consultant interviews; the interviewer may ask this question to check your knowledge of, and your ability to explain, Google Cloud. Here, talk about the advantages of choosing Google Cloud hosting. The first advantage is that it provides better pricing plans. Next, there is the benefit of live migration of virtual machines, which means you can migrate a running virtual machine to and from other cloud service providers or on premises. The next benefit is that it provides enhanced
416:00 - 416:30 performance and execution; it also has strong control and security of the cloud platform. The next benefit is that it has inbuilt redundant backups, which ensure data integrity and reliability. So these were some of the benefits of Google Cloud hosting. Let us move on to the next question and see what shielded virtual machines are. Shielded virtual machines are virtual machines on Google Cloud that are hardened by a set of security controls which help them defend against threats such as malicious project
416:30 - 417:00 insiders, malicious guest firmware, and kernel or user-mode vulnerabilities; using shielded virtual machines helps protect enterprise workloads from threats like remote attacks and malicious insiders. So these were some of the questions on compute and hosting services in GCP; now let us talk about the interview questions on storage and database services in GCP. Our first question in the storage and database section is: what is Cloud
417:00 - 417:30 Storage? Cloud Storage is the primary storage service in GCP; it is a service offered by Google for storing your objects in the Google Cloud. Now, an object is an immutable, or unchangeable, piece of data consisting of a file of any format; objects can be unstructured data such as music, images, videos, backup and log files, or archive files. Also, objects have two components, object data and object metadata: object data is typically
417:30 - 418:00 the file that you want to store, while object metadata is a collection of name-value pairs that describe the various object qualities. Now, you store these objects in containers called buckets, and when you mention buckets, there is a high probability of the interviewer asking you: what are buckets in Cloud Storage? Buckets are nothing but the basic containers that hold your data; everything that you store in Cloud Storage must be contained in a bucket. You can use buckets to organize your data
418:00 - 418:30 and to control access to your data, which means you decide who has access to it. You create a bucket by specifying a globally unique name for your bucket, a geographical location where the bucket and its contents are stored, and a default storage class. So this was about buckets in Cloud Storage.
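To tie objects and buckets together, here is a hypothetical sketch that writes one object into a bucket with the Python client library; the bucket and object names are illustrative assumptions:

    # Hypothetical sketch: store one small text object in a bucket.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("example-unique-bucket-name")  # illustrative name
    blob = bucket.blob("backups/notes.txt")               # object name in the bucket
    blob.upload_from_string("hello, cloud storage")       # the object data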
418:30 - 419:00 The next question we're going to discuss is: what types of GCP storage are available, and in which scenarios do we use them? We've already talked about Cloud Storage here, so let's move on to the other GCP storage services. GCP offers Google Drive, which can be used to store, manage, and share your personal files. Next we have Cloud Storage for Firebase, which helps you manage data in your mobile applications. The next storage service is Persistent Disk: this is block storage which can be attached to Compute Engine virtual machines. And last we have Filestore, which allows you to store files or create file-based workloads. So these were the GCP
419:00 - 419:30 storage services. Our next question is: what is object versioning in GCP? Object versioning is used to retrieve objects that are overwritten or deleted. Let's say I have updated a file in Cloud Storage: both the updated file and the version from before the update will be available to me, so if the updated file gets deleted by mistake, or if I want to check what the file was before the update, I can do that with the help of object versioning. One disadvantage of this is that it increases the storage cost,
419:30 - 420:00 but it provides protection for objects when they are deleted or overwritten: on enabling object versioning on a GCP bucket, a noncurrent version of the object is created every time the object is overwritten or deleted.
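Here is a hypothetical sketch of enabling object versioning from Python, with an illustrative bucket name:

    # Hypothetical sketch: turn on object versioning for an existing bucket.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("example-bucket")  # illustrative name
    bucket.versioning_enabled = True
    bucket.patch()  # persist the configuration change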
420:00 - 420:30 The next question in the storage and database section is: what are the libraries and tools for Cloud Storage on Google Cloud Platform? You can answer this question by mentioning the libraries and tools such as the Console, gsutil, the client libraries, and the REST APIs. The Console is nothing but the Google Cloud console, which provides a visual interface for you to manage your data and perform basic operations on objects and buckets. Next, gsutil is a command-line tool that allows you to interact with Cloud Storage through a terminal. Next we have the Cloud Storage client libraries, which allow you to manage your data using one of the preferred languages, which include C++, C#, Go, Java, Node.js, PHP, Python,
420:30 - 421:00 and Ruby. The last tool for accessing Cloud Storage on GCP is the REST APIs: you can manage your data using the JSON or XML APIs. So these were the libraries and tools for Cloud Storage on Google Cloud Platform. Now moving on to our next question: how can I maximize the availability of my data, or how can my important data be more secure and available to me? You can answer this by saying you can store your data in a multi-region or dual-region bucket location if
421:00 - 421:30 high availability is a top priority; this ensures that your data is stored in at least two geographically separated regions, which provides continued availability even in the rare event of a region-wide outage, including anything caused by natural disasters. So this is what GCP offers for higher availability of your data. Moving on to our next question: what is Cloud SQL? Cloud SQL is a fully managed database service that helps you set up, maintain, manage, and administer your
421:30 - 422:00 relational databases on Google Cloud Platform. You can also mention that you can use Cloud SQL with MySQL, PostgreSQL, or SQL Server; Cloud SQL is one of the core database services in Google Cloud Platform.
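For hands-on flavor, here is a hypothetical sketch of connecting to a Cloud SQL for MySQL instance with the cloud-sql-python-connector package (plus the pymysql driver); every identifier here (project, region, instance, user, password, database) is an illustrative assumption:

    # Hypothetical sketch: query a Cloud SQL (MySQL) instance from Python.
    from google.cloud.sql.connector import Connector

    connector = Connector()
    conn = connector.connect(
        "example-project:us-central1:example-instance",  # instance connection name
        "pymysql",                                       # MySQL driver to use
        user="app-user",
        password="change-me",
        db="inventory",
    )
    with conn.cursor() as cursor:
        cursor.execute("SELECT NOW()")  # trivial round trip to prove connectivity
        print(cursor.fetchone())
    connector.close()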
422:00 - 422:30 The next question is: how would you choose the right Google Cloud database service? While this would definitely depend on the requirements, you can select from any of these options: Cloud SQL, Cloud Spanner, Cloud Firestore, Datastore, Cloud Bigtable, or Cloud Memorystore. You can choose Cloud SQL when you need relational database capacity but do not need storage capacity over 10 TB or more than 4,000 concurrent connections. Next, you can select Cloud Spanner when you plan to use large amounts of data, typically more than 10 TB, and need transactional consistency. Next, you can use Cloud Firestore or Datastore when you plan to focus on application development and need live synchronization and offline support. The
422:30 - 423:00 next option is Cloud Bigtable: Cloud Bigtable is a good option if you're working with large amounts of single-key data, and in particular it is good for low-latency, high-throughput workloads. And the last option is Cloud Memorystore, which would be a good option if you're working with key-value datasets and your primary concern is transactional latency. These are some of the Google Cloud database services. So now let us move on to the next question, which is a scenario-based question. The question is: can my
423:00 - 423:30 App Engine app in one region access a Cloud SQL instance which is present in a different region? The simple answer for this is yes: if you're connecting to a MySQL instance, your App Engine application does not need to be in the same region, and it can be running in either the standard or the flexible environment. However, a larger distance between your Cloud SQL instance and your App Engine application causes greater latency for connections to the database; latency is nothing but a delay in the transmission of data. So the next
423:30 - 424:00 question in the storage and database section is: can I import or export a specific database in Google Cloud Platform? The answer for this is also yes: for MySQL instances you can import and export either a single database or multiple databases, while for PostgreSQL instances you can only import or export a single specific database. Now, these were some of the questions on storage and database services in GCP; let us move on to the next set of questions, on networking. The first
424:00 - 424:30 question in this section is: what is Google Cloud VPC? Now, if you're applying for any Google Cloud job, you're expected to know the answer. A virtual private cloud in GCP is a virtual network that provides connectivity to all your virtual machine instances, whether they are Compute Engine instances, Google Kubernetes Engine clusters, App Engine flexible environment instances, or any other Google Cloud products built on Compute Engine instances. You don't have to talk in detail about VPC; you can just define it
424:30 - 425:00 so the interviewer knows that you have some knowledge about VPC. The next question is: how is Google's VPC different from the VPCs of other cloud service providers? As you can see in the diagram, in a traditional VPC, such as the VPC provided by other cloud service providers like AWS, the architecture would look something like this: in the first diagram there are two VPCs built with two different subnets in two different regions, us-west and us-east. The virtual machine in one region can access the internet and
425:00 - 425:30 communicate with the other virtual machine only through the VPC gateway, and this gateway acts as an interface; so in a traditional VPC, one virtual machine cannot directly communicate with the other virtual machine. Now, in Google's version of the virtual private cloud, a VPC follows a global construct, which means that instead of creating one VPC in us-east and another in us-west, you can just create one VPC and put the subnets in different regions within that VPC. In this case, the virtual machines present in one region
425:30 - 426:00 can directly communicate with the virtual machines in the other region without the help of a VPN gateway. Now I guess you have some idea about how Google's VPC is different from the VPCs of other cloud service providers; if you have understood the concept, you can put the answer in your own words and explain it. The next question in networking is: what are routes and firewall rules? When we talk about VPC, this question comes tagged along. A route tells the virtual machine instances and the VPC network how to send traffic from an
426:00 - 426:30 instance to a destination; this destination can be either inside the network or outside of Google Cloud, which typically means the internet. Next, firewall rules are rules which allow you to control which packets can travel to which destination: they let you allow or deny connections to and from your virtual machine instances based on the configuration that you specify. So this was about routes and firewall rules in Google Cloud networking. Our next question is: what is load balancing?
426:30 - 427:00 This is a frequently asked question in many GCP interviews. Load balancing is the process of distributing computing resources and workloads in a cloud computing environment in order to manage demand by spreading the load. Load balancing reduces the risk that your application will experience performance issues, and by using Cloud Load Balancing you can serve content as close as possible to your users. You can also mention the point that Cloud Load Balancing is a fully distributed, software-defined managed service; it is not hardware based,
427:00 - 427:30 so you don't have to manage a physical load-balancing infrastructure. So this was about load balancing. Our next question is: what is Cloud DNS? Cloud DNS is a high-performance, resilient, global domain name system service that publishes your domain names to the global DNS in a cost-effective way. Now, DNS is nothing but a directory of easily readable domain names that translates website names into the numerical IP addresses which are used by computers
427:30 - 428:00 to communicate with each other. For example, when you type a URL in your browser, DNS converts the URL into the IP address of the web server associated with that name: www.example.com is translated to the IP address of the example.com web server. Now I guess you have some idea about DNS.
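You can watch that resolution happen yourself; the following snippet uses only Python's standard library (no GCP API involved) and is purely illustrative:

    # Resolve a hostname to an IP address, as a browser's DNS lookup does.
    import socket

    print(socket.gethostbyname("www.example.com"))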
428:00 - 428:30 Now let us move on to our next question, which is a scenario-based question: how can I connect my existing network to Google Cloud resources? You can answer this by saying that Google provides four options to do this: the first one is Cloud Interconnect, the second one is Cloud VPN, the third is direct peering, and the fourth one is carrier peering. Cloud Interconnect enables you to connect your existing network to a VPC network through a highly available, low-latency connection. You can choose Cloud VPN, which will enable you to connect your existing network to your VPC network through an IPsec connection. Next, direct peering enables you to exchange internet traffic between
428:30 - 429:00 your business network and Google at one of Google's broad-reaching edge network locations, and the fourth option, carrier peering, allows you to connect your infrastructure to Google's network edge through highly available, low-latency connections provided by the service providers. Moving on to our next question: describe some of the security aspects that the cloud offers. Some of the important security aspects the cloud offers are, first, access control: it offers the admin the control to decide the
429:00 - 429:30 access of the other users who are entering the cloud ecosystem. The next security aspect is identity management: this provides authorization for the application services. And third is authorization and authentication: this security feature lets only the authenticated and authorized users access certain applications and data. These were some of the important security aspects that the cloud offers. Moving on to our next question: list some of the GCP
429:30 - 430:00 security services. GCP security services include Cloud Security Command Center, Cloud Armor, and Cloud Identity. Cloud Security Command Center is the tool that lets users view and monitor their cloud assets, and it provides important security support functions like storage system scanning, vulnerability detection, and access permission reviews. Next, Cloud Armor is a DDoS and application defense service; it is built using the same technology and infrastructure that Google relies on
430:00 - 430:30 to protect its own services, including Google Search, Gmail, and also YouTube. The third security service is Cloud Identity: this service controls and defines the users and groups and the GCP resources they have access to. Now, these were some of the GCP security services, so let us move on to the last set of questions, on other GCP services; these other GCP services include big data, Internet of Things, and Google Cloud artificial
430:30 - 431:00 intelligence. The first question in this section is: what is Google BigQuery? BigQuery is Google Cloud's fully managed, petabyte-scale, and cost-effective analytics data warehouse that lets you run analytics over vast amounts of data in near real time; you can say Google BigQuery is a replacement for the hardware setup of a traditional data warehouse. We can also mention how BigQuery organizes its data: BigQuery organizes data tables into units that are known as datasets.
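Here is a hypothetical sketch of running a query from Python against one of BigQuery's public datasets; the query itself is illustrative:

    # Hypothetical sketch: run a SQL query with the BigQuery client library.
    from google.cloud import bigquery

    client = bigquery.Client()  # uses the default project's credentials
    query = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        GROUP BY name
        ORDER BY total DESC
        LIMIT 5
    """
    for row in client.query(query).result():  # result() waits for the job to finish
        print(row.name, row.total)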
431:00 - 431:30 Moving on to our next question: what are the big data services offered by Google Cloud Platform? Some of the services are Google Cloud BigQuery, Google Cloud Dataflow, Google Cloud Dataproc, Google Cloud Pub/Sub, Google Cloud Composer, Google Cloud Bigtable, and Google Cloud Data Catalog. Moving on to our next question: what is Google Cloud Dataflow? This is one of GCP's important big data services. You can answer this
431:30 - 432:00 question by saying Google Cloud Dataflow is a managed service for executing a wide range of data processing patterns; it provides a managed service and a set of SDKs that you can use to perform batch and streaming data processing tasks. It works well for high-volume computation, especially when the processing tasks can clearly and easily be divided into parallel workloads. Next, moving on to our next question: what is Cloud AutoML? This is one of GCP's machine learning services.
432:00 - 432:30 Cloud AutoML is a service that enables developers with limited machine learning and programming expertise to train high-quality models specific to their needs. You can use AutoML to build on Google's machine learning capabilities to create your own custom machine learning models that are tailored to your business needs, and then integrate those models into your application or website, or both. So this was about Cloud AutoML; let us move on to the next question. Our next question is: explain the Google Cloud AI Platform. AI
432:30 - 433:00 Platform is a suite of services on Google Cloud specifically targeted at building, deploying, and managing machine learning models in the cloud. AI Platform provides the services you need to train and evaluate your machine learning models in the cloud, and it is integrated with several easy-to-use tools, like BigQuery and the Data Labeling Service, to help you build and run your own machine learning applications quickly: you can store and manage large amounts of data with BigQuery and then prepare, or label, this
433:00 - 433:30 data for model training using the Data Labeling Service. So this was about the Google Cloud AI Platform. Now, our next question is: what is Cloud IoT Core? Cloud IoT Core is a fully managed service that allows you to easily and securely connect, manage, and store data from millions of devices which are spread globally. It provides a complete solution for collecting, processing, analyzing, and visualizing IoT data in real time to support improved operational
433:30 - 434:00 efficiency. So this was about Cloud IoT Core. Now let us see the next question. Our next question is: what service would you use for text analytics in Google Cloud Platform? The service which is used for text analytics in Google Cloud Platform is Cloud Natural Language. Natural Language AI enables you to analyze text and also integrate the analysis with your document storage on Cloud Storage; you can extract information about people, places, and events, and gain a better understanding of social media sentiment and customer conversations.
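As a hypothetical sketch of that sentiment analysis in Python (the sample sentence is illustrative):

    # Hypothetical sketch: score the sentiment of a short piece of text.
    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content="The course was clear and really well paced.",  # illustrative text
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    sentiment = client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    print(f"score={sentiment.score:.2f} magnitude={sentiment.magnitude:.2f}")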
434:00 - 434:30 So these were some of the important and frequently asked GCP questions. Now let us move on to our next topic and see what skills are required to become a cloud engineer. A cloud engineer is an IT professional responsible for performing the technological duties associated with cloud computing; they are mainly responsible for the maintenance and support, management, planning, and design of an infrastructure on a cloud platform. So
434:30 - 435:00 now, talking about the skills required to become a cloud engineer, the first skill is having knowledge about a cloud service provider. If you're thinking of taking the cloud computing career path, spend some time familiarizing yourself with at least one of the cloud service providers; the top three cloud service providers are Amazon Web Services, Microsoft Azure, and Google Cloud Platform. Now, if you want to see a comparison between these three cloud service providers, I will leave a link to that video in the description box below. These cloud service providers offer
435:00 - 435:30 end-to-end services like compute, storage, databases, ML, migration, and many more; this includes almost everything that is related to cloud computing, and it makes this one of the important cloud engineer skills. The next skill is programming. This is one of the important cloud engineer skills: proficiency in programming languages is essential for scaling web applications. Some of the programming languages you should be familiar with are PHP, Java, .NET, SQL, Python, and Ruby. You can learn all of
435:30 - 436:00 these languages with the help of blogs, online videos, or either offline or online classes, but the most important of all is hands-on practice. Now, talking about the third skill required to become a cloud engineer, it is having knowledge of the important cloud service domains. These cloud service domains include compute, which involves virtualization and serverless computing; next is cloud storage, which deals with data and information stored in the cloud; and after that we have networking
436:00 - 436:30 and security. Now, a cloud engineer should be familiar with the basic cloud networking concepts and with network security, which includes encryption, authorization, and different protocols. Moving on to our next skill, it is web services and APIs. Cloud architectures are heavily based on APIs and web services, because web services provide developers with methods of integrating web applications over the internet: the XML, SOAP, WSDL, and UDDI open standards are used to tag data, transfer
436:30 - 437:00 data, and describe and list the services available, and you also need APIs to get the required integration done. Now, having experience of working on websites, and the related knowledge, will help you have a strong core in developing cloud architectures. The next skill is Linux. Linux is an open-source operating system that can be customized to meet business needs; Linux has been increasingly adopted by many cloud service providers because of its various benefits, so as a cloud engineer you
437:00 - 437:30 should be able to architect, administer, and maintain Linux-based servers. Talking about the next skill required to become a cloud engineer, it is DevOps. DevOps brings the development and operations approaches together in one place, easing work dependencies and filling in the gap between the two teams. The DevOps approach has been increasingly adopted by many top companies and gels really well with most of the cloud service providers. Now, these were some of the important skills required to become a cloud engineer, and with this we have
437:30 - 438:00 come to the end of our session. I hope it was helpful. Happy learning! I hope you have enjoyed listening to this video; please be kind enough to like it, and you can comment with any of your doubts and queries and we will reply to them at the earliest. Do look out for more videos in our playlist, and subscribe to the Edureka channel to learn more. Happy learning!