Advanced Resource Management Techniques in Cloud-Fog-Edge Paradigms
Lecture 43 Resource Management - II
Summary
In Lecture 43, the focus is on advanced resource management in the Cloud-Fog-Edge computing paradigm. The lecture explores the different parameters affecting resource management, particularly in fog and edge computing, which are more resource-constrained than cloud computing. It examines service placement, resource allocation, and optimization strategies to enhance system efficiency across various application scenarios. The discussion includes case studies on workloads like face recognition and caching mechanisms, highlighting methods for distributing computational tasks from the cloud to the fog and edge layers. The lecture emphasizes the importance of control mechanisms, both centralized and distributed, of system software and middleware for effective resource management, and of algorithms in achieving optimal performance and load balancing.
Highlights
The lecture continues the discussion on resource management, focusing on fog and edge computing because of their resource constraints.
Cloud-Fog-Edge systems apply to domains like healthcare and traffic management for better data handling.
Service placement and resource allocation strategies differ between cloud, fog, and edge to optimize efficiency.
Example scenarios include video analytics and face recognition, showing how tasks are distributed between layers.
Considerations like latency, bandwidth, scalability, and mobility awareness are crucial for system efficiency.
Control mechanisms are discussed, covering both centralized and distributed approaches to managing resources.
Algorithms are key to enhancing system performance, load balancing, and resource discovery.
Middleware and system software ensure smooth operation and reliability in a Cloud-Fog-Edge paradigm.
Key Takeaways
Understanding the Cloud-Fog-Edge paradigm is crucial for efficient resource management.
Fog and edge computing operate under tighter resource constraints than traditional cloud computing.
Service placement is critical for optimizing performance and efficiency.
Control mechanisms can be centralized or distributed based on application needs.
System software and middleware play key roles in managing resources and maintaining scalability.
Algorithms are vital for load balancing, resource discovery, and optimization.
Sensor and IoT devices extend computing capabilities to the edge.
Careful resource management leads to lower latency and better bandwidth utilization.
Overview
In this engaging lecture, we delve into the intricacies of resource management within the Cloud-Fog-Edge paradigm, a multifaceted approach crucial for modern computing environments. Our journey begins with an exploration of the unique challenges posed by fog and edge computing due to their inherent resource constraints. These layers, while essential, demand innovative management strategies to function optimally alongside traditional cloud infrastructure.
By dissecting case studies such as face recognition and video analytics, we uncover how service placement and resource allocation can drastically enhance performance. The lecture illuminates a variety of optimization strategies, aiming to achieve seamless operation across cloud, fog, and edge layers, thereby catering to an array of application scenarios from healthcare to disaster management. These examples underscore the importance of strategic resource distribution to meet diverse computing needs.
Further, the discussion takes us through the role of control mechanisms, system software, middleware, and the pivotal algorithms that drive efficiency in these systems. Whether through centralized or distributed control models, the lecture emphasizes maintaining scalability, ensuring low latency, and optimizing bandwidth. It's a comprehensive dive into the technological backbone that holds promise for future innovations in managing networked resources.
Chapters
00:00 - 01:00: Introduction to Resource Management in Cloud-Fog-Edge Paradigm The chapter introduces the concept of resource management within the Cloud-Fog-Edge computing paradigm. It serves as a continuation of the previous lecture's discussion, aiming to deepen the understanding of how resources are managed across these interconnected layers.
01:00 - 02:30: Key Considerations in Cloud-Fog-Edge Paradigm The chapter explores key considerations in the Cloud-Fog-Edge paradigm, emphasizing resource management. It focuses on understanding how different parameters influence the mechanism within this paradigm, particularly from the fog and edge perspective, which are more resource-constrained compared to the cloud.
02:30 - 04:00: Service and Data Offloading The chapter discusses the concept of service and data offloading in the context of layered systems such as cloud, fog, and edge computing. The primary goal of such systems is to enhance efficiency across sectors including healthcare, traffic management, vehicular networks, and disaster management. This involves processes like data acquisition, inference, and delivery to end users, aiming to optimize overall system performance.
04:00 - 05:30: Introduction to the Service Placement Problem This chapter introduces the Service Placement Problem, emphasizing the relationship between different devices and their capabilities. The focus is on the idea that devices, whether smart or traditional, often have surplus resources such as CPU and memory. These resources can be leveraged to execute various tasks, thereby addressing the Service Placement Problem efficiently. The narrative highlights the role of resource-rich devices in enhancing the overall service execution process.
05:30 - 10:00: Case Study: Video Capture and Analytics System The chapter focuses on the case study of a Video Capture and Analytics System. It covers various aspects such as the continuous extension of services, data, and considerations related to hardware, software, and algorithms. Keywords and topics previously covered may also be revisited in this context.
10:00 - 12:00: Application Placement Problem The chapter covers the Application Placement Problem, which deals with the optimal placement of resources across edge devices, fog devices, and the cloud. It emphasizes the role of the fog orchestration engine in managing these placements.
12:00 - 14:00: Optimization Strategies for Resource Management This chapter discusses optimization strategies for resource management. It focuses on achieving efficient load distribution to avoid skewed or uneven allocation of resources. The chapter emphasizes optimizing both time and energy utilization to enhance overall efficiency.
14:00 - 16:00: Offloading Techniques and Control Mechanisms The chapter titled 'Offloading Techniques and Control Mechanisms' discusses the architecture involved in processing data using various devices dispersed across different levels of a network. At one end, there are user devices, and at the other end, cloud data centers or clusters. In between, there are various intermediate devices which could include network devices with embedded resources, dedicated fog devices, or data caching nodes. These elements collectively work to facilitate effective data processing and management across the network.
16:00 - 19:00: Hardware and System Software in Fog-Edge Computing The chapter discusses the service placement problem in fog and edge computing environments. It contrasts with traditional cloud-only arrangements, explaining that in fog and edge computing, applications can be distributed across various nodes instead of being centralized in the cloud. This distribution aims to improve processing efficiency and reduce latency. The material builds on concepts discussed in previous lectures, focusing on optimizing how tasks are distributed in these computing frameworks.
19:00 - 21:00: Middleware and Algorithms in Fog-Edge Computing This chapter discusses the placement of services within the fog and edge layers in computing systems, highlighting that the decision varies according to different applications. Some computations may occur at the cloud, while others could be processed closer to the edge. The chapter emphasizes the importance of virtualized resources within fog computing, providing computational power and storage as needed.
21:00 - 25:00: Conclusion and Future Directions The chapter titled 'Conclusion and Future Directions' focuses on the integration and interoperation between end-user devices and cloud servers. It highlights the importance of an intermediate layer that facilitates compatibility between these two layers, especially with the presence of edge and IoT devices. The discussion emphasizes the need for a virtualized platform to enable this collaboration effectively.
Lecture 43 Resource Management - II Transcription
00:00 - 00:30 Hello. Let us continue our discussion on resource management in the cloud-fog-edge paradigm; in this lecture we will be carrying forward what we were talking about in the previous lecture.
00:30 - 01:00 Our basic consideration here is: what are the different parameters that influence the overall mechanism of this cloud-fog-edge paradigm? More specifically, when we look at resource management, we focus on fog and edge, because these layers are more resource constrained, unlike the cloud. So let us recap quickly.
01:00 - 01:30 Our overall consideration is that, in having this multilayer arrangement, with the cloud at the top and then fog and edge, we want better efficiency of the overall system, be it a healthcare system, traffic management, vehicular networks, or even disaster management: any scenario where we require data acquisition and gathering, inference drawing, and delivery to the end user,
01:30 - 02:00 or actuation of other devices, and where this combined setup must work faithfully. Another motivation is the availability of resource-rich end devices (smart devices or IoTs) and intermediate devices where surplus resources such as CPU and memory are available, so that we can leverage them.
02:00 - 02:30 Continuing in the same vein, we will look a little at service and data offloading, hardware and software considerations, algorithmic considerations, and so forth. The keywords also remain the same.
02:30 - 03:00 First, a quick recap of the previous slides so that we are in sync. In the resource placement problem, at one end we have the edge devices, that is, the different sensors and IoT devices, and at the other end the cloud; in between we have fog devices. We also require what is referred to here as a fog orchestration engine, or orchestrator,
03:00 - 03:30 which tries to ensure efficiency: the load should not be heavily skewed but rather equitably distributed, and the overall efficiency, in terms of time or of energy utilization, should be optimized.
03:30 - 04:00 So we have a variety of devices at one end; at the cloud end we usually have data centers, or clusters of data centers; and in between we have a variety of devices, primarily intermediate network devices that have spare resources, dedicated fog devices, and data caching nodes.
04:00 - 04:30 Now, the service placement problem, more specifically for fog and edge. As we saw in the last lecture, in a cloud-only situation, whatever you do goes to the cloud and gets executed there. What we are looking at now is that, given an overall application, it can be distributed
04:30 - 05:00 over the fog layer and the edge layer. In order to do that, which service is to be placed where is a big consideration, and it also varies from application to application: some things may need to be calculated at the cloud end itself, some could be calculated at the edge, and so on down the line. Fog computing, as you see, is highly virtualized; it offers computational resources, storage,
05:00 - 05:30 and control between the end user and the cloud server. It is an intermediate layer, as we discussed in the last lectures, which basically provides compatibility and interoperation between the two layers: the cloud on one side, and the end users with their edge and IoT devices on the other. So it needs to be a virtualized platform.
05:30 - 06:00 What we see is that our conceptually centralized cloud now needs to work with this sort of distributed edge devices: some of the analysis has to be carried out in a centralized fashion, while some of the functionality can be done at a much lower level.
06:00 - 06:30 When we go for distribution of resources, we have several considerations. One of the most prominent is location awareness: where things are. In a vehicular network, for example, the vehicle needs to communicate with the nearest RSU (roadside unit) while on the move. Likewise, an
06:30 - 07:00 ambulance on the move needs to communicate patient-related health information to the concerned hospitals or health care centers, so the data has to go to intermediate fog devices that are in the vicinity; the system needs to be location aware on both sides. If I know the mobility pattern of the ambulance, then I can
07:00 - 07:30 basically predict where it is likely to be and where the data transmission and uploading requirements will arise. Of course, one of our broad goals in doing this is low latency, so that the overall latency is optimized.
07:30 - 08:00 Another aspect is better bandwidth utilization: in doing so, I should again be using the bandwidth optimally. The system should also be scalable; scalability is one of the major aspects of the paradigm we started this whole discussion with in cloud computing, and if we compromise it here, the approach may not make sense for users at large.
08:00 - 08:30 And in today's world, where mobility is the order of the day, the system should be mobility aware, or should have proper support for mobility, whether it is a vehicular network or a person moving from one place
08:30 - 09:00 to another, or some other type of situation. Mobility is an inherent thing, which means the whole paradigm should be aware that the underlying devices can be mobile, and that even the fog devices may not be very static; nevertheless we still consider the cloud to be somewhat centralized and static.
09:00 - 09:30 Again, as I mentioned in my last lecture, we are referring to a recent survey work which accumulates different research threads; I also encourage you to refer to this paper, which is available on the internet and very nicely written. If we look at the service placement problem and
09:30 - 10:00 try to segregate it, there are four major groups the paper refers to: the problem statement, the basic taxonomy of service placement, the different optimization strategies, and the evaluation environment. Looking at the problem itself, it covers the infrastructure model, the application model, and the deployment pattern: how the overall problem is
10:00 - 10:30 structured. Under the service placement taxonomy we have how the control plane is designed, what the placement characteristics are, the overall system dynamicity, and of course mobility support. When we look at the optimization strategies: what are my optimization objectives? For any optimization problem I need to look at the
10:30 - 11:00 optimization options, the metrics for them, how I can accordingly formulate the problem, and what the resolution strategies are. Similarly, for the evaluation environment: analytical tools, whether I have the support of simulators, and experimental test beds. Now, one deployment example referred to in this particular paper is worth looking at for more clarity.
11:00 - 11:30 It is a video capture, analytics, and recognition system. It captures the vision and then goes for coarse feature extraction: extracting the coarse features from the input. Here they refer to a Google Glass, so components a and b may
11:30 - 12:00 be done on this particular device itself. Then it goes for face recognition: I want to do face recognition on the image taken through the Google Glass. Face recognition requires more resources, which the Google Glass may not be able to provide, so we fall back on a smart Wi-Fi gateway. It can be at the fog level, or, if I
12:00 - 12:30 consider the IoT and sensor devices themselves as the bottom layer, we can regard it as being at the edge as well. As I told you, different people look at it in different ways; the point is that the computation is no longer on the device itself but outside it. Here it is actually done on a laptop connected to another device, so it can be considered an
12:30 - 13:00 edge computing platform that supports this through a network device. The smart Wi-Fi gateway is the gateway through which the traffic is moved to this device. I could have pushed it to the cloud for recognition, but it can be done at a much lower level, even on this device had it been more resourceful; in any case, face recognition is not such a straightforward job that we can run it on a device like a Wi-Fi gateway.
13:00 - 13:30 There are other stages too, like object recognition. Let us look at the left side: we require vision capture, then coarse feature extraction, and that feeds two or three things. One is face recognition, if there are faces; there is object
13:30 - 14:00 recognition, if there are objects; and there is OCR, or character recognition, if there is written text. Those who have worked on these know that each of the three is a large system-building domain in its own right. Once this is done, we need a learning-based activity
14:00 - 14:30 inference, and with that inference we want to render and produce an effective display. So what is happening: the system is capturing, recognizing, rendering, and putting up an appropriate display, so that we get a useful output. This particular application is not the target here; the point is, if
14:30 - 15:00 an application has these types of components, how can we map them onto our edge-fog-cloud layers? As we see, components a and b, the vision capture and the coarse feature extraction, can sit on this particular device itself, and then we have component c here in this local loop. I think there is a typo
15:00 - 15:30 here: it should be h (it was mistyped as i); that is the component which finally renders and effects the display out here on the device itself. Nevertheless, these components can be worked out as follows: I can place component c on a local laptop connected to the smart gateway, a local computing server so to say, and the rendering process can be done out here as well.
15:30 - 16:00 The learning-based activity inference, on the other hand, has to be done at the back-end cloud, for two reasons. One is that the computing need may be much higher than what these devices can handle; the other is that it may require data from other sources. That may not
16:00 - 16:30 always be the case, but in some instances I require data from other sources, maybe some models or some other external data that help in that particular process, and that has to be in the cloud. Object recognition requires somewhat more resources still; it can be done on some sort of LAN router, or there can be a cloudlet, which people
16:30 - 17:00 used rather more when fog and edge were not yet much in the picture, and the cloudlet does that job. Nevertheless, we can have the OCR-type work on a cloudlet, or on some other computing device: the device doing face recognition can do the OCR recognition as well. Having done all those things, finally we
17:00 - 17:30 have the rendering, which in this case the smart Wi-Fi gateway does, and finally the display effect (the component that should have been labelled h) is done out here on the device. So what we are trying to show is that a particular job, from the capture to the
17:30 - 18:00 display effect, has several components; this is a very typical example scenario. These components could be divided up in different ways: one way is that I could have pushed everything to the cloud, and the cloud would have computed things and brought them back. In this case we instead distribute them over the different layers: some at the edge, some at the fog, some pushed to the cloud. What we try to achieve is better efficiency, lower latency,
18:00 - 18:30 and better scalability of the whole system. It may also support mobility in a better way: if the person with the Google Glass moves from one place to another, another Wi-Fi router can take over and carry on the work. Nevertheless, dividing the application, placing its different activities into different layers, is one of the major tasks.
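To make the mapping concrete, here is a minimal sketch of the kind of greedy rule such a placement might follow: put each pipeline stage on the nearest layer that still has spare capacity. The stage list, resource numbers, and layer capacities are illustrative assumptions, not figures from the lecture or the survey paper.

```python
# Minimal sketch (illustrative only): greedily place pipeline stages on the
# lowest layer with enough capacity, mirroring the Google Glass example's
# device -> gateway -> cloudlet -> cloud hierarchy. All numbers are assumed.

STAGES = [  # (name, cpu_units, mem_mb)
    ("vision_capture",      1,   50),
    ("coarse_features",     2,  100),
    ("face_recognition",    8,  500),
    ("object_recognition", 12,  800),
    ("ocr",                 6,  400),
    ("activity_inference", 20, 2000),
    ("rendering",           2,  100),
    ("display_effect",      1,   50),
]

LAYERS = [  # nearest first; (name, cpu_capacity, mem_capacity_mb)
    ("device",   4,   200),
    ("gateway",  8,   800),
    ("cloudlet", 16, 2000),
    ("cloud", 10**6, 10**9),  # effectively unbounded
]

def place(stages, layers):
    """Assign each stage to the nearest layer with spare CPU and memory."""
    free = {name: [cpu, mem] for name, cpu, mem in layers}
    plan = {}
    for stage, cpu, mem in stages:
        for layer, _, _ in layers:               # try nearest layer first
            if free[layer][0] >= cpu and free[layer][1] >= mem:
                free[layer][0] -= cpu            # reserve the resources
                free[layer][1] -= mem
                plan[stage] = layer
                break
    return plan

for stage, layer in place(STAGES, LAYERS).items():
    print(f"{stage:>18} -> {layer}")
```

Run as-is, the heavy inference stage falls through to the cloud while capture and display stay on the device, which is the shape of the assignment the lecture describes.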
18:30 - 19:00 So, the application placement problem defines the mapping pattern by which application components and their links are mapped onto the infrastructure graph: I have an infrastructure graph of computing devices, physical edges, and so on, and how the application is mapped onto it is what we want to determine. Application placement means finding the available resources in the network; that is,
19:00 - 19:30 I need to find out where the resources are, and not only that, but how free they are. There are also application requirements which must be satisfied: suppose for face recognition I require a memory space of, say, minimum 500 MB or so, and the resource where you want to run it
19:30 - 20:00 does not have that much free space; then it will not satisfy the requirement. So we require some sort of overall manager, some sort of orchestration engine, which tries to handle these things; it can be centralized, or it can be distributed and collaborative, but it is required. The service provider has to take such constraints into account, like limits
20:00 - 20:30 on space, while providing an optimal or near-optimal placement. So there are several kinds of constraints. One is resource-level constraints: the infrastructure is limited by finite capability in terms of CPU, RAM, storage, bandwidth, etc. Then there are network-level constraints, such as latency, bandwidth, and so forth. And there can be
20:30 - 21:00 application-level constraints, like locality requirements that restrict certain services to execute in specific locations, or delay sensitivity, where an application has a specific deadline within which it must be done. Some computations I may want done strictly within my control area: for example, I can say that even if
21:00 - 21:30 something is to be distributed, this application cannot leave, say, the IIT Kharagpur network or the Computer Science departmental network; whatever has to be computed has to be computed here. Or some application is more time sensitive or delay sensitive, so I need to handle it appropriately. So there are several constraints, and, as you may remember, this type of constraint appears in other placement problems as well.
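As a small sketch of how those three constraint families might be checked before a candidate node is accepted (the class fields, domain names, and numbers below are assumptions for illustration, not an API from the paper):

```python
# Check resource-level, network-level, and application-level (locality,
# deadline) constraints for a candidate placement. Illustrative sketch only.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    domain: str          # e.g. "cs-dept", "campus", "public-cloud"
    free_mem_mb: int
    free_cpu: int
    latency_ms: float    # measured RTT from the data source to this node

@dataclass
class Service:
    name: str
    mem_mb: int              # resource-level requirement
    cpu: int
    max_latency_ms: float    # network-level / delay-sensitivity bound
    allowed_domains: set     # application-level locality restriction

def feasible(svc: Service, node: Node) -> bool:
    """True only if the node violates none of the service's constraints."""
    if node.free_mem_mb < svc.mem_mb or node.free_cpu < svc.cpu:
        return False                      # resource-level constraint
    if node.latency_ms > svc.max_latency_ms:
        return False                      # network-level constraint
    if node.domain not in svc.allowed_domains:
        return False                      # application-level (locality)
    return True

# e.g. face recognition needing 500 MB that must stay inside the department:
svc = Service("face_recognition", 500, 2, 50.0, {"cs-dept"})
node = Node("lab-server", "cs-dept", 2048, 8, 12.0)
print(feasible(svc, node))   # True
```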
21:30 - 22:00 If you refer to this particular paper, you will see that the service placement taxonomy covers the control plane design, that is, how overall control is to be managed, the placement characteristics, system dynamicity, and mobility support, as we were discussing at the
22:00 - 22:30 beginning. If we look at the optimization strategies, we have already discussed some of this: our objectives are latency, resource utilization, cost, and energy consumption. These are the typical optimization objectives, not only here but in any distributed environment. Based on them, your heuristics or algorithms need to be designed:
22:30 - 23:00 how can I optimize the overall service and application placement so that we have optimal service in the cloud-fog-edge environment?
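One generic way to write such a multi-objective placement problem down, as a sketch rather than the survey's exact formulation, is a weighted sum over a candidate placement x:

```latex
\min_{x}\; w_{L}\,L(x) \;+\; w_{U}\,U(x) \;+\; w_{C}\,C(x) \;+\; w_{E}\,E(x)
\qquad \text{s.t.} \qquad
\sum_{s\,:\,x(s)=n} r_{s} \;\le\; R_{n} \quad \forall\, n
```

Here L, U, C, and E are the latency, resource-utilization, cost, and energy terms, x(s) is the node chosen for service s, r_s is its resource demand, and R_n is the capacity of node n; the weights encode which objective dominates, and the heuristics search over x.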
23:00 - 23:30 When we talk about all of this, what we are primarily doing is offloading some of the work: something I could have done here, I am offloading somewhere else. Everything could have been done in the cloud; instead we offload some of it onto the fog and some onto the edge in order to attain that efficiency. Offloading, as we know, is a technique in which a server, an application, and its associated data are moved onto the edge of the network.
23:30 - 24:00 Primarily it augments the computing capability of an individual user device, or a collection of them, by bringing the cloud services that process requests from the device closer to it. If we look at it, it has two components. One is from user device to edge: augmenting the computing in the user device by making use of edge
24:00 - 24:30 nodes; usually user-to-edge is a single hop, that is, one hop away, the device it connects to next. The other is from the cloud to the edge device: the workload is moved from the cloud to the edge. That may be server offloading, and there can be caching mechanisms which help us in doing this offloading.
24:30 - 25:00 Offloading is a well-studied, well-researched area: how can I offload, and how much can I offload? Two things are important when I am offloading: I may be offloading the whole thing, or I may be offloading a part of it. Once I offload, I also need to
25:00 - 25:30 assemble the results and outputs again: I need to aggregate those things, so that too comes into play. So this is the big picture: offloading goes from device to edge and from cloud to edge.
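A minimal sketch of the offload-or-not test implied here, with all device and network parameters assumed for illustration: offloading pays off when shipping the input, computing remotely, and bringing the result back for aggregation together beat local execution. Partial offloading applies the same test per component.

```python
# Classic offload-or-not test (illustrative parameters): offload when the
# total of upload, remote compute, and download time beats local compute.

def should_offload(cycles, f_local, f_remote,
                   in_bits, out_bits, up_bps, down_bps):
    t_local = cycles / f_local                      # run on the device
    t_offload = (in_bits / up_bps                   # ship input to the edge
                 + cycles / f_remote                # remote execution
                 + out_bits / down_bps)             # return result to merge
    return t_offload < t_local

# A 10-gigacycle recognition task: weak wearable vs. a nearby edge node.
print(should_offload(cycles=1e10, f_local=1e9, f_remote=1e10,
                     in_bits=8e6, out_bits=8e4,
                     up_bps=2e7, down_bps=5e7))    # True: offloading wins
```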
25:30 - 26:00 Application partitioning and caching mechanisms are the popular techniques when we go from the device to the edge, that is, from user devices to the edge, and the two directions have different characteristics and different methods of approach. From the cloud to the edge we have server offloading: it can be replication of the server onto some edge devices, or it can be partitioning of the
26:00 - 26:30 server across different edge devices, or of some part of it. We also have caching mechanisms, whether content-popularity-based or multi-layer-based.
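As an illustration of the content-popularity idea (the admission and eviction policy here is one simple assumed variant, not a scheme from the lecture): keep the most-requested items at the edge and fetch the rest from the cloud.

```python
# Content-popularity-based edge cache (sketch): keep the k most-requested
# items at the edge; misses go to the cloud. Parameters are assumed.

from collections import Counter

class PopularityCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.hits = Counter()   # request counts = popularity estimate
        self.store = {}         # item -> content held at the edge

    def get(self, item, fetch_from_cloud):
        self.hits[item] += 1
        if item in self.store:
            return self.store[item]            # served at the edge
        content = fetch_from_cloud(item)       # miss: go to the cloud
        # admit the item only if it is now among the top-k most popular
        top = {k for k, _ in self.hits.most_common(self.capacity)}
        if item in top:
            if len(self.store) >= self.capacity:
                victim = min(self.store, key=self.hits.__getitem__)
                del self.store[victim]         # evict the least popular
            self.store[item] = content
        return content

cache = PopularityCache(capacity=2)
for name in ["a", "b", "a", "c", "a", "b"]:
    cache.get(name, fetch_from_cloud=lambda n: f"<{n}>")
print(sorted(cache.store))   # the two most popular items stay at the edge
```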
26:30 - 27:00 Another aspect of this overall scenario is control: how the control is managed, whether it is centralized or distributed. Two options are there: the overall management can be done in a centralized way, or it can be a distributed way of managing things, and both have different characteristics. Solver-based approaches or graph-matching-based approaches are used when you go centralized, whereas in the distributed case there are blockchain-based, game-
27:00 - 27:30 theoretic, or genetic-algorithm-based approaches. We are not going into the nitty-gritty of these; the first point is simply that there needs to be a control mechanism, otherwise we cannot manage this sort of service placement on the cloud-fog-edge, and it can be a centralized or a distributed control mechanism.
27:30 - 28:00 Now, two aspects always go hand in hand: one is the hardware part, the hardware infrastructure, and the other is the software, or rather the system software, part, apart from the application software and so on that is running. Fog-edge computing forms a computing environment that uses low-power devices;
28:00 - 28:30 these can be mobile devices, routers, gateways, home systems, and the like. In other words, what we are trying to leverage is that something is already there with excess resources, so why can we not use it? So, low-power, low-resource devices. A combination of these so-called low- or small-form-factor devices connected
28:30 - 29:00 through the network enables a computing environment that can be leveraged by a rich set of applications processing internet, IoT, and CPS (cyber-physical system) data. A huge volume of data is being generated by IoT devices and different cyber-physical systems; along with the cloud, this battery of devices, with appropriate control mechanisms, service placement strategies, and so on, is what we want to put together. For that, our
29:00 - 29:30 hardware infrastructure should be compatible. One side is the computing devices: single-board computers or commodity products, where the hardware does the computing. The other side is the network devices, as we saw in the example a couple of slides back: gateways, routers, Wi-Fi access points, edge racks, and all those things.
29:30 - 30:00 The other part is the system software for this fog-edge, or I should say cloud-fog-edge, paradigm, which manages the computing, network, and storage resources. Hardware is one part, and we require system software which manages the whole show. Such systems often need to support multi-tenancy and isolation; that is one of the requirements, if you remember,
30:00 - 30:30 that we also discussed in our earlier lectures on cloud computing infrastructure, especially when looking at IaaS-type deployments. There are two broad categories. One is system virtualization, the system-level virtualization we have seen earlier; the other is network virtualization. In the overall picture, system virtualization covers virtual machines
30:30 - 31:00 and containers (we will take up containers in a little more detail later), along with VM and container migration, which is another big aspect for the research community: how to migrate in an appropriate way. For network virtualization we have SDN, software-defined networking, and network function virtualization. This is also
31:00 - 31:30 another big area of research in the virtualization world. As you know, SDN is probably catching on in a big way these days: modern switches are mostly SDN-enabled, with the control plane and the data plane segregated for better overall network management. We also have overlay networks.
31:30 - 32:00 Another part that needs mention here is the middleware, which provides services complementary to the overall system. The system software handles the devices and enables virtualization at the system and network levels, whereas middleware in fog-edge computing provides performance monitoring, coordination, orchestration,
32:00 - 32:30 communication facilities, protocols, and so on. That is very important: though the system software plays, so to say, the big-brother role, this middleware actually supports it in achieving faithful, reliable, and scalable operations. There are different flavours, like volunteer edge computing, hierarchical fog-edge computing, mobile fog-edge computing, cloud orchestration management, and so on and so forth.
32:30 - 33:00 Having looked at the hardware infrastructure and the rest, another aspect is the algorithms: the algorithms used to facilitate fog-edge computing. These have four major components, so to say. One is discovery: identifying the edge resources within the network that can be used for distributed computing. Benchmarking is another important aspect: capturing the performance of resources for decision
33:00 - 33:30 making, to maximize the performance of what we are trying to do, placements, deployments, and so on; so there should be benchmarking of the resources at hand. Then load balancing: distributing workload across resources based on different criteria like priority, fairness, etc., so that it is not skewed. And finally placement: what should be the algorithm for placement, identifying the resources appropriate for deploying a workload, understanding
33:30 - 34:00 what the requirements of the workload are and, vis-a-vis that, what your resources offer. For this we require different sets of algorithms, and each again has major components: for discovery, the programming infrastructure, handshaking protocols, message passing; and similarly for benchmarking, load balancing, and placement.
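As one illustration of the load-balancing component, here is a sketch of least-loaded dispatch that serves higher-priority tasks first and spreads total cost evenly; the priority and tie-breaking rules are assumptions, not the lecture's prescription.

```python
# Least-loaded dispatch (sketch): serve higher-priority tasks first, always
# to the currently least-loaded resource, so the distribution is not skewed.

import heapq

def balance(tasks, resources):
    """tasks: list of (name, cost, priority); resources: list of names.
    Returns {resource: [task names]} with roughly even total cost."""
    load = [(0.0, r) for r in resources]       # (current load, resource)
    heapq.heapify(load)
    plan = {r: [] for r in resources}
    # higher priority first; among equals, bigger tasks first (LPT rule)
    for name, cost, prio in sorted(tasks, key=lambda t: (-t[2], -t[1])):
        cur, r = heapq.heappop(load)           # pick least-loaded resource
        plan[r].append(name)
        heapq.heappush(load, (cur + cost, r))  # update its load
    return plan

tasks = [("t1", 4, 1), ("t2", 2, 2), ("t3", 3, 1), ("t4", 1, 3), ("t5", 2, 2)]
print(balance(tasks, ["fog-1", "fog-2", "edge-1"]))
```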
34:00 - 34:30 So, given the hardware, given the system software, given the middleware in place and all these things, how do I actually implement the whole thing; what should my algorithms be? That is another important aspect. So what we have tried to see, overall, is: in this resource management paradigm, who are the major players, and what are the major considerations when we look at resource management in a cloud-fog-edge paradigm? Since, resource-
34:30 - 35:00 wise, fog and edge are more constrained, they are looked into more; moreover, they need to be appropriately synchronized with the cloud based on resource availability and the like, so that we deliver services in an efficient way. With this we conclude our discussion today; we will continue in the next session. There are a few
35:00 - 35:30 references; these are very nice references, and I encourage you to look at those papers. Thank you.