Lecture - 41 Cloud–Fog Computing - Overview


    Summary

    In this lecture, we explore the concept of cloud-fog computing, distinguishing between the cloud and fog paradigms and explaining why fog computing is increasingly vital. The lecture emphasizes fog computing's role in reducing latency and network congestion by executing data processing closer to the data source, thereby enhancing real-time application performance. A case study in the health domain illustrates the practical advantages of using fog computing in conjunction with cloud computing, demonstrating improved efficiency and reduced costs.

      Highlights

      • Cloud computing delivers infrastructure, platforms, and software as services, giving users access to computing resources without managing them. 💻
      • Fog computing processes data at the network edge, reducing the need to send everything to the cloud. 🌫️
      • Fog computing supports real-time applications by lowering latency and decreasing network congestion. ⚡
      • Interoperability becomes a challenge when fog nodes are heterogeneous (e.g., routers alongside mobile devices), complicating data handling. 🤔
      • Fog computing saves energy and costs by processing closer to the source, reducing cloud dependencies. 💰

      Key Takeaways

      • Cloud computing offers a range of services from infrastructure to software delivery, enabling on-demand access. ☁️
      • Fog computing acts closer to the data source, reducing latency and increasing efficiency. ⏲️
      • The synergy of cloud and fog computing enhances performance, especially in real-time scenarios such as health monitoring. 🏥
      • The fog layer can offload tasks from the cloud, providing faster response times and decreasing data transmission loads. 📉
      • Interoperability and latency are ongoing challenges in cloud and fog computing, but potential benefits make it worthwhile. 🚀

      Overview

      Cloud computing allows users to obtain computing resources like servers and storage as services, reducing the need to manage physical infrastructure. Key models include Software as a Service, Platform as a Service, and Infrastructure as a Service, each offering varying levels of control and flexibility.

        Fog computing, by processing data at or near the source (the 'edge' of the network), reduces the need for data to travel long distances to centralized cloud data centers, leading to lower latency and reduced bandwidth usage. This is particularly advantageous in scenarios requiring real-time decision-making, such as in IoT applications.
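
The local-filtering idea described above can be sketched in a few lines. This is a hypothetical illustration, not code from the lecture: a fog node absorbs in-band sensor readings locally and escalates only out-of-band readings to the cloud. The function name, variable names, and the 18-24 degree band are illustrative assumptions.

```python
# Hypothetical sketch (not from the lecture) of a fog node's forwarding
# decision: readings inside an acceptable band are absorbed locally, and
# only out-of-band readings are escalated to the cloud.

def filter_readings(readings, low, high):
    """Split readings into locally handled vs. cloud-escalated values."""
    local, escalate = [], []
    for value in readings:
        if low <= value <= high:
            local.append(value)      # within range: no need to transmit
        else:
            escalate.append(value)   # out of range: the cloud must see it
    return local, escalate

# Five of a room's temperature readings; only two leave the fog layer.
local, escalate = filter_readings([21.0, 21.5, 25.3, 20.8, 17.9], 18.0, 24.0)
print(escalate)  # → [25.3, 17.9]
```

With many sensors per room, this kind of check at the gateway is what cuts both the upstream traffic volume and the round-trip latency the lecture discusses.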

          A case study within a health context reveals the benefits of fog computing alongside cloud systems. By processing data locally when possible, fog computing can reduce the load on cloud systems, cut down on data transmission expenses, and enable faster access to crucial insights, illustrating the practical utility and strategic advantage of a combined fog and cloud framework.
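
As a rough sketch of the fog layer's role in such a health setup (an assumption-laden illustration, not the lecture's actual framework), a fog node might raise local alerts immediately for abnormal vitals and forward only a downsampled subset of readings to the cloud. The threshold and sampling rate below are made up:

```python
# Rough illustration (all names and thresholds are assumptions) of a fog node
# in a health monitoring setup: alert locally on abnormal vitals, and forward
# only a downsampled subset of the raw stream to the cloud.

def fog_process(heart_rates, alert_above=120, sample_every=4):
    """Return (local alerts, readings forwarded to the cloud)."""
    alerts = [hr for hr in heart_rates if hr > alert_above]  # handled at the edge
    to_cloud = heart_rates[::sample_every]                   # reduced upload volume
    return alerts, to_cloud

stream = [72, 75, 74, 130, 76, 73, 71, 70]
alerts, to_cloud = fog_process(stream)
print(alerts, to_cloud)  # → [130] [72, 76]
```

The design choice mirrors the case study: time-critical responses happen at the fog level, while the cloud still receives enough (sampled or summarized) data for global analysis.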

            Chapters

            • 00:00 - 03:00: Introduction to Cloud-Fog Computing The chapter introduces the course on cloud computing, focusing on the cloud-fog computing paradigm. The discussion highlights the importance of this paradigm, implying that students have already been introduced to the basics of cloud and fog computing. The chapter is likely setting the stage for more detailed explorations in subsequent lectures.
            • 03:00 - 07:00: Cloud Services Models In this chapter, the focus is on understanding cloud services models, particularly addressing issues like performance and latency. The discussion delves into why these aspects are crucial in the paradigm of cloud services, especially in relation to edge sensors and other systems. The chapter promises an exploration of these topics in more detail during the lecture.
            • 07:00 - 17:00: Fog Computing Explained This chapter provides an overview of fog computing, focusing on its application in the health domain. It elaborates on a case study involving a health cloud fog framework, sharing insights and work done in a laboratory setting. The chapter aims to help readers understand the significance and practical implementation of this framework within healthcare.
            • 17:00 - 23:16: Fog Computing in Practice The chapter titled 'Fog Computing in Practice' begins with a discussion on the performance-related issues connected with fog computing, acknowledging its potential. It provides a quick recap by contrasting it with cloud computing, particularly emphasizing the concept of 'anything as a service' model in these paradigms.
            • 23:16 - 35:00: Case Study: Cloud-Fog Applications The chapter discusses the different service models in cloud computing, focusing in particular on Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). IaaS provides the essential infrastructure for computing, PaaS offers a platform to develop and manage applications, and SaaS allows users to access software applications over the internet without concerning themselves with underlying hardware and software management.
            • 35:00 - 49:00: Challenges in Cloud-Fog Computing The chapter titled 'Challenges in Cloud-Fog Computing' explores the complexities and considerations involved in implementing software solutions across cloud and fog platforms. It discusses the desire for seamless integration and functionality of tools and software in these environments. The chapter also delves into Platform as a Service (PaaS), which offers a framework for developing and executing applications without worrying about underlying infrastructure. Additionally, it touches upon other services like Storage as a Service, highlighting the range of tools available for efficient operation in cloud-fog computing contexts.
            • 49:00 - 56:00: Case Study and Experimentation The chapter 'Case Study and Experimentation' delves into the concept of cloud services and their characteristics, focusing on models like database as a service. It explains the 'pay as you go' model where you pay for the service as you use it. Key characteristics of these cloud services include on-demand self-service, which requires minimal management intervention, resource pooling, broad network access, and rapid elasticity. These features make the services ubiquitously available, allowing for efficient and dynamic resource use.
            • 56:00 - 59:00: Conclusion and Future Lectures The chapter concludes by discussing the evolution of distributed computing into the modern cloud infrastructure. It highlights how cloud services, provided by companies like Amazon and Google, have become integral parts of computing environments. The chapter also suggests a transition from traditional distributed computing to these more scalable, resource-efficient solutions, hinting at future lecture topics that will likely explore these advancements in greater depth.
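
The "pay as you go" billing mentioned in the chapter summaries can be illustrated with a toy metered-cost calculation. The rates and resource names here are invented for illustration only:

```python
# Toy illustration of metered "pay as you go" billing: cost scales with use
# rather than a fixed fee. The rates and resource names are invented.

def metered_cost(cpu_hours, gb_stored, rate_cpu=0.05, rate_gb=0.02):
    """Charge per CPU-hour consumed and per GB stored."""
    return round(cpu_hours * rate_cpu + gb_stored * rate_gb, 2)

cost = metered_cost(cpu_hours=100, gb_stored=50)
print(cost)  # → 6.0
```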

            Lecture - 41 Cloud–Fog Computing - Overview Transcription

            • 00:00 - 00:30 Hello, and welcome to the course on cloud computing. Today we will be discussing the cloud-fog computing paradigm and why it is important. You have already gone through the basics and some aspects of cloud and fog computing; in the coming couple of lectures we
            • 00:30 - 01:00 will discuss in a little more detail how overall performance issues such as latency play out in this type of paradigm, why it is important, and why we need this sort of thing. More to say, it is a cloud-fog-edge story: sensors and other systems will come into play, as we will see during the lectures. Primarily we have an
            • 01:00 - 01:30 overview, and we will also look into a case study, primarily a health cloud-fog framework. We will share some of the work we did in our lab for the realization of this cloud-fog paradigm in the health domain. It may be a lab setup, but I believe it will give you a good understanding of how and why this sort of framework matters. We will also
            • 01:30 - 02:00 look into some of the performance-related issues; this particular paradigm has a lot of promise. These are some of the keywords that will come up. To start with, let us have a quick recap. When we talk about cloud computing, it comes as "something as a service", or rather "anything as a service", which comes into
            • 02:00 - 02:30 play. The three predominant service models we look into are software as a service, platform as a service, and infrastructure as a service. If we look at the overall picture, infrastructure as a service gives me the basic infrastructure to do the computing. With software as a service, I do not care about the
            • 02:30 - 03:00 infrastructure and things like that; I directly want the realization of the software, the working of my tools over this computing platform. In between we have platform as a service, which provides me a platform, maybe for development and execution. Other than that, we have lots of other offerings, like storage as a service
            • 03:00 - 03:30 or database as a service. What we are saying is that the services are available ubiquitously, and I can call a service and pay per use, the "pay as you go" model: we pay for what we use. We know that some of the major characteristics are on-demand self-service (things should work with minimal management intervention), resource pooling, broad network access, rapid elasticity, and
            • 03:30 - 04:00 measured service. Though it gradually grew out of distributed computing platforms, if you look at some aspects of the cloud when we work with it, say I hire a cloud from Amazon or Google, or maybe the one we have
            • 04:00 - 04:30 developed at IIT Kharagpur with open source, Meghamala, whatever we look at, when I am working it is some sort of centralized stuff. Not only that, it is somewhere far away, so I need to send my data out there and get the result back if it is something meant for computing. However, in some cases this network delay may be
            • 04:30 - 05:00 much higher. With all these different types of advantages, what we are looking at is that instead of owning infrastructure, I can have a ubiquitous infrastructure which helps me increase mobility and spares me the bother of maintaining the infrastructure; I just hire it, like computing as a service. Additionally, there are issues where these latencies and other things come
            • 05:00 - 05:30 into play in some scenarios, especially mission-critical scenarios. What is seen is that this sort of data traversal over the network, with its delays, may sometimes be more challenging. So another thing that came up after cloud computing, and what we will study, is
            • 05:30 - 06:00 the fog. As the name suggests, the cloud is up in the sky, and the fog is something a little lower, nearer to the surface, nearer to us. It is a model in which data processing and applications are concentrated in devices at the network edge rather than existing almost entirely in the cloud. So we
            • 06:00 - 06:30 want to have some realization of the processing or analysis of the data somewhere in between. In other words, instead of travelling directly to the cloud, if I can have this type of service in between, it may make sense for better performance. The term fog was originally coined by Cisco Systems
            • 06:30 - 07:00 as a new model to ease wireless data transfer to distributed devices in the IoT paradigm. As we have a huge number of sensors and actuators which are collecting data, the data being analyzed and maybe actuated for other purposes, what is seen is that there is a tremendous data flow in the network. With the
            • 07:00 - 07:30 increasing number of sensing devices, Cisco, with their huge network infrastructure, asked whether unused or underused resources in the network can be put to work, especially nodes like routers, which may have a good amount of computing resources not needed just for transfer. So I can
            • 07:30 - 08:00 do it at a much lower level. Look at a scenario like the one we have discussed: I am sitting in this room, and we have three or four such studios on this particular floor. Every studio may be maintaining a temperature with some 10 to 20 sensors. So overall, if there are four studios, there may be 40 sensors sending data to the cloud, the cloud analyzing those and taking a
            • 08:00 - 08:30 call on whether the temperature needs to be increased, and that type of thing. While sending the data, it is aggregated at an intermediate node, an intermediate switch or router, where it is transmitted to the next hop, or, we say, it is aggregated at the gateway. Now, if temperature control is the application, it would have been good if a preliminary check showing everything is okay meant no need to transfer
            • 08:30 - 09:00 the data. If I say I need to maintain the temperature between 20 to 22 degrees centigrade, plus or minus 2 degrees, so effectively 18 to 24, then if the local node for the 10 sensors of this room finds that it is within the range, it will not transmit, or it may just transmit that it is within range. In what way will this help? It will
            • 09:00 - 09:30 help in that I reduce this load to something much smaller. So what we are trying to do is the computing at a much lower level, because some resources are available there. It can be network equipment; later on we can say it can be my mobile device, or something else with excess resources that I can use for computing. Anyway, we still need to transmit to the
            • 09:30 - 10:00 cloud, because the cloud may be doing global analytics where it sees all four rooms in this typical synthetic case; if I am doing things locally, I only see the data of this particular room. As Cisco has a huge network infrastructure, that may be one of their motivations for using it for computing purposes, and it sits much lower than the actual cloud, which is a little far away. This is what we term fog computing. The vision of
            • 10:00 - 10:30 fog computing is to enable applications on, say, millions or billions of connected devices to run directly at the network edge. Instead of pushing to the other end of the network, can I execute at the edge of the network itself, thus reducing the overall traffic load on the rest of the network, which saves network delay and, of course, energy and,
            • 10:30 - 11:00 quote-unquote, overall cost. This is something you have already seen, so I am not going into detail. There are some so-called differences between the cloud and fog computing paradigms: latency is typically high in the case of cloud, whereas fog is lower than that, and similarly for delay jitter.
            • 11:00 - 11:30 Geographically, cloud is some sort of centralized thing, whereas fog is particularly distributed, and so on and so forth. There are different pros and cons to both. If you see this comparison table, it may appear: why cloud, why not only fog? But we do understand that fog is not a replacement for the cloud. It is more of a
            • 11:30 - 12:00 complementary or supportive technology, which helps in better utilization of the infrastructure. The cloud needs to take a global call on what is going on, overall infrastructure-wise, and the cloud can provision different types of services which fog may not be able to. Fog mostly has low resources in comparison to the cloud, is much nearer to the things, and does not have a global view, whereas it may
            • 12:00 - 12:30 provide better security and the like if you are putting those types of resources there. It supports some sort of mobility, the latency is low, and so on and so forth. So it makes sense if I have some sort of infrastructure that goes hand in hand: a paradigm where we
            • 12:30 - 13:00 have cloud, fog, and, rather, we keep the edge also, where my sensors and so on are actually capturing the data and transmitting it to the fog. There is a thin line between where the edge ends and the fog begins; in some references you will find that fog and edge computing are used interchangeably. Nevertheless, what we want to
            • 13:00 - 13:30 see is this: I have sensing devices, which can be IoT and different categories of things, which are sensing data and transmitting it to the fog nodes. What we refer to here as the edge is those types of sensors. Nevertheless, those sensing devices may still have some
            • 13:30 - 14:00 sort of intelligence, or may have some resources for doing some processing there, like typically reducing some trivial noise so that cleaner data is transmitted, and so on and so forth. So the idea is bringing intelligence down from the cloud, close to the ground. We have this cloud up in the,
            • 14:00 - 14:30 so to say, quote-unquote sky, and we bring this intelligence down towards the surface, so that some of the decisions can be taken at a much lower level. The challenges are there: as these are independent things, they do not have a global view. As such, the cloud can still take a call on the data, or rather the processed data, which is being transmitted,
            • 14:30 - 15:00 but in having this intermediate layer, what we achieve is reducing the overall traffic being pumped to the cloud. We will later on see there is another concept, another technology or approach, what we call dew computing; that will come later on. So cellular base stations, network routers,
            • 15:00 - 15:30 and Wi-Fi gateways will be capable of running applications. If there are resources available, it can be cellular base stations or, say, Wi-Fi routers which are installed everywhere, or even network routers, which are highly resourceful and may have excess resources which can be utilized for application purposes. I can also have dedicated devices which
            • 15:30 - 16:00 act as intermediate nodes, the fog nodes. End devices like sensors are able to perform basic data processing: data capturing and maybe some sort of basic filtering, some basic operations, and then transmit, or transmit the raw data. Processing close to the device lowers the response time and enables real-time applications. Once the response time
            • 16:00 - 16:30 is lowered, it helps us in achieving real-time applications, the realization of real-time processing. In doing so it is multi-layer, and there are definitely a lot of challenges that come into play: how the overall management of this data flow will be done, and how much we gain in doing so. If
            • 16:30 - 17:00 you are transmitting everything to the cloud, you know how to handle things; you have that particular way to handle it. Now, if you have a lot of intermediate devices in between, acting as your intermediate layer for computing, then we have challenges like how we can interoperate: the data
            • 17:00 - 17:30 being processed by different devices. Say my fog nodes are not homogeneous: some fog nodes are network routers or Wi-Fi routers, and some fog nodes are maybe mobile devices or something else. Then how do you handle this type of interoperability? There are issues of what the timing relationships will be,
            • 17:30 - 18:00 like some fog nodes sending some data at some rate, and other nodes at another. So these are the challenges, or, in other words, we need to look at the overall performance: whether we gain or not, whether this will be good for all cases, and in which cases we need to deploy this type of scenario. So there are things which need to be looked into.
            • 18:00 - 18:30 If we look at the cloud-fog (plus edge) paradigm, seeing the same thing in a different form: at the ground level we have different sensory devices, primarily sensing. If we look at the medical domain, they may be sensing different types of things, like
            • 18:30 - 19:00 typical body area networks: pulse rate, heart rate, body temperature, pressure, and lots of other things which can be sensed. Those data are being transmitted to intermediate fog nodes, which may take some call, like an initial preprocessing of the data; that can be one
            • 19:00 - 19:30 way of looking at it. And there can be other types of things too: if the basic analysis finds that an immediate alert needs to be sent to the patient, like "you need to see a doctor" or something like that, it can also generate that. So it is not a one-way communication; it is a two-way communication between the fog and the edge, as we see out here
            • 19:30 - 20:00 as well. So the fog nodes in turn accept this data, do the necessary processing at their end, whatever they are meant for, and then transmit this data, which may be processed data, to the cloud. If it is health-related data, it may be sending an overall health metric along with maybe the
            • 20:00 - 20:30 actual data, or it may be sampling the data at some rate; if it is meteorological data, then it does something else. Based on whatever logic is embedded in the fog, it works on that logic and the result is transmitted there. What we gain here is that some of the responses are generated here itself, at the fog level, so the overall travel time of going from these
            • 20:30 - 21:00 sensors to the cloud and coming back with the result is reduced. I have a better response time, and it may help several real-time applications. And the fog, which is doing partial computation, may be transmitting not the whole data but rather a subset of the data, or processed
            • 21:00 - 21:30 data, to the cloud. In other words, it also reduces, from this end, the volume of data going to the cloud. This also helps in achieving better response times, lower energy cost, and bringing down the overall cost of the thing. Along with that, the fog may be getting data from some other sources
            • 21:30 - 22:00 as well: if it is data related to, say, agriculture, the fog may be getting some of the meteorological data nearby, or other resources, or map data that may be in the cloud. So there is definitely an advantage to having these layers in between, which help with partial computation at the network edge. Definitely, the cost of all these
            • 22:00 - 22:30 things is that you need to have these fog devices, or devices which can work as fog nodes, during this overall transmission. If we look at cloud issues or limitations, the major one is latency; latency is pretty high at times. As we discussed in our previous lectures,
            • 22:30 - 23:00 the cloud is not meant for high-performance computing per se. HPC is HPC; the cloud should not be mistaken for an HPC type of thing. It may provide things like a high-performance computing paradigm, but per se it is not an HPC platform and is not looked at as one. A large volume of data is being generated; in our case some of the data sets are restricted,
            • 23:00 - 23:30 but otherwise a huge volume of data is generated, with a larger bandwidth requirement. The cloud is not designed for the volume, variety, and velocity, the three Vs of the big data concept, generated by the several IoT devices, which again have different interoperability issues. As we were discussing a few minutes back,
            • 23:30 - 24:00 there may be interoperability challenges when we have different fog devices, and those challenges may be there when we are transmitting from the IoT. Rather, I can have fog devices with an agreed-upon protocol for what they need to transfer to the cloud; in other words, it may help in achieving interoperability when the data sets are coming from different fog
            • 24:00 - 24:30 sources. If you look at the IoT, or in this case the edge devices, one of the major challenges is processing: you do not have that much processing power at the edge. There are challenges of storage: usually these devices are low on storage. If you are using, say, the pulse oximeter we are all used to, it is not meant to store things. They are low on
            • 24:30 - 25:00 storage, and in several cases they also have power requirements, especially when you have something mobile. Fog's much lower latency permits use in real-time applications, with less network congestion, reduced cost of execution at the cloud, as we discussed, and more data location awareness. That is one part:
            • 25:00 - 25:30 if we have one cloud and we have distributed the sensing of room temperature across the IIT Kharagpur campus, then the fog device knows the coordinates of where it is located. So inherently I can have spatio-temporal data, space and time along with the other attribute data. It is better aware of things; otherwise you always have to transmit the GPS data, which is also okay, but that increases the data load and
            • 25:30 - 26:00 increases the turnaround time. Better handling of the colossal data generated by the sensors, that is also a thing. So we will just look into a case study quickly, which helps us understand why studying these two things, the overall paradigm, makes sense. Here what we see is three sorts of layers: the cloud at the top, what we call level zero; there is
            • 26:00 - 26:30 no hard and fast rule about how many levels there are; this is for our particular case study and discussion. Then we have an ISP in between, and then the area gateway: these are levels 0, 1, 2. Then we may have some mobile devices which collect the data from different sensors, at level 3, and then we have the level 4 devices. These fellows may act as a fog network, which takes the call. So
            • 26:30 - 27:00 in other words, the fog may not be a single device; it may be different groups of things, or, in other words, we may have fog networks which help us achieve this. Information flows both ways: usually, in the northbound direction more data goes up to the cloud, and the analyzed data coming down is less, but nevertheless information flows on both sides. Now, this is what we did in a very
            • 27:00 - 27:30 synthetic environment in our lab, with some typical configuration. We basically used simulation platforms, primarily CloudSim and iFogSim. CloudSim is from the University of Melbourne; iFogSim was jointly developed by a team from IIT Kharagpur and the University of Melbourne, and now they are taking care of that tool. So
            • 27:30 - 28:00 if we have a typical infrastructure, with these different types of devices, each with some latency, these are some of the things we tried to see, with some reference material, to find out how the overall performance matters. If we look at this particular application's components and flow: there is an EEG signal, a
            • 28:00 - 28:30 client module which captures it and passes it on, a data filtering module, a data processing module, and an event handling module for different events. There can be a confirmative module deciding whether an event has occurred and whether an alert needs to be generated; that goes back to the client module and then to the display, informing the user or the patient.
            • 28:30 - 29:00 If this is my overall flow, then we have different modules to execute, and we have a couple of layers where we can execute them: this one I can execute in the cloud, this one in the fog layer, and so on, and we try to see how it works out. So we did some experimentation, again with the caution that this is a very synthetic setup; it should not be generalized that these results will come up everywhere, but we tried to see
            • 29:00 - 29:30 things do vary when we put layers in between. This is the placement obtained for the different application modules under the fog-based and cloud-based architectures: in both placements the client module sits on the mobile device, whereas in the fog-based placement the data filtering, data processing, and event
            • 29:30 - 30:00 handling modules sit on the area gateway, and the confirmatory module goes to the cloud, because the cloud has the global view needed to confirm whether an event has occurred, such as a medical event or, if you work on a traffic-type application, a traffic event. In the simulation, as I said, we used the iFogSim simulator; these are the different simulation platform settings we used, and the hierarchy is the same: cloud, ISP, area gateway, mobile devices, and finally the sensors.
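The two placements just described can be captured as a simple lookup table. The tier names below mirror the lecture's hierarchy, but the tables themselves are an illustrative reconstruction, not output from iFogSim.

```python
# Illustrative module-placement tables for the two architectures
# discussed above (a reconstruction, not iFogSim output).

FOG_PLACEMENT = {
    "client": "mobile",
    "data_filtering": "area_gateway",
    "data_processing": "area_gateway",
    "event_handling": "area_gateway",
    "confirmatory": "cloud",   # needs the global view
}

CLOUD_PLACEMENT = {
    "client": "mobile",
    "data_filtering": "cloud",
    "data_processing": "cloud",
    "event_handling": "cloud",
    "confirmatory": "cloud",
}

def modules_on(placement, tier):
    """List the modules a given tier has to host under a placement."""
    return sorted(m for m, t in placement.items() if t == tier)

print(modules_on(FOG_PLACEMENT, "area_gateway"))
print(modules_on(CLOUD_PLACEMENT, "cloud"))
```

Comparing the two tables makes the network-usage argument that follows concrete: under the cloud placement, every module except the client sits above the ISP link, so every window of sensor data must cross it.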
            • 30:00 - 30:30 Now if you look at the performance, the network usage is very low in the fog-based architecture, because only a few cases, the positive or candidate cases on which the confirmatory module must decide, need to be transmitted to the cloud. So typically the fog
            • 30:30 - 31:00 network load is low, whereas in the cloud-based architecture the usage is high, since all the modules are on the cloud: whether it is data filtering or event handling, everything goes to the cloud, and the confirmatory module is there in any case. Again I need to mention that this is a very lab-style experiment with a
            • 31:00 - 31:30 small number of nodes, and so on, so in different scenarios things may differ, but nevertheless the overall story should be similar. Now, about the cost of execution in the cloud: when you use your own infrastructure, you are using surplus capacity, so there is practically no extra cost involved; but when we go to the cloud, we have the problem of paying for the whole
            • 31:30 - 32:00 thing, so in this case the execution cost is much higher in the cloud. Similarly, if we look at latency, in the case of fog it is more or less stable, whereas, across the different configurations I showed you, the latency increases in the cloud case. Similarly, energy consumption shows different patterns as well.
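The latency argument can be made concrete with a toy hop-delay model: under fog placement most windows are handled at the area gateway, while under cloud placement every window crosses the ISP link. All the per-hop delays below are made-up illustrative numbers, not measurements from the experiment.

```python
# Toy end-to-end latency comparison for fog vs. cloud placement.
# Per-hop delays (ms) are invented for illustration only.

HOPS_MS = {
    ("sensor", "mobile"): 2,
    ("mobile", "area_gateway"): 5,
    ("area_gateway", "isp"): 15,
    ("isp", "cloud"): 40,
}

def path_latency(path):
    """Sum the per-hop delays along a path through the hierarchy."""
    return sum(HOPS_MS[(a, b)] for a, b in zip(path, path[1:]))

# Fog placement: processing for most windows stops at the area gateway.
fog_ms = path_latency(["sensor", "mobile", "area_gateway"])
# Cloud placement: every window travels all the way up to the cloud.
cloud_ms = path_latency(["sensor", "mobile", "area_gateway", "isp", "cloud"])

print(fog_ms, cloud_ms)  # the fog path is much shorter
```

Even with invented numbers, the structural point survives: the cloud path always pays the wide-area hop, so its latency both starts higher and grows with congestion, matching the trend reported in the lecture.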
            • 32:00 - 32:30 Different patterns come into play when we look at the different categories of energy: DC energy, that is, energy consumed in the data center, meaning the cloud, then mobile energy and edge energy; it varies across these aspects. We also tried a prototype, to see in the lab how things work using some medical devices, as you can see,
            • 32:30 - 33:00 such as a hand band, from which the data is transmitted. We used Raspberry Pi boards as fog devices, and AWS, as well as an in-house OpenStack deployment, for the cloud. With the in-house setup the network delay is much less, because it stays within our IIT Kharagpur network, so you do not perceive the delay; but with an external provider like AWS, Google Cloud, and the like, you do. Incidentally,
            • 33:00 - 33:30 Amazon had given some free credits to the students, so we utilized those. We used different data sets and customized the analysis formulas. What we want to do is send an alert, in this case when the patient should go to a doctor or call for support. Now, to raise such an alert, we have to build some medical
            • 33:30 - 34:00 domain analysis into the system, parts of which should run on the fog devices themselves, provided the fog device has the resources to run those applications, while some parts should run at the cloud end. We tried different configurations with respect to resource allocation, customizing the different physical devices, and so forth. That is what we tried to look into in this first lecture.
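The "provided the fog device has the resources" condition above amounts to a simple admission check: run an analysis module on the fog device only if its demand fits the device's capacity, otherwise fall back to the cloud. The capacity and demand figures below are hypothetical, chosen merely to suggest a Raspberry Pi class device.

```python
# Sketch of the resource check described above. Capacity and demand
# numbers are hypothetical illustrations, not measured values.

FOG_CAPACITY = {"ram_mb": 1024, "mips": 500}  # e.g., a Raspberry Pi class device

def place_module(demand, capacity=FOG_CAPACITY):
    """Return 'fog' if the module's demand fits the fog device, else 'cloud'."""
    fits = all(demand.get(resource, 0) <= limit
               for resource, limit in capacity.items())
    return "fog" if fits else "cloud"

print(place_module({"ram_mb": 256, "mips": 200}))    # lightweight filtering
print(place_module({"ram_mb": 4096, "mips": 2000}))  # heavy analytics
```

In practice such a check would feed the kind of placement experiments the lecture describes: varying resource allocation per device and observing how latency, cost, and energy respond.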
            • 34:00 - 34:30 We will have three or four lectures in this series. What we want to say is that, by having this fog layer, whether as a cloud-fog paradigm or a cloud-fog-edge paradigm, what we try to achieve is better efficiency, or better performance in terms of delivering applications. In this case we ran the
            • 34:30 - 35:00 experiment on a health application, and we could see that some of the performance metrics give better results when we have this sort of combined paradigm. There are a few references, and with this let us end our discussion for today; we will continue this discussion in our next lecture. Thank you.