AI Regulation Challenges and Exercises

KIT+ 2 Runde AI Regulation

Estimated read time: 1:20


    Summary

    The KIT+ 2 Runde AI Regulation video by appliedAI Initiative covers the complex landscape of AI regulations, focusing on the AI Act and its implications for companies. The discussion highlights the importance of understanding AI systems, general-purpose AI, and the classifications under the Act, such as prohibited, high-risk, and low-risk systems. Participants engage in exercises to classify different AI systems according to these regulations. The video emphasizes the need for companies to prepare for compliance by the upcoming deadlines, such as February 2025 for prohibited systems. Key insights include understanding the roles of providers and deployers, transparency obligations, and risk management.

      Highlights

      • The AI Act introduces a risk-based approach, categorizing systems as prohibited, high-risk, limited-risk, or low-risk. ⚖️
      • Providers and deployers must navigate complex compliance frameworks depending on their AI system's classification. 📜
      • Exercises help participants determine how their AI systems are classified under the AI Act. 🏋️‍♀️
      • Transparency obligations are crucial for AI systems interacting with humans. 🤖
      • The video offers guidance on setting up organizational structures for compliance. 🏢

      Key Takeaways

      • Understanding AI system classification is crucial for compliance. 🧩
      • Prohibited systems need to be shut down by February 2025. 🚫
      • High-risk classifications require substantial compliance efforts. 🛡️
      • Transparency and disclosure are key in limited risk scenarios. 🕵️
      • AI providers and deployers have distinct obligations. 🤝

      Overview

      The appliedAI Initiative provides an engaging walkthrough of the AI Act's challenges, urging companies to understand how their systems fit into regulatory frameworks. The video underscores the importance of the Act for ensuring AI applications are safe, focusing on transforming compliance from a burden into a competitive advantage.

        With exercises designed to mimic real-world scenarios, participants are tasked with classifying AI systems to better grasp their responsibilities under the AI Act. This interactive approach educates on the potential pitfalls and obligations for AI providers and deployers, ensuring a comprehensive understanding of risk management and compliance strategies.

          The session highlights the necessity for companies to act swiftly in aligning with these regulations, especially with deadlines looming. It emphasizes transparency as a core principle, outlining best practices for ensuring AI systems meet standards without stifling innovation.

            KIT+ 2 Runde AI Regulation Transcription

            • 00:00 - 00:30 looks like everybody's fine so I will record the meeting, perfect. So yep, maybe many of you already know what appliedAI is, but for those of you who do not, very quickly: appliedAI is the largest initiative for the application of artificial intelligence, and we're based in Munich but also in Heilbronn, so we have our office in the Ipai spaces. We have basically been working, existing, since
            • 00:30 - 01:00 2017 as a brand of UnternehmerTUM. Maybe you already know what we do, but basically our main vision is to support companies holistically in their transformation, in their journey. That means we support companies to develop their AI use cases, and code and develop use cases from the planning phase on to production; also we support companies setting up their strategy, so the team structure and processes, and also we support companies
            • 01:00 - 01:30 on upskilling or AI literacy in general. And appliedAI is formed by a network of industry partners, as you can see here, so we have as partners for example companies like BMW, Siemens, Infineon, etc. So just a couple of words about the Center of Excellence where I'm working: as you know, we now have this new regulation, the AI Act, and with it of course a lot of activities that we have to do, and considerations and challenges and so on. So the goal of our Center
            • 01:30 - 02:00 of Excellence is actually to simplify this regulation and to make it hopefully as easy as possible for companies to implement, so that they can use it as a competitive advantage, developing high-quality AI, instead of as a burden. And we're now in the middle of that process, trying to simplify and translate it for data scientists and ML engineers, how they can actually operationalize or implement it in a simple way. And how we do this: we support companies at different levels. As I
            • 02:00 - 02:30 already mentioned, at the use case level we support companies to understand the risk class and their obligations, that means translating this into technical requirements for a use case so they can develop it successfully. And we're not only supporting the planning, as I said before, but also the development, so we also code or program AI use cases at appliedAI. And also at the org level: as you probably have heard, the AI Act has obligations not only at the use case level but also at the
            • 02:30 - 03:00 organizational level, and all these obligations are also now embedded in the offering of our AI operating model; basically we support companies to set up their roles, responsibilities and team structures in a compliant way. And AI literacy is the next hot topic: we support companies by upskilling them, how to do risk classification, how to reach your obligations from a technical perspective. We have some risk classification trainings and also some risk management trainings for that. Perfect, so maybe before moving into
            • 03:00 - 03:30 more detail, I would like to now switch the focus to the audience. I prepared a very short Mentimeter survey, so if you can, please go to menti.com and just input this code. I would like to understand a little bit how familiar you are with the AI Act in your company. I will just put it in the chat, menti.com, and the code is going to be this one, so the code is, sorry for that,
            • 03:30 - 04:00 the code is 5543 7808, 5543 7808. So please let me know if you can access it, normally you should be able to access the survey. Yeah, I can see people already voting, and the first question is, on a scale from one to four, how familiar are you with the AI Act:
            • 04:00 - 04:30 level one, you are not familiar at all; level two, you have read some blog posts; level three, you have read the whole AI Act; and number four, you are an expert on the European AI Act. I can see already 12 votes; most of you just read a couple of blog posts, but still no one considers themselves an expert, and quite some people are also not familiar, so they are starting almost from zero.
            • 04:30 - 05:00 Okay, but that's good to know for me, so then I'll try to make my explanations a bit easier and I will try to take more time with everybody. Okay, I think the pattern stays after 30 people voted, that's really good. And the second question is, based on the description of today's workshop, what are your expectations, what are your main challenges about this topic of risk classification? I'll give you one minute so you can
            • 05:00 - 05:30 write what you expect for today and what your main challenges are
            • 05:30 - 06:00 okay I can see already quite some some items so many of them actually are covered today so information on how to classify that's exactly the goal of today we have developed a tree that you have to follow basically depending on your use case and based on the answers you will find what risk class you have
            • 06:00 - 06:30 what role you have and, as a consequence, your obligations as well. Somebody mentioned better understanding, so actually the tree supports that. Also learn about obligations: yeah, depending on your risk class you will have different obligations, that's also covered today. Risk classification, classification, classification, yeah, all of these are covered. Things that are not covered today: of course GDPR, that's a different regulation, so today we're focused mostly on the AI Act, that's basically
            • 06:30 - 07:00 another training, the GDPR training, it's not for today. Somebody mentioned risk handling, so I guess that means more about risk management. Today's goal is basically to understand your risk class and your obligations, and if you're high risk there is one obligation that is named exactly that, risk management, so what you have to do to manage your risks. But that's not today's workshop's goal, that's more a technical workshop that we also offer at appliedAI, but basically not in scope
            • 07:00 - 07:30 for today at least that's the only thing and better understanding okay do what is right but not overdoing it I like it I like that very much very great perfect so with this maybe I can just go to the agenda um for today so at the beginning I'm going to explain um briefly what is the risk-based approach because the ACT is a horizontal regulation that is proportionate to the risk of the use case so that means the more risk your use case has to health
            • 07:30 - 08:00 safety and fundamental human rights, the more obligations you will have. That's basically leading to the second topic, how these risk classes are defined. And somebody mentioned you would like more practical learning on how to classify your risk class, that's why we also have a risk classification exercise where we're going to show with different examples how to classify the risk classes for different use cases. After that we're going to have an overview of the implications or obligations for
            • 08:00 - 08:30 each risk class, and also how you have to be careful, because if you make some small modifications in your use case, maybe you are in a safe class and you might easily become one of the, let's say, more complex classes with more obligations, so you have to be very careful in different situations. Any question before moving forward or starting with the presentation?
            • 08:30 - 09:00 all right, then we can go to the first chapter. Of course this principle of proportionality is nothing new; even in different vertical regulations like the MDR, the Medical Device Regulation, or many others, you can find this. For example, a thermometer is a device that cannot harm anybody, so basically you have a low-risk device here. Also the hearing aid has a medium risk, unless of course you can make it so loud that people can get very
            • 09:00 - 09:30 annoyed or harmed, so you might have some filters or limiters so the signal doesn't go too high, but basically it's considered medium risk. Compared to a pacemaker, for example, which if it doesn't work could actually kill a person, which is basically a high-risk device. And also in the MDR you have some devices like metal-on-metal hip implants which are prohibited, for example, so nobody should be able to implant those devices anymore.
            • 09:30 - 10:00 And the AI Act is following something very similar to this. The way appliedAI sees this is through the compliance journey. Now that you're in KI Transfer Plus, I believe you are starting to develop or continuing to develop some AI use cases, and you want to understand if the AI Act actually is important for you, and you have to start with the first step on the compliance journey, we name it applicability. So we are here: you need to identify for your use case what the risk class is and also
            • 10:00 - 10:30 what your role is, because based on these two variables you will know your obligations. It could be, for example, that your risk class is low risk and that means you have no obligations at all, but it could also be that you don't fall into the AI Act at all, and if you follow this risk classification method that we will explain to you today, that way you also have no obligations at all. So it's very important to understand if the AI Act applies to you; we name this step applicability
            • 10:30 - 11:00 of course after you know your risk class you need to identify your obligations you have to meet them so you have to set up all these organizational level structures use case level processes policies and so on to meet your obligations later you have to demonstrate that you reach the levels necessary to have to be compliant and after that you can deploy to production and you need to make sure to maintain compliance and over time by monitoring your system your performance and make sure everything stay compliant but for today so we have trainings
            • 11:00 - 11:30 actually across the whole journey, but the focus of today is basically on the first step, applicability. So the second question is how the risk classes are defined. I believe many of you have seen similar diagrams around, but basically there are four risk classes. The first risk class is named unacceptable risk; they are also named prohibited. There are some types of use cases, like social scoring mechanisms or exploiting vulnerable groups, like kids for example playing
            • 11:30 - 12:00 video games, to make them buy or follow some actions that are not okay with fundamental human rights; all these practices are basically prohibited. And that means that by February 2025, so at the beginning of next year, no company is allowed to deploy these systems anymore, and if you have them deployed today you have to shut them down.
            • 12:00 - 12:30 That's the first risk class, unacceptable. The second risk class is named high risk, and that means that if your use case belongs to some categories that are highlighted in Annex I and Annex III of the AI Act, for example such as recruiting, education, critical infrastructure and so on, and I will show you more details later, basically your use case belongs to a high-risk category and you will have a large amount of obligations, because these cases can basically harm health,
            • 12:30 - 13:00 safety and fundamental human rights of people. The next case is named limited risk or transparency risk: basically, when you have an AI system interacting with a human, you need to have some transparency obligations to make sure that the human knows it's not another person but actually an AI system it's interacting with, for example chatbots, dynamic advertisement and other things in that direction.
            • 13:00 - 13:30 Of course we put the box here as not mutually exclusive, because you could have a system that is limited risk but at the same time it can also be high risk, so that's very important to know: it's not one or the other, you can be both of these classes at the same time. And the last one: in case you are not prohibited, you are not high risk and you are not limited risk, then by default you will be considered low risk, and that means you have no obligations at all, you basically just need to
            • 13:30 - 14:00 follow your voluntary code of conduct. For example, different companies are starting to write it down; it's just MLOps best practices, but it's voluntary, so you are not obligated to follow any best practice, it's just a recommendation from the European Union to follow a voluntary code of conduct. And the shape of the pyramid: it basically has the shape of a pyramid because the EU expects not too many use cases to be prohibited or high risk, but they expect most of the use cases to be
            • 14:00 - 14:30 actually low risk. Any question, or does it make sense? Any question about this diagram? All right, then there is another dimension you have to be aware of, which is when you use general-purpose AI models. In the case of general-purpose AI models there are two scenarios. You can have general-purpose AI models with systemic risk, that means any model
            • 14:30 - 15:00 that is trained using more than 10 to the power of 25 FLOPs, or floating-point operations. These models are considered very powerful and you will have a large amount of obligations, for example risk management, red teaming, cybersecurity, energy consumption, copyright obligations, transparency about training data and so on. Examples could be the models that GPT-4 is using; they are considered models with systemic risk.
            • 15:00 - 15:30 But this is only relevant for very, very big companies; I don't think in KI Transfer Plus you will have a GPAI model with systemic risk. The second case is when you are below the threshold: you are considered a GPAI model without systemic risk, and there are two cases, case one is commercial and case two is open source. When you are commercial you have to give information to the AI
            • 15:30 - 16:00 Office and downstream providers about your technical documentation, and in case you're open source you only need to provide copyright obligations and transparency about training data. The purpose of the EU is: if you have an open-source model you have only these two obligations, but if you have a proprietary model you have to add these two other obligations, and if you have systemic risk you have to add the whole set of obligations in these paragraphs. In that way they try
            • 16:00 - 16:30 not to harm open-source models too much, because their obligations are also very, very small. Any question about the GPAI model pyramid? All right, so now that we know the risk class, something very important to know is also your role. In the AI Act there are different roles, but the two most important ones are the roles of the provider and the deployer. The
            • 16:30 - 17:00 provider means any natural or legal person that develops an AI system or general-purpose AI model and places it on the market or puts it into service under their own name or trademark. So let's say appliedAI is now developing an AI model to forecast the weather: we are developing the model, so we're the provider, and if we also deploy it in production ourselves we will be the
            • 17:00 - 17:30 deployer, so we will be both. But if appliedAI develops the model and we sell it to, say, Infineon, and Infineon only deploys it, then appliedAI is the provider and Infineon is the deployer. This is basically what it means: a deployer is any natural or legal person using an AI system under its authority. In this case Infineon is not training the model, they just bought it from us and are using it under their own authority.
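As a minimal sketch of the provider/deployer distinction just described (not from the talk), the helper below derives the roles from two simplified yes/no questions; the flag names are illustrative stand-ins for the AI Act's full legal definitions.

```python
# Illustrative sketch: derive AI Act roles from two simplified questions.
def roles(develops_and_places_on_market: bool, uses_under_own_authority: bool) -> list[str]:
    result = []
    if develops_and_places_on_market:
        result.append("provider")   # develops and places it on the market under its own name
    if uses_under_own_authority:
        result.append("deployer")   # uses the system under its own authority
    return result

# appliedAI builds a weather model and also runs it in production itself:
print(roles(True, True))   # ['provider', 'deployer']
# appliedAI sells the model; the buyer only operates it:
print(roles(False, True))  # ['deployer']
```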
            • 17:30 - 18:00 So hopefully this concept is clear for you. And a typical question we get is: okay, we saw this pyramid, but does this pyramid actually make sense, do they really expect not too many high-risk use cases? We had this question as well and we tried to make a study together with TU Kaiserslautern. From our perspective, the numbers that the EU believes, they believe only five to 15% of use cases might be high risk; we got a bit of a different
            • 18:00 - 18:30 insight: for us, around 33 up to 50% of the use cases in our study, sorry, were high risk, and the University of Kaiserslautern believes 31% of use cases are high risk. So somehow we are challenging the numbers of the EU a bit; we believe there might be more than 15%, but I also believe this 50 is our upper bound, so it might be
            • 18:30 - 19:00 approximately, based on our estimates, 30%. Of course our studies were focused on a particular vertical and some only with startups, so in big companies these numbers might also look a bit different, so we need to take these results with a grain of salt. But we believe 15% is still a very small number, so it's likely going to be much higher than 15%. Okay, so I think that's all for the theory for now, just three lessons for you: number one, there are
            • 19:00 - 19:30 four risk classes, prohibited, high risk, limited and low risk, for AI systems. In the case of general-purpose AI models you can have systemic risk, that means if you have GPT-4 you will have a lot of obligations there, but you can also be without systemic risk, and here you have two cases: proprietary models have a couple of obligations, and open-source models have only one small obligation. So that's the whole constellation of
            • 19:30 - 20:00 use cases. And something very important to know: you could be a general-purpose AI model but at the same time also belong to one of the four risk classes. In that case all your obligations add up, so your final obligations are the sum of both sets of obligations together. Okay, so any question about the risk classes, general-purpose AI models or roles according to the AI Act?
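To illustrate the point that obligations add up, here is a small sketch (not from the talk); the obligation lists are abbreviated examples rather than the legal text, and the category names are assumptions for illustration.

```python
# Illustrative sketch: final obligations are the union of GPAI obligations
# and risk-class obligations. Lists are abbreviated examples, not legal text.
GPAI_OBLIGATIONS = {
    "open_source": {"copyright policy", "training-data summary"},
    "proprietary": {"copyright policy", "training-data summary",
                    "technical documentation for the AI Office and downstream providers"},
}
RISK_CLASS_OBLIGATIONS = {
    "low": set(),
    "limited": {"transparency / disclosure to users"},
    "high": {"risk management", "data governance", "human oversight", "conformity assessment"},
}

def total_obligations(gpai_kind, risk_class):
    gpai = GPAI_OBLIGATIONS.get(gpai_kind, set()) if gpai_kind else set()
    return gpai | RISK_CLASS_OBLIGATIONS[risk_class]

# A proprietary GPAI model that is also a limited-risk (chatbot-style) system:
print(sorted(total_obligations("proprietary", "limited")))
```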
            • 20:00 - 20:30 Okay, so in case we don't have questions I think we can move directly to the Miro board. At appliedAI, what we did is we basically tried to simplify the AI Act into trees, so it's easier to understand what the risk class for your use case is, depending on different characteristics of your use case. So maybe just to show you an
            • 20:30 - 21:00 overview of the method: for us there are six steps in total to understand your risk class, role and obligations. Step number one is the precondition, so we want to understand if your system is actually an AI system based on the AI Act definition, or is it a general-purpose AI system, or is it not an AI system at all, for example rule-based systems, in which case you have no obligations at all, so you can stop the process and basically you have nothing
            • 21:00 - 21:30 to do with the AI act you're not an AI system the second step is is a system AI or general purpose AI in the scope of the AI act so for example there is a very important article on the AI act that basically excludes use cases from it so an example is if you are just developing an use case for personal purpose let's say I'm Alex I'm developing a use case even though it's a really bad use case that could discriminate people but I only use it myself at home I develop it at home I
            • 21:30 - 22:00 use it at home, then I have no obligations under the AI Act because I use it for personal use. Another example is when you use it only as an exploration inside your company: let's say I want to make a system, even though it could discriminate, let's say evaluating candidates and so on, but I never use it with people in real-world conditions, I just use it to explore whether it's possible or not in my company; then I also don't fall into the AI Act. That means the only way to fall into the AI Act is
            • 22:00 - 22:30 that the system needs to be deployed into production and real users have to use it which also makes makes a lot of sense and there are many other cases that I will explain um during the workshop the next one is also is a system prohibited so as I mentioned before in the pyramid there is a set of conditions that we will explain in the workshop um about eight conditions where your system can be prohibited like subliminal techniques social scoring and many others so if you fall into those
            • 22:30 - 23:00 cases then you are prohibited and as a result you have to stop your system from the market and shut it down as soon as possible otherwise you will have a really big big fine in your company a step four is a system high risk so depending if you belong to a set of categories in Annex one and Annex three your AI system will follow into the category of high risk and you will have quite a lot of obligations uh to comply but something important is there
            • 23:00 - 23:30 are also a lot of exceptions in in this step four that's why the Tre is actually very complex and we have to navigate it together from case to case so this is just an example of the tree trees depending on the vertical where you fall you will have to follow different workflows to understand if you fall into the exceptions or not so step four is actually the the most complex of all the steps on the on the AI act then the next step is step five is in in case you're interacting with humans do transparency obligations apply
            • 23:30 - 24:00 We also have a tree for that one. And the last step is to understand your role: if you are a provider or a deployer you will have different obligations, that's why it's very important to understand your role in the AI Act in detail. Any question about the overview for today?
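As a rough sketch of the six-step applicability workflow outlined above (not from the talk), the function below chains the checks in order; each flag is a placeholder assumption standing in for the corresponding decision tree on the board.

```python
# Illustrative sketch of the six-step workflow; every flag stands in for a
# whole decision tree from the workshop material.
def classify_use_case(uc: dict) -> dict:
    # Step 1: is it an AI system / GPAI system at all?
    if not (uc.get("is_ai_system") or uc.get("is_gpai_system")):
        return {"result": "not an AI system, no obligations"}
    # Step 2: is it in the scope of the AI Act?
    if not uc.get("in_scope", True):
        return {"result": "out of scope, no obligations"}
    # Step 3: prohibited practices must be off the market by February 2025.
    if uc.get("prohibited_practice"):
        return {"result": "prohibited, shut down by February 2025"}
    # Steps 4-6: risk class, transparency obligations, and role.
    return {
        "risk_class": "high" if uc.get("annex_i_or_iii_category") else "low",
        "transparency_obligations": bool(uc.get("interacts_with_humans")),
        "role": uc.get("role", "provider"),
    }

print(classify_use_case({"is_ai_system": True, "interacts_with_humans": True}))
```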
            • 24:00 - 24:30 All right, so if there are no questions I think we can move directly now to the first step. I will share with you this link to the Miro board so you can access it; anybody with the link should be able to edit the board. I'll just put it here in the chat, please just access the Miro board and we will continue once you are there
            • 24:30 - 25:00 I can see 20 people joining already, so let's wait a couple of seconds for the rest. Amazing, we have now 32 people, so I will just go through the instructions. Basically, in this Miro
            • 25:00 - 25:30 board, what we did is we have all these six steps, but broken down into trees. You can see the first frame is step one, is this an AI system or general-purpose AI system; step two, are you in the scope of the AI Act; step three, are you prohibited; step four, are you high risk; step five, do transparency obligations apply; and step six, sorry, what is your role according to the AI Act. We have also prepared an exercise for each of the steps, and we're going to
            • 25:30 - 26:00 start with the first step, step number one. Maybe before that, the instructions, let's just read them together. We have different boxes in each of the steps: the red boxes are questions that the AI Act is basically asking you, and each question is accompanied by an ID, and you can use these IDs to find further information and context about your questions in the document that is in every frame. So you see this guidance document here, you can find more
            • 26:00 - 26:30 information if you scroll through the document, in case you need it. The green boxes correspond to choices and the purple boxes are verdicts. So for example, here you can see this is a question, the red box; the green boxes are options, so option one, option two, option three; and the purple boxes are the verdicts: you are not an AI system, you are an AI system, you are a general-purpose AI system. So let's get started, step number
            • 26:30 - 27:00 one, is the system an AI system or a general-purpose AI system? The thing you have to ask is basically how the system is designed. If your system is designed to use rules written solely by humans, for example, I want to see if it's warm or hot and I just put a threshold, if the temperature is above, say, 30 degrees it's hot, that's a system developed by humans only with rules, so I didn't have to train anything; in that
            • 27:00 - 27:30 case I'm not an AI system. On the other hand, if the system is a machine-based system designed to operate with different levels of autonomy, with adaptiveness, it can infer outputs based on inputs, you are using different ML approaches that learn from data or logic- and knowledge-based approaches, and you generate outputs, then basically you are an AI system. For
            • 27:30 - 28:00 example, a logistic regression, a linear regression, they are AI systems; also a decision tree, a neural network, all of these are examples of AI systems. And the last one: the system is based on a general-purpose AI model and has the capability to serve a variety of purposes, either via direct use or via integration in other systems. Here the important phrase is a general-purpose
            • 28:00 - 28:30 AI purpose AI model that serves a variety of purposes so for example if when you use a linear regression for for example or your goal is just to predict the next set a point based on the past but for a general purpose AI model you have different purposes right so you could use chat GPT for example to ask for the average weather on on February in Germany or the average weather in December in Germany so or also you can ask a different question like what is
            • 28:30 - 29:00 the uh about or what is the definition of a high risk class so that means you can use that those models for different purposes basically in that case it's a general purpose and AI system so now if you follow me I go just a bit down we have three exercises to do so the first one is just an example use case number one we have an LM power chatbot regist as a medical device under the MDR
            • 29:00 - 29:30 the Medical Device Regulation, and this system provides diagnostic support to patients with physical disabilities. So it provides support to patients with disabilities and is an LLM-powered chatbot. Do you think this system is not an AI system, is it an AI system, or is it a general-purpose AI system? This is a question I would like you to answer, and before that I'm also going to present use
            • 29:30 - 30:00 case two and use case three already. The second use case uses a k-nearest neighbors algorithm to detect anomalous temperatures to monitor electrical components in a factory. KNN, in case you don't know it, is a supervised machine learning technique that is used for classification or regression problems depending on how you configure it. The question is the same: is this an AI system, not an AI system, or a general-purpose
            • 30:00 - 30:30 AI system? And use case three: a camera scans a person's body temperature in real time and transmits this information to a database. Same question: is it not an AI system, is it an AI system, or is it a general-purpose AI system? For this exercise we're going to give you a couple of minutes, maybe two minutes, so that you can vote, only one option per use case please, and then we're going to analyze the results and
            • 30:30 - 31:00 we're going to hand over to one of you to just tell us your opinion or what you believe is the right answer. So see you in one or two minutes; so you can vote, please just press join voting and then you can select the right answer for each of the use cases.
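For context on use case two, here is a minimal sketch (not part of the exercise material) of k-nearest-neighbour-style anomaly detection on component temperatures: it is fitted on historical readings, which is what makes it an AI system rather than a hand-written rule. The data, neighbour count and distance threshold are made-up assumptions.

```python
# Illustrative sketch: KNN-style anomaly detection for component temperatures.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
normal_temps = rng.normal(loc=60.0, scale=2.0, size=(200, 1))  # historical readings in degrees C

knn = NearestNeighbors(n_neighbors=5).fit(normal_temps)

def is_anomalous(temp_c: float, threshold: float = 5.0) -> bool:
    """Flag a reading whose mean distance to its 5 nearest historical neighbours is large."""
    distances, _ = knn.kneighbors([[temp_c]])
    return float(distances.mean()) > threshold

print(is_anomalous(61.0))  # expected: False (normal operating range)
print(is_anomalous(95.0))  # expected: True  (far from anything seen before)
```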
            • 31:30 - 32:30 30 seconds left
            • 32:30 - 33:00 all right so I think we got some clear winners for the different examples maybe let's go with the use case number one so looks like a lot of people believe it's
            • 33:00 - 33:30 only an AI system. Maybe anybody from these 13 votes can just explain why you think it's just an AI system? We have 40 people, so just one of the
            • 33:30 - 34:00 13 that voted please just describe us also in German in case you're not comfortable with English is also fine so please go ahead any anybody of the 13 just explain your your your answer okay I believe this is just an AI system because it is actually using an AI it learns but but it's very focused on a specific use case so it's not
            • 34:00 - 34:30 General that's why I would not distinguish it at a general purpose one yes that's also a very good argument very interesting also I think there is not enough information describing how General this a system is that's I think that's a very very valid valid answer maybe anybody from the nine who answer general purpose AI would you like to tell us um your arguments
            • 34:30 - 35:00 we now got nine people voting for general purpose AI some why do do you think it's a general purpose AI model or system maybe maybe I can answer so I thought it's a general purpose AI model because um yes it's an AI system but
            • 35:00 - 35:30 it's uh the purpose is not really specified just di diagnostic support it's such a wide range of questions and answers you could generate with the system so I think it's a general purpose AI model yes I think both of you are right so the problem is that we didn't specify too many details in in the in the exercise so I understand the first colleague argument maybe it says support on something very narrow in that case it
            • 35:30 - 36:00 would be an AI system, but as you, the last colleague, said, if it's more broad, because of course maybe you can have diagnostic support for many things, right, like in the medicine field you can have a lot of items there, in that case it would be a general-purpose AI system. So basically both of you are right, actually we are wrong because we just didn't put enough details there; it could be one or the other depending on the details of the description. But really good, very good answers.
            • 36:00 - 36:30 and the second one looks like a bit more clear so AI system seems to be the winner who would like to explain why we have 13 voters so we just need one to
            • 36:30 - 37:00 explain why it's an a system which most likely is correct oh somebody wrote they cannot unmute Stefan you can go ahead yeah I I think it's an AI system
            • 37:00 - 37:30 because of the KNN algorithm, because it's an AI algorithm and it's not a general-purpose AI system. Exactly, because you use one of the machine learning approaches, and also it's not a general-purpose AI system with that method, completely agree with that, so yeah, your answer is correct, thanks a lot Stefan. And the last one, everybody believes it's not an AI system, so just one colleague, please let us
            • 37:30 - 38:00 know why. Go ahead. Yeah, there's no learning in this case, it's just, yeah, seems to be a normal algorithm. Yeah, I like it, there is no learning basically,
            • 38:00 - 38:30 so no ML approaches, no logic- and knowledge-based approaches, you are just forwarding information, yep, very good. So hopefully now I think it's very clear. Just for the first one, our intention was to make it a GPAI, but I agree the description is not precise enough, so we should have put in more that it can serve multiple purposes inside this diagnostic support. I think both of you were correct in your answers here, but thanks a lot. And that's the first step: basically you need to understand if your system is an AI system, a general-
            • 38:30 - 39:00 purpose system or not in the third case if you are not an AI system or GPA system is your lucky day so you're not in the scope of the AI act so you can stop now the whole Workshop you can go back to work you have no obligations so that's actually easier for you now comes the Second Step so the next step is also you need to understand if your system is in the scope of the AI act or or not and there are two branches here are important so the first one is a
            • 39:00 - 39:30 provider of the AI system placing it on the market or putting it into service in the EU so if yes if you basically place it into the market or putting into service in the EU you might be basically in the scope of the ACT but you have some exceptions and then the second question is do any of the following exceptions apply I just read them very briefly so the
            • 39:30 - 40:00 system is being place in the market um or use exclusively for military defense or national security purposes if so then you not in the scope of the act because the military they have their own regulations and so they are not falling into the AI act also if you use it by public authorities in third countries within a framework of international cooperation or agreements for law enforcement within the union with one or
            • 40:00 - 40:30 more member states if you use it in this situation then you're also not in the scope of the AI act next one the AI systems and models including their output are being develop and put into service for the sole purpose of scientific research and development so that means if I train a model only to write a a paper a scientific paper or to do research without deploying it to production then you are also not in the scope of the act so in that way all the
            • 40:30 - 41:00 innovations that happen in Laboratories are not affected by the a so you can keep researching trying different um approaches creating Innovative approaches and so on ideally they are not basically they're not affected At All by the act so at least this component of innovation it's it's okay so it's not impacted next one is your research testing and develop research testing and development activity regarding a assisting or models is being conducted
            • 41:00 - 41:30 prior to being placed on the market or put into service and is a system being testing real world conditions so if you basically are researching developing but you test the system in real word condition so that means you have users like humans basically using your system then you are in the scope of the AI act but if you just keep it in your laboratory researching testing developing without basically real work conditions then you are not in the scope
            • 41:30 - 42:00 of the AI Act. The next one is the one I mentioned: if it is used by a natural person for a personal, non-professional activity, let's say I'm Alex at home, I develop this model and I use it only for myself, I am also not in the scope of the AI Act. And the next one is: the AI system is being released under free and open-source licenses, and is the AI system being placed on the market as a high-risk AI system, or a system that's prohibited, or where transparency obligations apply? So
            • 42:00 - 42:30 if any of these are true then you're in the scope, but if none of them are, sorry, then you are not in the scope of the AI Act. And if none of the above exceptions apply, then you are in the scope of the AI Act. So these are all the exceptions that you have. And there is also a second branch: is the provider of the AI system in the EU? In the first case it was yes, the provider is in the EU; the second case is no, the provider is not
            • 42:30 - 43:00 in the EU. However, we have a second question to ask: do either of these situations apply? The deployer is established in the Union, so that means if the provider is outside the EU, let's say it's in China, but the deployer, so the user of the AI system, is using it in the European Union, then you are in the scope of the AI Act regardless of whether the system was developed in America or in China. That means these companies in America
            • 43:00 - 43:30 and China also need to comply with the AI Act, so in that sense German companies and American companies basically have the same conditions if they want to sell their products in the European Union. The next one is: the output of the system is intended to be used in the Union; if yes, you're in the scope. That means you could have this model in the USA, maybe you use it in the USA, you create, I don't know, a database with all the outputs or predictions, and then you
            • 43:30 - 44:00 take this paper document or database and you use it in the EU; if that's the case you are also in the scope of the AI Act, so this workaround doesn't help you at all. And if neither of both, then you are not in the scope of the AI Act. So, any question about being in the scope of the AI Act based on the location of providers and deployers, or based on the exceptions?
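The territorial-scope branch just described can be summarised in a small sketch (not from the talk); the boolean flags are simplified assumptions standing in for the Article 2 rules and exclusions.

```python
# Illustrative sketch of the territorial-scope check described above.
def in_scope_of_ai_act(placed_on_eu_market: bool,
                       deployer_in_eu: bool,
                       output_used_in_eu: bool,
                       excluded: bool = False) -> bool:
    """excluded covers e.g. military use, pure research, or personal non-professional use."""
    if excluded:
        return False
    # Branch 1: the provider places it on the market / puts it into service in the EU.
    if placed_on_eu_market:
        return True
    # Branch 2: provider elsewhere, but the system or its output is used in the EU.
    return deployer_in_eu or output_used_in_eu

# Exercise four: QA system developed in Munich but deployed only in China.
print(in_scope_of_ai_act(placed_on_eu_market=False,
                         deployer_in_eu=False,
                         output_used_in_eu=False))  # False, i.e. not in scope
```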
            • 44:00 - 44:30 Okay, hopefully now we can make it a bit simpler with the examples. We're going to have four examples now; ideally, when you answer, please just tell us which of these exceptions apply or which part of the tree helped you to answer the question. The first use case is similar to before: the system is an LLM-powered
            • 44:30 - 45:00 chatbot registered as a medical device which provides diagnostic support to patients with physical disabilities, and please, very important, assume this is a general-purpose AI model which provides, let's say, extensive diagnostic support for different questions, for different diseases and different conditions as well. So the question would be: is the system in the scope of the AI Act? The second one: we have a car
            • 45:00 - 45:30 manufacturer that is exploring how to integrate AI-based navigation into a factory cart; is this use case in the scope of the AI Act, yes or no? The next one: the European Space Agency deploys an AI system for the purpose of studying new weather patterns as part of their investigation into climate change; is this in the scope of the AI Act or not? And the
            • 45:30 - 46:00 last one a pharmaceutical company develops an AI system for quality assurance in Munich but they deploy the system only in China so is the system in the scope of the AI act or not do you have any question about these use cases or the Tre
            • 46:00 - 46:30 okay so in case you have no questions then let's proceed to the exercise we also give you two minutes um you have to answer yes or no for each of the use cases maybe I'll give you one more minute because you have to also take a look to the tree and get familiar with it and then we will see what your answers are so see you in three minutes don't forget to press join voting
            • 46:30 - 47:00 So maybe one question for Ludgar, Stefan: you mentioned you cannot unmute; are you following the same steps? The way I do it is basically I have this option in the menu
            • 47:00 - 47:30 to mute or mute myself do you have this bar or what's the problem exactly maybe you can just describe it in that way we try to to help you please describe it in the chat and I'll try to help you if it's possible
            • 49:00 - 49:30 all right so time is up and we got a
            • 49:30 - 50:00 clear pattern in the first one and really mixed opinions in the second and third one that's very interesting and also clear pattern on the last one okay
            • 50:00 - 50:30 so let's go one by one so 20 people agreed that the first one it's in the scope of the a who would like to tell us why so your answer is correct by the way so but just just tell us why you think it's it's in the scope
            • 50:30 - 51:00 anybody it's used for personal data sensitive
            • 51:00 - 51:30 data of patients exactly exactly and also it's as you said basically it's already being used in production with in real world conditions so that's why it it is and we didn't write it is deployed in the EU by the way that's something which should have written there but yes assume that this is deploy in Germany and being used in Germany so yeah your answer is correct correct because it's being used with real people already so very good thanks
            • 51:30 - 52:00 a lot. Then we have use case two: a car manufacturer is exploring how to integrate AI-based navigation into a factory cart. Here the keyword is just exploring. Who would like to answer this question, anybody from the 22 people? So, I think it's because of Article 2, paragraph 8, because it's research and testing and development
            • 52:00 - 52:30 activity and it's not really tested in real world conditions so that's why I think it's not in scope MH technically I would um add up the same paragraph but I thought about um when they are exploring it should be based on the real world because I don't assume that you can use data without exploring for example something like camera on the car so I
            • 52:30 - 53:00 thought this is why it's being part of the AI Act. Yeah, so, good observation, we typically get this question about the definition of real-world conditions: it's not about the training part, so normally of course you have to train with real-world data that was somehow collected in the past, but it's more about whether you are deploying it and testing the deployment with real users, real people, real humans, that's what they
            • 53:00 - 53:30 meant by real-world conditions in the AI Act. So in that sense it's what the first colleague said: you're not testing it in real-world conditions because you're just exploring the use case, right, and because of that it's not in the scope of the AI Act. But that was a very good observation, it's a typical question we always get, so yeah, thanks a lot for sharing it. So the answer is no, basically. What about the next one, the European Space
            • 53:30 - 54:00 Agency deploys an system for the purpose of studying new weather patterns as part of their investigation we have here 5050 anybody who wants to explain their answer okay I think it's not in scope because it's developed only for scientific research and development exactly exactly 100% so I would also say it's not in the scope of the ACT you're right thanks a lot very good very good I
            • 54:00 - 54:30 think you're learning actually the method I'm very happy to to hear that that's great and the last one I think is the easiest one so we have a pharmaceutical company developing an a system for QA in Munich and we deploy the system only in China are we in the scope yes or no I think it's not in the scope because it's not based in Europe or yes but only China exactly the deployer is in China
            • 54:30 - 55:00 outside the EU, that's why we are not in the scope. Everybody was super confused at the beginning, like why, even though it's developed here in Munich, is it not in the scope, and that's basically what they say in the AI Act: the more important one is the deployer. Is the deployer in the EU? If the deployer is in China, outside the EU, then you're not in the scope of the AI Act. But thank you very much, that was a really great exercise, the answers were correct, that's very great to hear. Any other question about step
            • 55:00 - 55:30 two, being in the scope of the AI Act? So I think now, with these two steps, if you know whether your AI system is actually AI, GPAI or not AI, and also if you know whether you're in the scope of the AI Act or not, you can already discard a large amount of use cases from the AI Act only by knowing these two steps. If you have a lot of use cases belonging to these conditions that we described
            • 55:30 - 56:00 already, you will know which of the use cases you have to analyze further and which ones not. And the idea why we put this at the beginning is because, instead of doing all the steps for every use case, which is very time consuming, here we can already filter out many use cases in step one and also in step two. So I think now you have a good starting point to know when to evaluate the risk class of the use cases. Perfect, so, yep, do you think you want
            • 56:00 - 56:30 two breaks today or only one longer break, what would you prefer? So we have two options: option one, one break every hour, so basically we have two breaks, and option two, one break in the middle, a bit longer. I'm very happy with both options, maybe react. So somebody wrote two breaks, two short breaks please; okay, perfect, then let's do two breaks. So then let's
            • 56:30 - 57:00 take a break of 10 minutes and see you back at 10:10 to continue with steps three and four, see you in 10 minutes
            • 65:00 - 65:30 all right we can slowly come back please
            • 65:30 - 66:00 react when you are back so that we know we can continue okay so before continuing we have also
            • 66:00 - 66:30 another question in your Mentimeter survey, so please go again to menti.com and use this code. The next question is, based on the risk class definitions we saw at the beginning, how likely is it that your team or your company has at least one AI system that is prohibited, high risk, limited risk, or low risk? I'll give you one minute just to
            • 66:30 - 67:00 vote um then we can continue with the step three for today
            • 67:00 - 67:30 so we actually can see a pyramid distribution looks fairly accurate so far which is good um nevertheless at the end of today of course we also will ask this question just to see if the pyramid still is there or not anymore so we have 20 more seconds to vote and then we
            • 67:30 - 68:00 continue all right so then we can continue so I'm sharing my screen just let me know in case it's not working because sometimes it happened the past so now we're going to go to the step number three so because we know we are in the scope of the act because we have
            • 68:00 - 68:30 an AI system or a general-purpose AI system, and step three is to identify if our system is actually prohibited, because if it is, then we have to shut it down by February 2025. Here are the details of step three. The question is: do any of the following techniques or practices describe your AI system or its intended purpose? So basically, if your AI
            • 68:30 - 69:00 system deploys subliminal techniques or purposely manipulative or deceptive techniques, then you are prohibited. Subliminal techniques, I'm not sure if you're familiar with them, but they are techniques that you are not able to identify consciously, for example an audio with a frequency that's so low that the human ear can't consciously hear it, but maybe your subconscious can hear it without you realizing it. The second one: an AI system that exploits the
            • 69:00 - 69:30 vulnerabilities of a person or a group for example kids or elderly people or people with a bad emotional state and the next one a systems use for social scoring so basically giving you a score based on your behavior in some areas and using this against your advantage in other areas the next one the a system makes risk assessment of a natural person based just on the profile
            • 69:30 - 70:00 information about you. The next one: the system creates or expands facial recognition databases through the mass scraping of internet or CCTV data. Also, if the AI system infers the emotional state of a person in the workplace or educational institutions, it is also prohibited, and if the AI system uses biometric categorization then it should also be prohibited. And finally, if the AI system
            • 70:00 - 70:30 uses real time remote biometric identification in public spaces you should also be a prohibited um system so in case you want more details we have also this document with more insights from the recitals of the AI act um but basically what we did is we just summarize that document into these trees so you can see here for example we have the first one uh subliminal techniques the a system deploys subliminal
            • 70:30 - 71:00 techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the intention or effect of distorting a person's behavior in a manner that causes or is likely to cause significant harm. As a follow-up question: is the system being used for lawful practices in the context of medical treatments, because you can also use this for medical treatments; if you use it for
            • 71:00 - 71:30 medical treatments it's not prohibited, but otherwise subliminal techniques are completely prohibited in the EU. Next one, exploiting vulnerabilities: if the system exploits vulnerabilities of a person with the intention or effect of distorting their behavior in a way that causes harm, then you have to ask the second question: is the AI system being used for lawful
            • 71:30 - 72:00 practices in the context of medical treatment? The same question as before: in case you use this for medical treatment then you're not prohibited, but otherwise, when you exploit vulnerabilities, you are prohibited. Of course, let's say you are in a hospital for elderly people, maybe these techniques are helpful for medical treatments, but if you are beyond medical treatment, of course, if you want to exploit a kid with a toy to
            • 72:00 - 72:30 buy something, for example, then that's definitely a prohibited system; or if somebody is in a bad state mentally and you want to exploit this person with targeted advertisement to lose weight, for example, that's also something that is not allowed in the new regulation. The next topic is social scoring: is the system used for the evaluation or classification of natural persons
            • 72:30 - 73:00 um over a certain period of time based on their social behavior or known inferred or predicted personal characteristics you have to ask a second question does a social score leads to either or both of the following detrimental or unfavorable treatment of a natural person in Social context that are unrelated to the context where the data was collected so if you have data collected I don't know you are going to some places in in the streets and they
            • 73:00 - 73:30 kind of assume what kind of personality you have and apply this in another context, like determining the price of your insurance fee, for example, then of course this is not allowed and this system is prohibited. Also, if you have a detrimental or unfavorable treatment of a certain natural person that is unjustified or disproportionate to the social behavior
            • 73:30 - 74:00 or its gravity, that is also prohibited; and if it's none of the above then it's not prohibited, that's very important, you have an exception to escape it. Next one, risk assessment: is the system used by law enforcement to make risk assessments of people, to assess the possibility of committing or re-committing a crime based solely on profiling information of a natural person? If that's the case you have to ask the second question: is the AI system used to support the human assessment of
            • 74:00 - 74:30 the involvement of a person? So if you are just supporting a human, let's say a police officer, and in the end this person will make the final decision, then you are not prohibited; but if there is no human in the loop then it's prohibited, because the system can produce false positives and you can basically misclassify innocent people based on, potentially, ethnicity, gender or age
            • 74:30 - 75:00 information. Next one, facial recognition databases: the AI system creates or expands facial recognition databases through the untargeted scraping of facial images from the internet, that's also forbidden. Also, if the system wants to recognize emotions of a person in the workplace or at universities or educational institutions, for example how engaged this person is, how well they are working, how diligent this person is, that's prohibited in the workplace and also at
            • 75:00 - 75:30 the universities um in that case you have to ask the next question is the AI system intended to be put in place or on the market for medical safety reasons if that's the case then you escape it so you're not prohibited but if it's not for medical or safety reasons then it should be prohibited um we have two more the next one is biometrical categorization so if the AI system uses biometric categorization for this specific purpose
            • 75:30 - 76:00 of categorizing People based on their biometric data to infer the race political opinions religious philosophical beliefs sex life or sexual orientation then you might be prohibited but also you have some exceptions if this biometric categorization system is just a supporting feature so it's not the main component it's just a ancillary feature intrinsically linked
            • 75:30 - 76:00 to another commercial service, then you're not prohibited, but you might most likely be high risk, that's something we will see in step four; so you are not prohibited, but as it's very dangerous you also fall into a high-risk scenario. Also, if the AI system is being used for labeling or filtering biometric datasets, such as images based on biometric data, or the categorization of biometric data, then
            • 76:30 - 77:00 you are not prohibited and if none of the both then you are a prohibited an AI system and the last one is realtime biometric identification so if the AI system uses realtime remote biometric identification in public spaces for the purpose of Law enforement and is the system is used for one of the following purposes below these are deceptions you on fall into prohibit it otherwise you are prohibiting and the exceptions are the targeted surch of victims of trafficking
            • 77:00 - 77:30 in human beings sexual exploitation of course these are exceptions they General prohibited or the prevention of an imminent threat for the life or physical safety of people also or a terrorist attack you are also not prohibited or localization of a person suspected of committing a crime or criminal offense then you're also escaping and the prohibited um situation for real time and if don't vote then you are a not
            • 77:30 - 78:00 prohibited system in general. So, any questions about these eight cases of prohibited systems? Hopefully none of your companies is developing a system like this, but in case you are, please make sure to shut it down before February 2025 so that you don't get really big fines next year. Now we have the exercises. Use case one is the chatbot that we
            • 78:00 - 78:30 always have, the same example as always. Use case two is an HR company that uses a candidate's video and voice data to determine their suitability for an open job role; among other things, the system infers the gender of the candidate and uses it as input for its evaluation. And the next case is a children's toy with an embedded AI-enabled voice assistant that encourages the
            • 78:30 - 79:00 user to make additional purchases from the company website. I will give you, as usual, two minutes and three votes, so that you tell us whether each one is a prohibited system or not. See you in two minutes.
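For anyone repeating this exercise outside the live session, the eight prohibited-practice questions can be captured as a simple screening checklist. The sketch below is a hypothetical, heavily simplified self-check: the `UseCase` fields and the function are invented for illustration, several sub-exceptions (such as ancillary biometric features or lawful labeling of biometric data sets) are collapsed or omitted, and nothing here is legal advice.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Hypothetical self-assessment answers for one AI use case."""
    uses_subliminal_or_manipulative_techniques: bool = False
    exploits_vulnerabilities: bool = False            # e.g. age, disability, social situation
    social_scoring_with_detrimental_treatment: bool = False
    crime_risk_scoring_solely_on_profiling: bool = False
    human_makes_final_assessment: bool = False        # human-in-the-loop exception
    untargeted_facial_image_scraping: bool = False
    emotion_recognition_at_work_or_school: bool = False
    medical_or_safety_purpose: bool = False           # exception for emotion recognition
    biometric_categorisation_of_sensitive_traits: bool = False
    realtime_remote_biometric_id_for_law_enforcement: bool = False
    falls_under_narrow_law_enforcement_exception: bool = False

def is_prohibited(uc: UseCase) -> bool:
    """Rough screening against the eight prohibited-practice categories."""
    if uc.uses_subliminal_or_manipulative_techniques:
        return True
    if uc.exploits_vulnerabilities:
        return True
    if uc.social_scoring_with_detrimental_treatment:
        return True
    if uc.crime_risk_scoring_solely_on_profiling and not uc.human_makes_final_assessment:
        return True
    if uc.untargeted_facial_image_scraping:
        return True
    if uc.emotion_recognition_at_work_or_school and not uc.medical_or_safety_purpose:
        return True
    if uc.biometric_categorisation_of_sensitive_traits:
        return True
    if (uc.realtime_remote_biometric_id_for_law_enforcement
            and not uc.falls_under_narrow_law_enforcement_exception):
        return True
    return False

# Exercise use case three: a children's toy nudging kids to buy more from the website.
toy = UseCase(exploits_vulnerabilities=True)
print(is_prohibited(toy))  # True -> must be off the market before February 2025
```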
            • 80:30 - 81:00 Okay, so the time is up and we can see
            • 81:00 - 81:30 clear patterns in all the questions, that's really good. So let's go with use case one: who wants to tell us why it's not prohibited?
            • 81:30 - 82:00 It's a medical application that benefits health and human life. Exactly, and it's not in the list of the eight prohibited systems; absolutely, that's right, thanks a lot. Use case two: everybody believes it is prohibited. Who wants to tell
            • 82:00 - 82:30 us why? Is it a kind of social profiling? So if the voice and video data was collected,
            • 82:30 - 83:00 let's say, in a different situation (I don't know, you had some phone calls for a different purpose), then yes. So let's say you had a phone call with customer service, but now they're using that information to evaluate you for a job; then you're using data from one context in another one, against that person, so yes, in that case it's
            • 83:00 - 83:30 prohibited. Anybody else who wants to add something? Okay, so if that's the case, if you use data from a different situation, context or application and now you are
            • 83:30 - 84:00 basically treating a person detrimentally in another context, like granting a credit or applying for jobs, then that's a prohibited system. Who wants to tell us about the last one? A children's toy has embedded AI and encourages children to buy new things from the website; that's an easy
            • 84:00 - 84:30 one, I have the answer on my screen. Who wants to tell us why? Okay, I'm going to put it up again. It's definitely subliminal messages, messages to exploit vulnerabilities. Exactly, exactly, so yeah,
            • 84:30 - 85:00 I would have thought about the second one, but depending on how they do it, it could be a subliminal technique as well. Basically you're leading the child on, exploiting the vulnerabilities of a person with the intention of influencing their behavior, and they could buy, for example, something bad. So that's definitely number two, exploitation of vulnerabilities. Very cool, that's great. So maybe just raise your hand if you think you have in your
            • 85:00 - 85:30 company, or may have, a prohibited use case; please raise your hand and maybe we can discuss that use case together. I'll give you maybe 10 seconds. If nobody raises a hand it means nobody has a prohibited use case, which is good. So in case you believe you might have one, just raise your hand and we can discuss it together. Okay, that looks good. And then basically,
            • 85:30 - 86:00 in step three it's very important to double-check whether you fall into these situations; if not, then all good, and if you do, don't forget you need to shut it down by February 2025. Perfect. Then the next step is to determine whether your AI system is actually a high-risk system, and for that the AI Act basically has two cases. Case number one we name
            • 86:00 - 86:30 Annex I, because it's defined in Annex I of the AI Act. Basically, if your system is a safety component of a product, meaning it's an AI feature that is used for the safety of a machine, for example, or the safety of a toy, or whatever the product is, so if your system is a safety component of a product, or is a product, regulated by the legislation listed in Annex I, and on
            • 86:30 - 87:00 top of that (I will explain the Annex I legislation in one second) your system has to undergo a third-party conformity assessment under that legislation, then you are high risk; otherwise you are not high risk. So what I want to say is: if your system is a product, or a safety component of a product, that belongs to any of these categories, basically a machine, a toy, an elevator, a
            • 87:00 - 87:30 recreational craft like a personal watercraft or water motorbike (I don't know the exact name), radio equipment, pressure equipment, cableways or gondola installations, personal protective equipment, appliances burning gas for cooking or heating, medical devices or in-vitro diagnostic devices, plus civil aviation security products, two-, three- or four-
            • 87:30 - 88:00 wheel vehicles, agricultural and forestry vehicles or tractors, marine equipment, railway systems, motor vehicles and their trailers, or civil aviation; then, and I have the pictures for all of these, if your system basically matches one of these pictures and, under the legislation governing it, it has to go through a conformity assessment, then your system
            • 88:00 - 88:30 is high risk; otherwise you are not high risk according to Annex I. Any questions about it? And here's the definition from Article 3: a safety component of a product or system means a component of a product or of a system which fulfills a safety function for that product or system, or the failure or malfunctioning of which
            • 88:30 - 89:00 endangers the health and safety of persons or property. So if that's the case, you fall into one of these domains and you have a product with AI, or a safety component of the product with AI, then you are a high-risk system under Annex I.
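Before moving on to the second case, the Annex I test just described boils down to three questions. A minimal sketch, assuming hypothetical flags and a paraphrased, non-exhaustive category list:

```python
# Paraphrase of the Annex I product legislation mentioned above (not exhaustive).
ANNEX_I_LEGISLATION = {
    "machinery", "toys", "lifts", "recreational craft", "radio equipment",
    "pressure equipment", "cableways", "personal protective equipment",
    "gas appliances", "medical devices", "in-vitro diagnostic devices",
    "civil aviation", "two/three/four-wheel vehicles", "agricultural vehicles",
    "marine equipment", "rail systems", "motor vehicles and trailers",
}

def is_high_risk_annex_i(product_category: str,
                         is_safety_component_or_product: bool,
                         needs_third_party_conformity_assessment: bool) -> bool:
    """Case 1: the AI is a product, or a safety component of a product,
    covered by Annex I legislation and subject to third-party conformity assessment."""
    return (product_category in ANNEX_I_LEGISLATION
            and is_safety_component_or_product
            and needs_third_party_conformity_assessment)

# Exercise use case one: an AI feature inside a medical device.
print(is_high_risk_annex_i("medical devices", True, True))  # True -> high risk
```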
            • 89:00 - 89:30 The second case is Annex III: if any of the following practices or domains describe your AI system, you are also high risk. That means use cases involving biometric identification, critical infrastructure, education, employment, essential private and public services, law enforcement, migration, asylum and border control, and the administration of justice and democratic processes. So if you fall into any of these eight categories, then you're also a high-risk system, but as you can see, in each of these boxes we have a lot of exceptions
            • 89:30 - 90:00 in green; in the last layer there are the same exceptions for all of them, which I will mention in a minute. So there are also a lot of exceptions that are important for you to know. Maybe let's go: there are eight, and that's just too many. I don't think we need to go through all of them, but I will show you one or two so you understand how this works. In the case of biometric or biometrics-based systems, you need to
            • 90:00 - 90:30 ask a question: is the AI system intended to be used for any of these purposes, for example remote biometric identification? If yes, you also need to ask a second question: is the AI system intended to be used only for verification of identity? If it's only used for authentication, basically, then you're not a high-risk system; otherwise you might be, but then you need to ask the exception questions: do any of the
            • 90:30 - 91:00 following exceptions apply? There is a list of exceptions, and these are the same for all eight boxes in Annex III; all eight boxes have the same exceptions at the end. The first exception is: is the AI system intended to perform a narrow procedural task? That means the AI system is intended to be used only for a very small component of the whole decision-making process. Here is the challenge of the AI Act, and the
            • 91:00 - 91:30 AI Office has mentioned this many times: there are no real guidelines at the moment on where the limit between a narrow procedural task and a non-narrow procedural task lies; there is no clear definition, and the AI Office is going to provide more guidance in the next months with examples of what that means exactly, because for some people maybe the prediction of one value is already narrowly procedural, but for other people, depending on the context of the
            • 91:30 - 92:00 use case, that is not enough. So this one still has a lot of uncertainty, and the AI Office is going to provide more information in the next few months. The next one: the AI system is intended to improve the result of a previously completed human activity. Think about Grammarly: I'm writing a text and Grammarly just asks me if I want some AI help with the writing; Grammarly is not really changing my meaning, it's just reviewing grammar or
            • 92:00 - 92:30 spelling. Things like that are typically also accepted by the AI Act, so even though you fall into these eight cases, you can still escape through these exceptions. The next one: the AI system is intended to detect decision-making patterns, or deviations from prior decision-making patterns, and it's not meant to replace or influence the previously completed human assessment without human review. That means, if
            • 92:30 - 93:00 in a process there are some patterns and the only goal of the system is to detect these patterns, or changes in the patterns, that's okay and not in scope, also because you have a human in the loop who is going to review it; but if you remove this human, then you fall into the AI Act, or into high risk in this case. And the last one: the AI system is intended to perform a preparatory task for an assessment relevant to the purpose of the use case, like, for
            • 93:00 - 93:30 example, data preparation or some preparatory task that a human is then going to assess; then you're also not in scope. If none of the above applies, then you have a high-risk use case. And all of these exceptions, by the way, as I said before: if you look at all eight clusters of cases you will find the same exceptions, we just repeat them in each of the boxes so that you know. Any questions about this?
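Putting the Annex III areas and the common exceptions together, a rough self-check could look like the sketch below. The area names, boolean flags and the special handling of pure identity verification are simplifications for illustration, not a faithful encoding of the legal text:

```python
ANNEX_III_AREAS = {
    "biometrics", "critical infrastructure", "education", "employment",
    "essential private and public services", "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

def is_high_risk_annex_iii(area: str,
                           used_only_for_identity_verification: bool = False,
                           narrow_procedural_task: bool = False,
                           improves_completed_human_activity: bool = False,
                           only_detects_decision_patterns: bool = False,
                           preparatory_task_only: bool = False) -> bool:
    """Case 2: the use case sits in an Annex III area, unless an exception applies."""
    if area not in ANNEX_III_AREAS:
        return False
    if area == "biometrics" and used_only_for_identity_verification:
        return False  # pure authentication/verification is carved out
    exception_applies = (narrow_procedural_task
                         or improves_completed_human_activity
                         or only_detects_decision_patterns
                         or preparatory_task_only)
    return not exception_applies

# A writing aid that only polishes a recruiter's finished text: Annex III area,
# but it merely improves a previously completed human activity -> not high risk.
print(is_high_risk_annex_iii("employment", improves_completed_human_activity=True))  # False

# A face scan used purely to authenticate employees at a door -> verification carve-out.
print(is_high_risk_annex_iii("biometrics", used_only_for_identity_verification=True))  # False
```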
            • 93:30 - 94:00 Okay, so I'll try not to go through all the trees because this would take a long time, but I will go briefly through all of them at a high level. The second one is management and operation of critical infrastructure, for example the supply of water, electricity, gas, heating and so on. If you're using AI to manage
            • 94:00 - 94:30 this, or to operate these different parts of the whole system, then you are high risk. But if the AI component is only intended for cybersecurity purposes, then you're not high risk; that's also important to know. The third case is education and vocational training. If you're using the system to evaluate applicants to a university or to educational or vocational training at any level, you are
            • 94:30 - 95:00 basically high risk (of course, you always have to check the exceptions). Evaluating learning outcomes or steering the learning process of natural persons is also high risk, as is assessing the appropriate level of education that an individual will receive or be able to access in the context of their education or vocational training; that is also high risk, because you can discriminate against people and they might receive less or more education, so this is very
            • 95:00 - 95:30 dangerous. Also, monitoring and detecting prohibited behavior of students during tests, for example when they are copying from other students, is very sensitive, so you also fall into a high-risk activity that has to be very well monitored, with high-quality AI. If none of the above, then you are not high risk. Next, employment, worker management and access to self-employment: for example, if the recruitment or selection of natural
            • 95:30 - 96:00 persons for jobs uses targeted advertisement of jobs, or screening of candidates with AI, or filtering of applications, that is also high risk. For example, you see here screenshots: maybe you get some targeted job advertisement based on your profile; this is also considered a high-risk use case, so Monster or LinkedIn Jobs have to be careful and comply with the AI Act. Also, making decisions affecting
            • 96:00 - 96:30 work-related relationships, such as promoting or firing people at work, is of course a high-risk use case, so you have to be very careful and comply with all the obligations. Another one: in case you provide access to essential private services or public services and benefits, for example granting a credit, or setting the price of an insurance, life insurance or health insurance, and so on,
            • 96:30 - 97:00 you have to be careful, that's a high-risk use case as well. Or if you're using it for law enforcement, so it's intended to be used by or on behalf of law enforcement authorities, then you have to be very careful: for example, polygraphs or similar tools replacing officers are high-risk use cases, as are tools that assess the risk of a natural person becoming a victim of criminal offenses. All of those are high-risk situations, so you have to be very
            • 97:00 - 97:30 careful in the law enforcement case. Also, evaluating the reliability of evidence in an investigation to prosecute criminals is high risk, as is the profiling of natural persons in the course of the detection, investigation and prosecution of criminal offenses. So all of these cases are high risk, but never forget to check the exceptions, which of course are the same for all the use cases. Then, in the case of migration,
            • 97:30 - 98:00 asylum and border control management: if the AI system is intended to be used by or on behalf of public bodies, for example at the airports where you replace the officer with a machine doing passport control, that also falls here. You also have polygraphs for asylum seekers, migrants or visa applicants. Also, if you want to assess a risk, including a security
            • 98:00 - 98:30 risk, a risk of irregular immigration, or a health risk posed by a natural person who is trying to enter the EU, this also applies; or if you want to assist competent public authorities with the examination of applications for visas or residence permits, you also qualify as high risk; and if you want to detect, recognize or identify natural persons, with the exception of the verification of travel documents, then you are also at high
            • 98:30 - 99:00 risk. And the last case is the administration of justice and democratic processes. For example, if the AI system is used by a judge, or on their behalf, to assist a judicial authority in researching and interpreting facts and the law and in applying the law, then of course that is very dangerous and should be classified as a high-risk use case; or if
            • 99:00 - 99:30 the system is intended to be used for influencing the outcome of an election or referendum, you should also fall into high risk. And you see here just a judge using AI to come up with a verdict, for example; that's a very dangerous activity, so of course it's classified as high risk. Any questions about this tree for classifying high-risk use
            • 99:30 - 100:00 cases? So, to sum it up, there are two cases: case one is named Annex I, case two is named Annex III. Under Annex I, if your AI system is a safety component of a product, or is a product, that belongs to any of these product categories (machines, toys, lifts, etc.) and you have to go through a conformity assessment, then you are a high-risk AI system. And under the second one, if your use
            • 100:00 - 100:30 case belongs to these eight cases and you don't fall into the exceptions, then you are also a high-risk AI system. So with this we can now come to our exercise. As always you have two minutes; you just need to vote whether each of these use cases is high risk or not. I will give you two minutes and then we can discuss our answers, so see you in two minutes.
            • 102:00 - 102:30 All right, so time is up and we
            • 102:30 - 103:00 see that you have clear answers on the first one, but the second and third ones are a bit tricky, so let's try to clarify all of them. Who wants to explain use case one: why do you believe it is a high-risk AI system?
            • 103:00 - 103:30 So, think about Annex I: this is a medical device. If you go into Annex I you see medical devices and in-vitro diagnostic medical devices; both of them are in Annex I. That means it is a product that is on the list, and in that sense, yes, the medical
            • 103:30 - 104:00 device is a high-risk use case, because it is on the list presented in Annex I. I think most of you voted that answer, and that's the right answer. However, on use case two I see a lot of doubt, so maybe somebody wants to tell us why they think it is or isn't a high-risk AI system? I answered that it is a high-risk system, because it's technically management of critical
            • 104:00 - 104:30 infrastructure, and the assumption was that even if the data being processed is technically not critical itself, the knowledge about the data would be critical, which is why I assumed it is technically high risk: if someone knows, for example, in which area there's a leakage and they want to do damage, they would have better knowledge to do so. Absolutely, 100%. And it's acting as a safety component: this dashboard
            • 104:30 - 105:00 you're using to monitor leakage is being used as a safety component to manage the operations of this critical infrastructure, so I agree with your answer. This one is a high-risk use case from our perspective as well. So, anybody who answered no and wants to share why not,
            • 105:00 - 105:30 or has maybe now changed their mind? I think your explanation was really clear and really helpful. So yeah, basically it is a high-risk case, thank you very much. And the last one: the AI system is used to scan the faces of employees at an aviation company to authenticate their entry into sensitive research and development
            • 105:30 - 106:00 areas. I think this is a tricky one, but basically it falls into the first category: the AI system is intended to be used for remote biometric
            • 106:00 - 106:30 identification. However, if the system is intended to be used only to verify that a person is who they claim to be, so basically authentication, then you fall into the exception and you are not a high-risk system; otherwise you would be. So the right answer is: you are not high risk, because you fall into the authentication exception. Hopefully that's clear, and that's exactly the exercise you have to follow for all of your use cases: you
            • 106:30 - 107:00 should go to the trees of Annex I and Annex III, identify whether your use cases belong there or fall into the exceptions, and in that way you can understand whether each one is high risk or not. Very cool. So, any questions about the high-risk tree for today? This was the longest part of the whole training, and the most important part, of course, as well. Any
            • 107:00 - 107:30 questions? Okay, then we can go now to step number five: it's about determining whether you are the provider of an AI system... sorry, no, I actually skipped one step, that was six. Step five is to understand whether transparency obligations apply to you. Basically you just have to ask these
            • 107:30 - 108:00 questions in the tree: do any of these categories describe your AI system? For example, the system is directly interacting with natural persons. If that's the case but it's really obvious that it is an AI system, then no additional transparency obligations are necessary. But if it's not obvious, like today, where you see a lot of Instagram posts or other social media posts where you don't know if it's a human or a bot, and on Twitter you have
            • 108:00 - 108:30 thousands, even millions, of messages written by bots, if it's not obvious, then you have to ask more questions: is the system authorized by law to detect, prevent, investigate and prosecute criminal offenses? If it's not, then additional transparency obligations apply; otherwise you have to ask another question: is this system available for the public to report a criminal offense? If it's not, then no additional obligations; otherwise, yes, you have
            • 108:30 - 109:00 additional obligations. So you see the trees can get really long and really nested, but that's basically the simple way to read the AI Act; this is pages of articles describing whether transparency obligations apply or not. The easy rule here is: if it's not clear to the user that they are not talking to a natural person, then you basically have to provide transparency. The next one: the AI system, including a general-purpose AI system, generates synthetic
            • 109:00 - 109:30 audio, image, video or text content. If that's the case you have to ask: is the AI system performing an assistive function for standard editing, or does it not substantially alter the input data provided by the deployer? Then no obligations apply. The other question that also removes obligations: is the system authorized by law to detect, prevent, investigate and prosecute criminal offenses? Then also no obligations apply; otherwise you have transparency
            • 109:30 - 110:00 obligations. Every time you generate audio, images, videos or deep fakes it's very important to identify this, and you have some obligations, but these obligations are typically very mild, like watermarking or disclosing the data used for the model, so they are very simple. Many people were expecting the Act to be a bit harsher with deep fakes and things like that, but the obligations are actually very mild, because other
            • 110:00 - 110:30 laws apply anyway: if you're making a deep fake to steal money from somebody by faking the voice of their mother, of course other criminal laws apply, just not the AI Act itself. Next one: the AI system is intended to be used for biometric categorization. Is the AI system permitted by law to detect, prevent and investigate criminal offenses? If yes, then no obligations; otherwise, yes, you have obligations, of course. And if the AI system is intended to
            • 110:30 - 111:00 be used for emotion recognition, then: is the AI system permitted by law to detect, prevent and investigate criminal offenses (basically the same question)? If the law allows it, then no obligations; otherwise, yes, you have some transparency obligations. And then we have the next one: the AI system generates or manipulates text, audio or visual content that would falsely appear to be authentic or truthful and which features
            • 111:00 - 111:30 depictions of people; this one is more about deep fakes, people appearing to say or do things they didn't say or do. Then you also have some exceptions: by default you have transparency obligations, except if you generate or manipulate the text, audio or video in a way authorized by law, or if the generated content forms part of an evidently artistic, creative, satirical, fictional or analogous work
            • 111:30 - 112:00 or program. So the Act is not punishing deep fakes that are, say, satirical; of course there is a limit, and if you use them, as I said before, to ask for money or other things, then other laws come into play, but the AI Act itself is not giving you more obligations in that sense, and that's important to understand. If none of the above, then you have transparency
            • 112:00 - 112:30 obligations. Next one: the AI system generates or manipulates text which is published with the purpose of informing the public on matters of public interest, let's say publishing news, for example. Then you also have some exceptions: if you are authorized by law, or if the AI-generated content has undergone a process of human review or editorial
            • 112:30 - 113:00 control where a natural or legal person holds editorial responsibility, then you don't have these transparency obligations; if neither applies, then you do have transparency obligations. So that's basically the chapter, or step five, about when transparency obligations apply. For us it's also not too critical, because these obligations are really small, so in case you fall here it's very easy to comply. I mean, it's important, of
            • 113:00 - 113:30 course, it's an obligation, but it's an easy one to fulfill. That being said, now we come to the next exercise. We'll give you the same time as before so you can vote on whether transparency obligations apply to each of these use cases. See you in two minutes.
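While the votes come in, the transparency tree just described can be compressed into a rough helper like the one below. It is a deliberately simplified sketch: the law-enforcement carve-outs are collapsed into a single flag, the public-interest-text branch with editorial responsibility is left out, and all names are invented for illustration.

```python
def transparency_obligations_apply(interacts_with_people: bool = False,
                                   obviously_an_ai_system: bool = False,
                                   generates_synthetic_content: bool = False,
                                   assistive_editing_only: bool = False,
                                   emotion_recognition: bool = False,
                                   biometric_categorisation: bool = False,
                                   deep_fake_content: bool = False,
                                   artistic_or_satirical: bool = False,
                                   law_enforcement_authorised: bool = False) -> bool:
    """Simplified reading of the transparency-obligation branches discussed above."""
    if law_enforcement_authorised:
        return False  # the law-enforcement carve-outs, collapsed into one flag
    if interacts_with_people and not obviously_an_ai_system:
        return True   # people must be told they are talking to an AI system
    if generates_synthetic_content and not assistive_editing_only:
        return True   # mark/watermark synthetic audio, image, video or text
    if emotion_recognition or biometric_categorisation:
        return True   # inform the people exposed to the system
    if deep_fake_content and not artistic_or_satirical:
        return True   # disclose that the content is artificially generated
    return False

# Exercise use case two: an AI-generated video of a politician singing a pop song.
print(transparency_obligations_apply(generates_synthetic_content=True,
                                     deep_fake_content=True))  # True -> watermark it
```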
            • 114:30 - 115:00 Okay, so I think something went wrong
            • 115:00 - 115:30 with this, yeah, the
            • 115:30 - 116:00 buttons were moved, but I assume the order is the same. We can see the first two use cases with clear answers and the last one at 50/50. Okay, so let's go to the first use case: who wants to tell us why transparency obligations
            • 116:00 - 116:30 apply? So you have this MDR device with users; that means the AI system directly interacts with natural persons, yes. Is it obvious to the person that they are interacting with an AI system? Probably not, because you just have a device and you don't know whether it's plain software, AI, or rule-based and so on,
            • 116:30 - 117:00 so it's not obvious. Otherwise, if it's very clear it's an AI system, then you have no transparency obligation; I mean, it's up to you, you can make it very clear that this chatbot is built with AI, LLMs and so on, and then it's easier. But in case it's not obvious, then: is the system authorized by law? In this case we don't think it's authorized by law to do this, so yes, additional transparency obligations apply. That is the right answer, and I
            • 117:00 - 117:30 think most of you, 13 people, voted yes; that's the correct answer. The next one is the interesting one: a news media company uses generative AI to create a video of, let's say, Donald Trump, a political candidate, singing a popular Taylor Swift song at a rock concert. Most of you think yes, transparency obligations apply. Who wants to
            • 117:30 - 118:00 tell us why? Anybody? So it's based on Article 50(2): it's generative content, it's not authorized by law, and it's not an assistive function for standard editing being used;
            • 118:00 - 118:30 we can assume that the data is substantially altered, which is why the transparency obligation applies. Yep, I agree with that answer. Anybody who disagrees or wants to tell us something else? Very simply: you have this video, so please just make sure to fulfill the transparency obligations, which means watermark it so it's clear it's AI-generated content. Very
            • 118:30 - 119:00 good. And the last one seems to be the tricky one; the first two I think were easy, the last one is 50/50: a video game company creates a game which changes the environment according to the user's emotions. Who wants to tell us about it? Again, it's Article 50(3): it's emotion recognition, and it's not something permitted by law for detecting criminal offenses, which is why the
            • 119:00 - 119:30 transparency obligation applies. Once again, 100%. So basically transparency obligations apply; yes is the right answer, very good. Well, thanks a lot. So this chapter on transparency obligations is not too complicated, and implementing it is not too hard; the obligations are very small, but you have to keep it in mind, and remember you could be high risk and at the same time have transparency obligations, so you have to fulfill all of them together. With this, I think it's time now for our next
            • 119:30 - 120:00 break. What's left is the part about the roles, and we will also show a couple of slides on your obligations depending on your risk class, but with this we will be done at approximately 12 today, so we will keep it at three hours for today and you can have lunch. In the afternoon my colleague Ronning is going to present the same workshop again to a different batch, I believe. So with this, let's have a short break of 10 minutes and let's meet again at
            • 120:00 - 120:30 11:15, see you in 10 minutes.
            • 130:00 - 130:30 All right, so we can slowly come
            • 130:30 - 131:00 back; please react when you're back so we know we can
            • 131:00 - 131:30 continue. I can see a couple of people reacting, very nice. Okay, so now we come to the last step, step number six: what is your role? We saw at the beginning that you could be the provider of an AI system when you develop the code and train the system yourself. In this case, if you look at the tree, the
            • 131:30 - 132:00 first question is: you're the provider if you develop the AI system or general-purpose AI model and you're basically putting your trademark on it; that's the definition of provider as we saw in the slides before. If you go to the second part, there is another role named the downstream provider, and maybe let's check the tree here: will you integrate a general-purpose AI model,
            • 132:00 - 132:30 think about GPT-4, into an AI system? In this case think about ChatGPT: it is a system built around the model so that users can interact with it, and the system, by the way, can be as simple as just a UI for the user. Then you need to check these conditions: if the model was released under an open-source and free license, then the company with this AI system, in this case ChatGPT, becomes a
            • 132:30 - 133:00 downstream provider of the general-purpose AI system. That means, and this is very important, when you have a model you have some model obligations, as we saw before, but if you integrate a model into a system, then the company in charge of the system has to check whether the use case falls into low risk, limited risk, high risk or prohibited. That is basically what we know as the downstream provider of a
            • 133:00 - 133:30 general-purpose AI system. And in case the model has high-impact capabilities, as we also saw before, then you are the downstream provider of a general-purpose AI system with high-impact capabilities; if none of the above, then you are just the downstream provider of a general-purpose AI system. So a downstream provider is nothing else than the company that puts the model into a system, a UI, or anything that could be valuable to the users, and the obligations they have
            • 133:30 - 134:00 are just the obligations of the risk classes: low risk, limited risk or high risk. Then the next case: if you use the AI system under your authority (remember, this is very close to the definition of the deployer of an AI system), you have to check two more things. If the use of the system occurs in the course of a personal, non-professional activity, as I said before, me just using an AI system at home, then I'm not a deployer; otherwise I am the
            • 134:00 - 134:30 deployer. But there are three exceptions: in case I put the name of my company, let's say appliedAI, on the system that I'm deploying, I also become the provider by mistake. This is something you really have to avoid: in case you're just the deployer of a system but you put the trademark of your company on it, you will have many more obligations because you become the provider. So you have to be careful with that and try to avoid
            • 134:30 - 135:00 putting your name or trademark on a high-risk AI system. Also, in case I'm the deployer of a system but I make a substantial modification to a system that has already been placed on the market, then I also become the provider of the system; that's very important. And the last one: if I modify the intended purpose of the system, let's say the provider trained the model and said "please only use it for people above 18", but then I, the deployer,
            • 135:00 - 135:30 want to use it for different people, for a different intended use, then I also become the provider of the AI system, with all the provider obligations, which are very extensive. So you have to be careful with these three modifications; that's the key message. There are other roles that are less important for you, such as the importer, the distributor and the authorized representative, which have very small obligations, but for today we only concentrate on the most important roles for you.
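A compact way to remember the role logic above, including the three traps that turn a deployer into a provider, is the following sketch. The argument names are invented for illustration; the real questions in the Act are more nuanced:

```python
def determine_role(develops_system_or_model: bool = False,
                   places_on_market_under_own_name: bool = False,
                   integrates_gpai_model_into_system: bool = False,
                   uses_system_under_own_authority: bool = False,
                   personal_non_professional_use: bool = False,
                   puts_own_trademark_on_high_risk_system: bool = False,
                   makes_substantial_modification: bool = False,
                   changes_intended_purpose: bool = False) -> str:
    """Rough mapping of the role questions discussed above."""
    if develops_system_or_model and places_on_market_under_own_name:
        return "provider"
    if integrates_gpai_model_into_system:
        return "downstream provider"
    if uses_system_under_own_authority and not personal_non_professional_use:
        # The three traps that turn a deployer into a provider:
        if (puts_own_trademark_on_high_risk_system
                or makes_substantial_modification
                or changes_intended_purpose):
            return "provider"
        return "deployer"
    return "out of scope (e.g., purely personal use)"

# Re-branding, substantially modifying, or re-purposing a purchased high-risk
# system makes you the provider, with all the provider obligations.
print(determine_role(uses_system_under_own_authority=True,
                     changes_intended_purpose=True))  # "provider"
```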
            • 135:30 - 136:00 So now I'm coming very briefly back to the slides, because we now have section four: what are the implications of each risk class? As already mentioned, if you are the provider or the deployer you will have different obligations depending on whether you are prohibited, high risk, limited risk, low risk, or have a general-purpose AI model. For that, at appliedAI we developed what we call a use case
            • 136:00 - 136:30 canvas: a document where we have the role (provider or deployer), the risk class (high risk, limited risk, low risk, GPAI model and so on) and also its obligations. For example, if you are a provider of high-risk AI systems, your obligations are data governance, transparency, record-keeping, technical documentation, risk management, accuracy, robustness and cybersecurity, and human oversight; so you see, you have a large number of obligations. But
            • 136:30 - 137:00 if you only have limited risk, then you only have watermarking for generative AI and disclosure as obligations, and, as mentioned before, if you're low risk you only have a voluntary code of conduct that you can follow or not. Also, in case you provide a general-purpose AI model: if you are open source you have very small obligations compared to being proprietary; you have two or three more obligations when you're proprietary,
            • 137:00 - 137:30 and in case you are a GPAI model with systemic risk, you have model evaluation, reporting, cybersecurity and mitigation of systemic risk on top of all the obligations of commercial or open-source models. That's very important: the EU is trying to simplify the obligations for open-source models so that people don't get stuck and keep developing open-source general-purpose AI. For a general-purpose AI system, you also need to provide the details of any modification or fine-tuning, let's say RAG
            • 137:30 - 138:00 or any fine-tuning you're doing to your models, so you need to say what data you used for the modification, keep metadata and lineage and so on, and of course comply with all the use-case obligations: low risk, limited risk or high risk. For deployers, on the other hand, the obligations are smaller: for a general-purpose AI system it's just the obligations of your risk class, for low risk there are no obligations at all, for limited risk it's
            • 138:00 - 138:30 just disclosure and consent regarding the data being used, and for high risk it's a bit more: you have to follow the provider's instructions of use (because you're the deployer, you have to follow them), do event logging, monitor incidents, do a data protection impact assessment, make sure the input data has a certain quality, carry out the fundamental rights impact assessment and have human oversight in place. But that is basically all the obligations you have to fulfill, depending on your risk class and your role according to the Act.
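As a quick reference, the obligations just listed could be kept in a small lookup table like the one below. It is a paraphrase of the slide content, not an exhaustive legal list, and the GPAI model obligations are deliberately left out of the sketch:

```python
# Paraphrase of the "use case canvas" obligations discussed above.
PROVIDER_OBLIGATIONS = {
    "high risk": ["data governance", "transparency", "record keeping",
                  "technical documentation", "risk management",
                  "accuracy, robustness and cybersecurity", "human oversight"],
    "limited risk": ["watermarking of generated content", "disclosure"],
    "low risk": ["voluntary code of conduct"],
}

DEPLOYER_OBLIGATIONS = {
    "high risk": ["follow the provider's instructions of use", "event logging",
                  "incident monitoring", "data protection impact assessment",
                  "input data quality", "fundamental rights impact assessment",
                  "human oversight"],
    "limited risk": ["disclosure and consent"],
    "low risk": [],
}

def obligations(role: str, risk_class: str) -> list[str]:
    """Look up the (paraphrased) obligations for a role and risk class."""
    table = PROVIDER_OBLIGATIONS if role == "provider" else DEPLOYER_OBLIGATIONS
    return table.get(risk_class, [])

print(obligations("deployer", "high risk"))
```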
            • 138:30 - 139:00 Any questions about this overview of obligations? Okay, in case there are no questions, maybe just another slide to make it a bit simpler. Assume you have a general-purpose AI model, let's say GPT-4;
            • 139:00 - 139:30 it is developed by a big company, so in this case with systemic risk, and this company needs to make sure it fulfills all the general-purpose AI model obligations with systemic risk. Now let's say we have a company, say appliedAI, and we want to integrate GPT-4 into a system for, I don't know, let's think about a high-risk use case, for example a medical device, the chatbot we were thinking about in use case one. Then we become the downstream provider, and that means we have
            • 139:30 - 140:00 to evaluate the risk class of this use case (in this case it's a chatbot for medical devices, for diagnosis), then evaluate whether it's prohibited, high risk, limited risk or low risk, and comply with those obligations. Of course, in this case we are creating the system, and let's say the user of the system is not actually appliedAI but another company, let's say BMW; in that sense they become the
            • 140:00 - 140:30 users, or deployers, of the system and they have to fulfill all the deployer obligations for high-risk systems. So that's basically how the value chain looks, from the general-purpose AI model provider down to the deployer; hopefully it's a bit clearer now. And with this we can jump to the last section for today: what are the implications of each risk class? I briefly touched on this, but let's go one by one. In case your system is low risk,
            • 140:30 - 141:00 as already said, you just have a voluntary code of conduct to follow, so you should align with your internal stakeholders on how to implement this code of conduct. Some companies, for example, want to develop it together with other companies to have a common code of conduct; other companies say "I don't care, it's not mandatory, so I do nothing", which is also valid. The recommendation is, even though it's not mandatory, try to have a baseline, because you never know when you will need to develop
            • 141:00 - 141:30 a high-risk AI use case, and when you have a good baseline it will be easier to implement that high-risk use case; maybe it's a competitive advantage that your competitor will have, and then you might be in a bad situation if you don't have it. So that's just our recommendation. The limited-risk case occurs when a natural person is exposed to an AI system and you just have to inform the person that they are interacting with AI: watermarking, for example, and disclosure and consent in general. The following
            • 141:30 - 142:00 options are recommended: consider UX/UI design and testing, meaning just check whether the AI is obvious or not and, based on that, provide the right information; also prepare clear concepts and instructions for the human oversight person, because normally a high-risk AI system needs a human oversight person operating the device or the system, so you need to provide clear instructions for them; and for generative AI, as I said before, establish a process to easily watermark
            • 142:00 - 142:30 videos and pictures, depending on the content type, and also check other legal requirements from other regulations like privacy, GDPR, copyright, etc. For prohibited systems, and I've said it multiple times today, stop doing it; one option is also to modify the intended purpose to convert it into a high-risk use case. In case it's very important for you to keep a prohibited system, there are ways to
            • 142:30 - 143:00 modify the description and the intended purpose and try to fit it into the step-four high-risk category, where possible; of course it's not always possible, and in that case you just have to drop the use case. But very important: make sure you don't have a deployed prohibited system by February next year. And then we have high risk: of course, as I said before, there are a lot of requirements, like technical documentation, transparency, risk management and accuracy at the use-case
            • 143:00 - 143:30 level, but also at the organizational level you have to develop a quality management system with all these policies and make sure to implement the policies in MLOps processes or tools, so that you can do data governance as efficiently as possible and get data lineage, data versioning, model versioning and all these MLOps activities in an easy way; that's basically the content of our MLOps training, which we also offer as part of the AI Act series.
            • 143:30 - 144:00 Next, of course, you need to conduct a conformity assessment after you believe you comply. There are two cases: in one, a third-party notified body has to check that you're compliant; in the other (it depends on whether you're in Annex I or Annex III, I actually forget which of the two) a self-assessment of conformity is enough to be compliant. Those are basically
            • 144:00 - 144:30 the two cases for the conformity assessment, so find out whether you can do a self-assessment or you need a notified body doing it for you. After that you affix the CE mark and register the AI system in the EU database, and then you have to monitor the use case over time to make sure it stays compliant. Also, when you are a deployer of a high-risk AI system, as you saw in the diagram, you have some
            • 144:30 - 145:00 obligations too: you have to follow the instructions of use, assign a natural person for human oversight, ensure the quality of the input data, inform the provider in case of incidents, keep logs, carry out a fundamental rights impact assessment and cooperate with the national competent authorities. The last one, as we mentioned already for GPAI models: you have some additional obligations where you have systemic risk, like model evaluation, where you have to have high
            • 145:00 - 145:30 accuracy and be very close to the state of the art, as well as risk assessment and mitigation techniques to avoid risks, reporting serious incidents and having cybersecurity protection. For the other case you need to draw up technical documentation, make information available to downstream providers, respect copyright law, publish a summary of the training data, and you might apply a code of practice or harmonized standard. And the last case is when you are a downstream provider: remember, you have a model, you
            • 145:30 - 146:00 integrate it into a system, then you are the downstream provider and your obligations are nothing else than the low-risk, limited-risk, high-risk or prohibited obligations, so make sure to check them in advance as well. Very important, of course: here you have the timeline of the AI Act. As I said before, in February, that means six months after August 2024, the rules for prohibited systems kick in, and then in August 2025 general-purpose AI models will need to be
            • 146:00 - 146:30 compliant with all the obligations we mentioned; then, counting from August 2024, there are two years for the limited-risk systems and the high-risk systems defined in Annex III to be compliant, and one more year on top for the high-risk systems defined in Annex I, the existing product safety law cases. So make sure you don't start too late; the recommendation is actually to start today, because the high-risk cases are very complex and will take a long time to develop the right QMS and technical documentation for.
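For planning purposes, the timeline can be kept as a small table of milestone dates. The dates below are the commonly cited application dates counted from entry into force in August 2024; treat them as an assumption and double-check them against the official text before relying on them:

```python
from datetime import date

# Approximate AI Act application milestones (assumed dates, verify before use).
AI_ACT_DEADLINES = {
    "prohibited practices must be stopped": date(2025, 2, 2),
    "general-purpose AI model obligations apply": date(2025, 8, 2),
    "high-risk (Annex III) and limited-risk obligations apply": date(2026, 8, 2),
    "high-risk (Annex I, existing product safety law) obligations apply": date(2027, 8, 2),
}

for milestone, deadline in sorted(AI_ACT_DEADLINES.items(), key=lambda kv: kv[1]):
    print(f"{deadline.isoformat()}  {milestone}")
```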
            • 146:30 - 147:00 So really, in case you need those use cases, start as soon as possible. Of course, next steps: we are just scratching the surface, as I said at the beginning, and we are at the applicability step, but once you understand your applicability and obligations you have to implement them in your company, so you definitely need to set up your quality management system in case you're high risk and produce the technical documentation in as
            • 147:00 - 147:30 automated a way as possible. If you want to read more about the process we established at appliedAI, you can follow this link (we will share the slides with you), where we also explain your next steps in your compliance journey. Briefly, I just wanted to show you one slide on how we structure these trainings: today we are in the risk classification training, but before this we typically have an intro to the AI Act training, as you can see here; we
            • 147:30 - 148:00 are now in workshop number two, the risk classification training, but we also have a technical MLOps training and a risk management training. Basically, today we're just here. So keep in mind to also get ready from the technical side, because most of the obligations are merely technical, and, as I said before, make sure to prepare at the use-case level and the organizational level and to upskill your companies to comply with the AI Act. With this,
            • 148:00 - 148:30 maybe let's now launch the next survey in Menti. I will ask the same question as before: based on the risk class definitions, now that we went super deep, how likely is it that your team has at least one AI system that is prohibited, high risk, limited risk or low risk? Please just vote on Menti; I will share the link again so you can vote. Let's see your
            • 148:30 - 149:00 opinions. We see a clear pyramid so far, so let me give you one more minute.
            • 149:00 - 149:30 Okay, I see we probably have one
            • 149:30 - 150:00 prohibited use case; that's very interesting. To the person who reported it: just make sure to shut it down before February, but first of all confirm that it's actually prohibited, and, as we recommend, maybe there are ways to modify the purpose or intended use of the use case so that it becomes high risk instead. In case it's very important for you, always try that and maybe you can still implement it. Very nice. Okay, with this we can now go to the Q&A. Is there any
            • 150:00 - 150:30 question from the audience about any of the topics we covered today? Remember, we follow the six steps: first of all, is it an AI system or a generative AI system; are you in scope; are you prohibited; are you high risk; do transparency obligations apply; and what is your role? Because based on that we come to our canvas, we see our obligations, and we just need to prepare ourselves to apply them to a use case, and for that, of course, we have deep dives into each of these obligations, but
            • 150:30 - 151:00 that's not part of today's training, that's part of the MLOps training. So, any questions in general for today? Now it's the Q&A part for today's session.
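Tying the six steps together, a use-case screening could be orchestrated roughly as follows. The `answers` dictionary is a stand-in for the outputs of the individual checks sketched earlier in this write-up; it is illustrative only, not a compliance tool:

```python
def classify_use_case(answers: dict) -> dict:
    """Walk the six workshop steps for one use case, using hypothetical answers."""
    result = {}
    # Steps 1-2: is it an AI system, and is it in the scope of the Act?
    result["in_scope"] = answers.get("is_ai_system", False) and answers.get("in_eu_scope", False)
    if not result["in_scope"]:
        return result
    # Step 3: prohibited practices must be stopped or re-scoped.
    result["prohibited"] = answers.get("prohibited", False)
    if result["prohibited"]:
        result["action"] = "shut down or re-scope before February 2025"
        return result
    # Step 4: high risk via Annex I or Annex III.
    result["high_risk"] = (answers.get("annex_i_high_risk", False)
                           or answers.get("annex_iii_high_risk", False))
    # Step 5: transparency obligations can apply on top of the risk class.
    result["transparency_obligations"] = answers.get("transparency_obligations", False)
    # Step 6: the role determines which obligations you actually carry.
    result["role"] = answers.get("role", "deployer")
    return result

print(classify_use_case({"is_ai_system": True, "in_eu_scope": True,
                         "annex_iii_high_risk": True, "role": "provider"}))
```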
            • 151:00 - 151:30 Okay, in case there are no questions, I will invite you to come back to the board, where the next frame is a feedback form. As always, we're trying to improve our workshop, so please write down what you liked about the workshop, what you disliked and, of course, what we can improve (over time we're always trying to make it simpler), and please also vote from 1 to 10, 1 being horrible and 10 being awesome, how much you liked today's training. We will give you two more
            • 151:30 - 152:00 minutes and after that we're going to close the session, so please make sure to vote today. We appreciate all your comments;
            • 152:00 - 152:30 that's the only way we can see what works and what we can improve, so we'd really appreciate it if most of you can vote and write your comments in the last frame. See you in two minutes.
            • 154:00 - 154:30 All right, so time is up. I want to
            • 154:30 - 155:00 thank everyone, because you made it very interactive and that was really good. I really appreciate all your interactions during the exercises, and thanks a lot for your feedback; we will definitely take a look at it and try to improve over time for the next round. Thank you very much, everyone. We finished a bit faster than expected, as we didn't have too many questions this
            • 155:00 - 155:30 time, but that's actually a good thing, and you gained one more hour for your day. So thank you everyone, thank you very much for joining, and I wish you a very nice day. Thank you, bye-bye.