KIT+ Round 2 - Intro to AI Act Training


    Summary

    The video is an introductory training session on the European Union's AI Act, presented by the appliedAI Initiative. It aims to familiarize viewers with the key features and practical considerations of the AI Act, especially for founders, lawyers, policymakers, and engineers. The session covers the reasons why the EU regulates AI, the structure and application of the AI Act, and details the compliance journey for companies. The compliance journey includes checking if the Act applies to you, identifying obligations based on risk classes and roles, understanding the compliance process, and maintaining conformity over time. The video emphasizes AI literacy, technical standards, and the roles of various authorities in ensuring compliance. Questions from participants help clarify complex aspects regarding risk classification, obligations, and maintaining compliance.

      Highlights

      • The training session introduces the EU's AI Act and its importance in regulating AI systems within Europe 🇪🇺.
      • It covers the compliance journey: checking applicability, identifying obligations, proving compliance, and maintaining it 🔄.
      • The session emphasizes the necessity of AI literacy among employees as part of meeting compliance standards 📘.
      • Discussion includes the roles of providers and deployers in relation to AI systems and their respective obligations 🤝.
      • Clarifies that general purpose AI models have unique considerations in compliance and liability settings ⚖️.

      Key Takeaways

      • The AI Act is the EU's way of regulating artificial intelligence to mitigate risks and provide legal clarity 🛡️.
      • Understanding your role (provider or deployer) in relation to AI systems is crucial for compliance 🎯.
      • Risk classification within the AI Act defines your obligations—prohibited, high risk, limited risk, or low risk 🏷️.
      • Adherence to the AI Act involves training employees, following technical standards, and proving compliance 📚.
      • Timelines are critical, with phased obligations for different AI system classes starting from 2025 📅.

      Overview

      The training video serves as a comprehensive guide to understanding the European Union's AI Act, aimed at a wide audience including lawyers, policymakers, engineers, and business leaders. The AI Act represents a significant regulatory framework intending to mitigate the risks associated with AI technologies while offering legal clarity across the EU. Participants are engaged through discussions about the Act's structural components and practical implications.

        In the video, the instructor breaks down the Act's exhaustive compliance journey, which involves assessing if an AI system falls under the Act’s scope, identifying risk classifications, understanding provider and deployer roles, and aligning with compliance obligations. The video offers practical advice on meeting these requirements through employee training, applying technical standards, and understanding procedural documentation.

          The session is interactive, with participants asking insightful questions that address real-world uncertainties about risk evaluation, compliance procedures, and enforcement mechanisms under the AI Act. The training underlines the importance of AI literacy and highlights the phased approach to implementing and enforcing AI regulations. It provides a stepping stone for companies aiming to responsibly integrate AI into their operations and conform to evolving standards.

            KIT+ Round 2 - Intro to AI Act Training Transcription

            • 00:00 - 00:30 All right, recording. Let me just go ahead and share my screen. Okay, I hope all of you can see this. Perfect. So welcome to today's training.
            • 00:30 - 01:00 We've titled it an introduction to the AI Act, and this is our way of introducing you to the main features of the AI Act, and also of trying to highlight some practical considerations for you as founders, as lawyers, as policy people, as engineers, and to help you identify what this means for your business and for your
            • 01:00 - 01:30 work. All right, so just very quickly about appliedAI: we started off back in 2017 as a branch of UnternehmerTUM. Since then we've grown quite significantly; we're a group of 150-odd people now. It's hard to describe what we do, but we're part consultancy, part think tank, and our goal overall is to help enterprises, both small and large, build trustworthy AI
            • 01:30 - 02:00 products. Sorry, there are still more people filtering in and I have to admit them. Over at appliedAI we're a pretty interdisciplinary bunch of people: we're primarily engineers, but we also have a lot of business strategists and a lot of policy folks. Apart from helping companies build AI systems, we're also very involved in policymaking: we're a part of the OECD, we're
            • 02:00 - 02:30 also a part of the GPAI, and more recently we've been selected as members of the EU consultations on general purpose AI models. So we're very heavily involved in trying to shape the rules, interpret them and make them more accessible to companies such as yourselves. Very quickly about me: I've been working on technology policy for roughly six years now.
            • 02:30 - 03:00 I'm a trained lawyer and a political scientist, and for the last year, year and a half, I've worked with appliedAI pretty much exclusively on the AI Act, trying to interpret its substantive provisions and help companies better understand what's expected of them. In terms of what we're trying to do today, there are three major goals: first, we're going to try and understand why the EU regulates AI; second, a very quick primer
            • 03:00 - 03:30 on how it does this um specifically what are the key structural instruments it uses and finally we're going to dive into sort of the heart of the AI act what we call the compliance Journey um oops sorry about that there's still people filtering in apparently all right there we go in terms of times the first two parts are going to take maybe 10 or 15 minutes uh
            • 03:30 - 04:00 depending on how things go and we're going to reserve the rest of the time for the last part which is a a significantly deeper dive into the AI act um and before we get into the subject matter itself maybe just a few uh ground rules um you're welcome to interrupt me anytime if you have any questions the whole presentation takes anywhere between 60 to 120 Minutes depending on how how engaged you are and
            • 04:00 - 04:30 how many questions you have so feel free any point to just raise your hand or turn your mic on interrupt me ask me questions contribute to the discussion with with your own comments and experiences as well um and then finally at the top of the hour around 3 p.m. we'll also take a bit of a break all right so let's get started unless there are any questions so far
            • 04:30 - 05:00 Okay, in that case we'll jump right into why the EU regulates AI. There are three primary reasons. First, to mitigate risk to health, safety and fundamental rights. On your screen are just some examples of the harm that AI systems have caused. Over to your left you have the Dutch child benefit scandal; I don't know if you've heard about this, but it was an instance where a public authority used an AI system to
            • 05:00 - 05:30 deny welfare benefits um to thousands and thousands of people um in the middle you have instances where AI systems have been used as medical devices and it turns out that they haven't been as accurate as they claim to be leading to bad diagnosis and and adverse Health outcomes you also have cases where AI systems adversely affect safety good examples are autonomous vehicles that don't perform as intended or aren't robust um in the context of the
            • 05:30 - 06:00 environment they're operating in and then of course um in in 2022 with the Advent of of language models or Foundation models um we also have more and more cases of AI being used to fuel misinformation and manipulation and so to address some of these risks and there's there's hundreds of these risks that have now been documented by the oecd they have a um an incidence database you also have the MIT which recently released a risk repository
            • 06:00 - 06:30 trying to document these kind of risks in an effort to regulate them govern them and to prevent AI systems from causing these kind of harms um the EU decided to sort of legislate um AI systems and how they're used uh second a lack of legal Clarity has been a barrier for companies who are trying to adopt AI systems they're uncertain about their role their liability how they should be using the data they have
            • 06:30 - 07:00 access to um and and what type of product specifications they need to build their AI systems on there's also been some concern about regulatory fragmentation so companies would not like to operate in the EU if different countries have different laws regulating them and so a lack of legal Clarity has been has been a key barrier to adopting AI systems more widely and the ACT is sort of Europe's response to this fragmentation and and this lack of clarity
            • 07:00 - 07:30 and then finally over the past few years many countries around the world have been releasing AI policies um but fewer but few of those have translated into concrete legislative requirements we're seeing more and more now from different jurisdictions but the AI Act is Europe's way of trying to show International Leadership over the regulatory regime for AI systems and much like it did with um the rules on data privacy the EU
            • 07:30 - 08:00 wants to externalize its own Norms values and regulations in the hope that setting stringent um uh legal requirements for for Market access will force other countries around the world to adopt laws that are similar to its own all right any questions so far with that very quick summary on why the EU wants to regulate AI
            • 08:00 - 08:30 once again feel free to just turn your mic on or raise your hand happy to take questions anytime all right um so we learned a little bit about why let's take a very quick look um also at how um and there's two things here right so if somebody asks you to describe the AI act in in seven sentences or less this is sort of what you'd say to them right the ACT is fundamentally a product safety regulation
            • 08:30 - 09:00 its goal is to prevent health safety and fundamental rights risks um some of its key features are firstly that it is a regulation meaning that it applies to every member state in exactly the same way second it is a horizontal regulation meaning it applies to all Industries in pretty much exactly the same way so whether you're in finance um medical industrial manufacturing the same set of rules apply to you irrespective of your
            • 09:00 - 09:30 domain next by and large the ACT regulates the intended use or intended purpose of the system and not the technology itself you'll see there's a bit of a star there and we'll talk about um why that exists a little bit later it's created unique rules for general purpose AI models which is sort of an exception to the rule of governing only um the intended purpose fourth the ACT uses a risk based approach meaning the
            • 09:30 - 10:00 riskier the ACT deems your AI systems to be the more obligations you have to conform to um fifth like all products certain AI systems have to undergo what is known as a Conformity assessment procedure it is a bureaucratic Affair of getting a an appropriate authority to recognize that your system has been built according to specifications um and then finally like all products AI systems are also now subject to Market surveillance meaning there is a web of
            • 10:00 - 10:30 institutions and authorities at the EU level and at the national member state level who will be overseeing how these products work in the market. All right, next, in terms of how you're supposed to read the AI Act: it is a pretty intimidating document, it's pretty long, but there's one key feature you need to know about it, that it has recitals, which talk
            • 10:30 - 11:00 about the intent of the legislator um when they were creating certain substantive Provisions you then have the chapters and the Articles themselves which Define what type of obligations you have to meet and finally you have the annexes which are supplemental or complimentary information and are meant to be read together with the Articles and so when you're trying to make sense of the AI act it is important to read all three of those um features together
            • 11:00 - 11:30 because any one of them in isolation is not going to give you enough context to understand what it is you need to be doing as an enterprise, or just as part of your job. All right, any questions so far? Once again, feel free to just turn your mic on or raise your hand. All right, in that case we're going to
            • 11:30 - 12:00 jump into the main part of today which is what we call the compliance Journey um like I said before the act itself is a pretty intimidating document it's very large very hard to understand often very complex this is our effort at streamlining those obligations um it is an effort to give companies and individuals a place to start and a place to end um and sort of give some meaning
            • 12:00 - 12:30 and direction to the ACT and structure it in a way that is understandable to most people um and to do that we we've designed what we call a compliance Journey we start all the way at the left with checking if the ACT is even applicable to you in the first place where we identify um what the risk class of your system is and what your role is in relation to that system we then move on to identifying obligations where we say once you know the ACT
            • 12:30 - 13:00 applies to you once you know what your risk class is um we look at what those obligations might look like we then move on to meeting obligations which is where you say I know what I have to do um but how exactly am I supposed to do it um next we move on to demonstrating compliance um which is where we talk a little bit about who you have to prove to that you perform some of the obligations that are relevant to your system and and finally we're going to
            • 13:00 - 13:30 talk a little bit about maintaining compliance which is what are the activities that you need to perform once the system is actually um in the market itself all right so we'll start first with um checking if the ACT applies to you in the first place um and there are sort of two key features you need to know of um the first is how AI systems are defined and second um what the scope of the AI Act is right so let's start
            • 13:30 - 14:00 with the first what is an AI system as the ACT sees it it defines it as a machine based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and for a given set of objectives infers what the outputs are and is capable of influencing physical or virtual environments so you have these seven criteria and we'll take uh a closer look at them in in a
            • 14:00 - 14:30 second. Second, you have the scope of the AI Act. Primarily, the Act governs systems that are placed on the market, meaning sold, or put into service, meaning an enterprise uses it within the context of its own activities, or where the outputs of a system affect residents in the EU. So those are the three conditions when the Act applies to you for sure as an organization, but there are also
            • 14:30 - 15:00 instances when you are not in the scope of the act for example if you are building systems that are designed to be used only by the military um in that case the ACT doesn't apply to you at all and so we'll take a a slightly closer look at both of these um conditions so let's start with the definition of an AI system itself right um like I said earlier there are seven characteristics that go into the definition and if you were to take a close look at it you'd
            • 15:00 - 15:30 realize that by and large those characteristics apply to systems whether they're AI or not um and so the key distinguishing criteria from our perspective at applied AI but also from from the perspective of the act and its recital is inferences from inputs right this is what you would use to distinguish AI systems from traditional software based systems um important to know that the techniques that enable
            • 15:30 - 16:00 inference according to the ACT are standard machine learning approaches this is something that's familiar I think to most people but the ACT also includes logic and knowledge based approaches that infer from encoded knowledge or symbolic representations within the definition of AI systems um and just very anecdotally to share with you this inclusion has caused um quite a stir for a lot of companies because there is now a wider subset of systems
            • 16:00 - 16:30 that they have to include under the banner of AI systems, and it's not something they previously expected to do. So, just to shed light on how this can sometimes create a bit of a gray area for companies, let's take the example of a predictive maintenance system. On the left you have an example of a system that uses machine learning to analyze real-time sensor data, and then it makes certain predictions about whether a component or a
            • 16:30 - 17:00 feature is going to fail or not; very concretely within the definition of an AI system. In the middle you have a system that uses simpler statistical models to analyze sensor data and then flags potential issues if certain predefined conditions are met. This is sort of an example of that gray area I was talking about earlier: if it's a system that uses some sort of logic or knowledge based approach, it could
            • 17:00 - 17:30 potentially count as an AI system, but depending on how you define it or how you describe it, it could also fall out of the scope of the AI Act altogether. And then all the way to your right you have a simple threshold based alarm system, which just triggers some sort of warning if a particular defined condition is met. All right, any questions about how AI systems are defined under the Act?
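
As a rough sketch of the two ends of that spectrum, the snippet below contrasts a fixed, human-defined threshold alarm (nothing is inferred) with a model whose parameters were learned from historical sensor data (it infers a failure risk from its inputs). All names and numbers here are hypothetical, and whether a particular rule-based or statistical system "infers" in the Act's sense remains a legal assessment, not something code can settle.

```python
import math

def threshold_alarm(vibration_mm_s: float, limit: float = 7.1) -> bool:
    """Rightmost example above: a fixed, hand-set threshold.
    Nothing is inferred, so it is unlikely to meet the AI system definition."""
    return vibration_mm_s > limit

class LearnedFailurePredictor:
    """Leftmost example above: parameters learned offline from sensor data,
    so the system infers an output (failure probability) from its inputs."""

    def __init__(self, weights: list[float], bias: float):
        self.weights = weights  # learned from historical failure data
        self.bias = bias

    def failure_probability(self, features: list[float]) -> float:
        score = self.bias + sum(w * x for w, x in zip(self.weights, features))
        return 1.0 / (1.0 + math.exp(-score))  # logistic output in [0, 1]

# The middle case (hand-written rules over simpler statistics) sits between
# these two poles and may need a case-by-case assessment.
```
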
            • 17:30 - 18:00 Again, feel free to just raise your hand or turn your mic on. All right, let's also then take a closer look at general purpose AI systems, which are a subset of AI systems under the Act. There is a
            • 18:00 - 18:30 complicated definition that the Act gives you, but the rule of thumb is that the Act distinguishes between general purpose AI models and general purpose AI systems, and generally, if you access a foundation model or a large language model either via a repository like Hugging Face or via an API, what you have is a general purpose AI system. Just as an example, GPT-3.5 or 4o
            • 18:30 - 19:00 are examples of general purpose AI models, and ChatGPT is an example of a general purpose AI system, because it has certain engineering artifacts that are built on top of that model. And although general purpose AI systems are a subset of AI systems, in your own company and in your own work it is important to distinguish between the two,
            • 19:00 - 19:30 because some of the obligations that follow can be slightly different. Any questions so far about general purpose AI systems or AI systems? All right, then we take a closer look at the second criterion that determines if you are within the scope of the Act, meaning if the Act even
            • 19:30 - 20:00 applies to you um and that's the scope so like I said earlier primarily the act governs you if you place an AI system on the market or put it into service in the EU or if the outputs of your system affect residents here um in the EU that does mean that there are quite a few activities that are out of the scope of the AI act meaning even if you have an AI system some of its rules may not
            • 20:00 - 20:30 apply to you um for example on the way all the way to your left if you're building this AI system entirely in Europe but you're only deploying it or selling it to jurisdictions outside um like China or India or or any other jurisdiction the rules of the AI act do not apply to you unless the output of those systems affects residents here in the EU uh in the middle let's say you're still performing research and
            • 20:30 - 21:00 development on the system, it's still a part of your innovation pipeline and you haven't decided if you want it to become a product or not; those types of activities are also outside the scope of the AI Act until you decide that it's something you want to actually bring onto the market. You have to be a little careful here, though, because testing in real-world conditions is still within the scope of the Act. There is a regime called testing in real-world conditions under the Act
            • 21:00 - 21:30 where you have to reach out to certain authorities and get approval for testing these systems when real people or real situations are involved. And then finally, on the right, you may have already placed a system on the market, and if you've done so before the Act becomes generally applicable, which is in August 2026, some of those rules may not apply to you at all unless you
            • 21:30 - 22:00 significantly change um the system so there is some complexity to deciding if you actually do have to follow any of the rules in the AI act whether it applies to you whether it applies to your system or whether it applies um to your Enterprise all right any questions about that first section once again feel free oh sorry indeed I
            • 22:00 - 22:30 have a question. Can you please go one slide back? Sure. On the left side, when you speak about the output: if the output is a kind of special component or part which is then sold to the EU, is this in scope? Because if something was designed or built using an AI-powered tool or databases, but the output is a physical part, like a specially designed chip for example, is this meant as output, or is it only
            • 22:30 - 23:00 is it out of scope here? No, that would be out of scope. Outputs here means the direct outputs of an AI system, right? So a creditworthiness algorithm, for example, making certain decisions about a person's eligibility for a loan, that's the example of an output. So it has to have some causal proximity to the person being affected. Fine, thank you. You're welcome.
            • 23:00 - 23:30 yep any other questions about the definition of an AI system or or the scope of the all right then to very quickly summarize that section right um applicability depends on whether you actually have an AI system in the first place and you have to be a little careful here because apart from machine learning approaches which is what most of us would traditionally understand as being AI systems um logic or knowledge
            • 23:30 - 24:00 based systems are also included within the definition, and this is something that has caught companies off guard before, so you have to be very careful about what kind of capability you're dealing with. Second, it depends on whether you're in the scope of the Act. Article 2 has quite a few exceptions; we've shared with you two or three that are typically more relevant to enterprises or companies, but if it is something you're concerned about, I'd recommend reading Article 2 of the AI
            • 24:00 - 24:30 act um a little more closely and then finally general purpose AI systems are subsets of AI systems under the ACT um but it is important when you are performing an inventory within your Enterprise or within your company to distinguish between the two because sometimes there are special obligations for general purpose AI systems all right with that we move on to identifying obligations um and so if
            • 24:30 - 25:00 you were to start this compliance Journey you'd say I know I have an AI system I am in the scope of the AI act and so the question is what do I have to do now um and this is a pretty large section so we've broken it down sort of into four bits um the first is identifying which risk class your AI system belongs to Second identifying your role in relationship to that system um and then finally we'll sort of bring it together and and show you what all
            • 25:00 - 25:30 the possibilities are in terms of obligations. And then finally we'll also highlight a bit of a gray area, which is general purpose AI models, what you would typically think of as foundation models or large language models. The obligations for the models themselves aren't as much of a gray area as what happens if companies were to fine-tune or modify those models, and whether or not there are any obligations associated with those activities. All right, so we start then
            • 25:30 - 26:00 with identifying the four risk classes um this figure should be pretty pretty familiar to most of you now if you've read anything at all about the AI act over the last few years um the ACT splits systems into four risk categories starting with systems that pose an unacceptable risk to the health safety and fundamental rights of people in Europe systems that pose a high risk systems that pose a limited risk and then finally if your system doesn't pose
            • 26:00 - 26:30 a risk that falls into any of those three categories, it's deemed to pose low risk. Two things to know here: one is that sometimes your system can be part of two categories at once. This is only true for high-risk systems and limited-risk systems, so you can be both, and in those cases, like we'll see later, the obligations stack on top of each other. And second, this pyramid is also supposed to represent
            • 26:30 - 27:00 the commission's expectation of how many of these type of systems are going to be out there in the market so the commission and other EU authorities think that by and large most companies and Enterprises are going to use lowrisk systems and there will be very few high-risk um and prohibited AI systems but we'll have to see how that expectation uh plays out and so for the next 10 or 15 minutes we're going to take a closer look at each of the these risk categories just so you're better
            • 27:00 - 27:30 able to identify um whether any of these categories apply to to you or or to your work all right so we'll start with um prohibited AI systems right there are eight prohibited systems um in the AI act and we'll just go through all eight in in in very little detail so the first are systems that use subliminal techniques sublim subliminal techniques are defined as outputs that operate below the
            • 27:30 - 28:00 threshold of human perception um right these outputs should then cause some sort of Behavioral change in individuals such that they cause some sort of harm um to those individuals um we've not seen too many good examples of these type of systems in the real world back in 2021 when the law was um introduced by the commission the example that it provided was let's say you have a truck
            • 28:00 - 28:30 driver and the company that owns that truck um plays a an inaudible tune um within that truck that keeps that driver driving for longer than is safe um in my personal opinion that's a bit of a an unrealistic example um but we haven't seen too many good examples of of systems that use subliminal techniques so that's the first type of system that are prohibited um under the AI act the
            • 28:30 - 29:00 second type of systems are those that exploit the vulnerabilities of person such that it causes them to alter their behavior and thus causes them some sort of harm uh vulnerabilities is defined pretty broadly so this can include vulnerabilities related to age so if you exploit vulnerabilities for older people or or smaller children it can also be um some form of disabilities physical or or
            • 29:00 - 29:30 mental, and it can also be vulnerabilities that result from a person's socioeconomic status. Just as an example, think about an in-game LLM-powered avatar or chatbot that tempts children to use their parents' credit cards to make more purchases within games. That would be an example of exploiting the vulnerability of a child, because they do not have the maturity to make smart
            • 29:30 - 30:00 financial decisions the third type of system that's prohibited are systems that use biometric categorization so they use biometric data to categorize people into certain um uh groups um and these groups are explicitly defined in the AI act one example is sexual orientation but there are quite a few that are defined in article five um next
            • 30:00 - 30:30 you have systems that um are intended for social scoring so this is where you use an AI system that is capable of collecting lots of discrete data points on a person's personal social and political behavior on the basis of that behavior it assigns them some sort of score um and that score is used in some adverse social setting right to deny um welfare benefits for example um that's an example of social
            • 30:30 - 31:00 scoring. The fifth type of systems that are prohibited by the Act are those that are used for real-time remote biometric identification for law enforcement, and I've put a bit of a star there because there are quite a few exceptions to this rule. We don't necessarily have the time to go through all of those exceptions, but it's important for you to know, if you're building those types of systems, that there are certain carve-outs for national security and law enforcement. The sixth type of systems
            • 31:00 - 31:30 that are prohibited are those that are used to make risk assessments of natural persons, to check the probability of them committing crimes in the future; again, a law enforcement use case that's being prohibited. The seventh type, and I've always found this strange, are AI systems that are used to expand or create facial recognition databases. A few years ago there was a very famous company called Clearview AI that was
            • 31:30 - 32:00 engaging in those type of practices and it seems like this prohibition is targeted at that company um and those type of practices in particular and then finally systems that use emotion recognition specifically um in the education domain or in workplace settings um are prohibited all right any questions so far about the prohibited AI
            • 32:00 - 32:30 systems um sorry uh this is Martin uh just one question who is um yeah who is identifying those prohibited um AI systems if someone is using that and how is that controlled then so there's there's sort of two instruments primarily you have um under the ACT there will be a new network of Institutions that will be overseeing how companies build and deploy AI products they're called Market surveillance
            • 32:30 - 33:00 authorities they will be responsible for monitoring if companies are in fact using prohibited systems or not um and then there are channels also for individuals or other interested parties to complain or or to bring these type of systems to the attention of those authorities does that answer your question yes and those authorities are just uh or institutes are just going to be built up that's right that's right so chances are
            • 33:00 - 33:30 there will be institutions that exist already and member states are responsible for assigning the responsibility of Market oversight to one of those institutions um and so each country will have its own Market surveillance institution okay MH any other questions
            • 33:30 - 34:00 all right so that was prohibited AI systems we'll move on now to AI systems that are considered highrisk um and there are sort of two ways in which you can be highrisk and we'll take a closer look at both in a minute one is if your system falls under existing product safety legislations and the other is if it's used for a standalone use case and
            • 34:00 - 34:30 we'll take a a closer look at both of those cases um in a minute sorry if I could ask you to keep your microphones off please okay there we go thank you so let's look at the um first case right if the AI system is subject to existing product safety legislation um and there are two there are two um
            • 34:30 - 35:00 requirements that must be fulfilled for a system like this to be considered high risk. The first is that the system is a product or a safety component of a product that is regulated by an existing product safety legislation listed in Annex 1 of the AI Act, and the second is that the system has to undergo a third-party conformity assessment under that legislation. So just as an example, let's say
            • 35:00 - 35:30 you have an AI system that is a medical device also so that is now a product that is governed by the regulation on medical devices it's called the MDR and now if under the medical devices regulation that product has to be certified by a third party then the AI system that you have is considered high risk I think that was a little complicated so
            • 35:30 - 36:00 if there are questions about this, feel free to ask them now. All right, then, to reiterate: if the system is a product or a safety component of a product that is already regulated by a piece of legislation that is listed in Annex 1 of the AI
            • 36:00 - 36:30 act and under that legislation it must undergo a thirdparty Conformity assessment meaning that some institution has to certify that it is safe the AI system is considered high risk under the AI act any questions about this I'm sorry if I could please ask you to
            • 36:30 - 37:00 turn your microphone off if you're not speaking. Perfect, thank you. All right, so that's one way in which systems can be high risk. The other is if the system is used for any of the techniques or practices described in Annex 3: if an AI system is used as a biometric-based system, if it is used as a safety component in critical infrastructure, if it is used for certain education use cases, for
            • 37:00 - 37:30 human resources use cases for law enforcement for border control for private services or or welfare services and in courts and in in certain legislative practices it is considered high risk these are called Standalone use cases um so it doesn't have to be a part of a product that is already governed by a law if the system is intended to be used for any of these purposes listed in Annex 3 it is
            • 37:30 - 38:00 considered high risk. There are exceptions: if the system is high risk because it is used in any one of these use cases listed in Annex 3, but it is only intended to perform a narrow procedural task, or improve the result of a previously completed human activity, or it is used to detect decision-making patterns or deviations from decision-making patterns, or it is only
            • 38:00 - 38:30 used to perform a preparatory assessment um that is relevant for one of those tasks um it is Exempted from meeting these high-risk obligations we do not yet know the full scope of these exceptions they are defined rather generically um and we have to wait for some guidance from either the EU or from National authorities to understand under what circumstances companies can make
            • 38:30 - 39:00 use of these exceptions all right any questions about how you would identify if if a system was high risk so two parts if it's already governed by a product safety legislation or if it you if it is used for a standalone use case defined in um Annex 3 once again feel free to to raise your
            • 39:00 - 39:30 hand or to turn your mic on we've left quite a lot of time for for discussion so you don't have to worry about time we're happy to take as many questions as necessary okay uh there are no questions about high-risk AI systems we continue to move down the risk pyramid um and we come to limited risk systems um these
            • 39:30 - 40:00 are typically systems to which transparency obligations apply um as we'll see uh a little later in the next session and they primarily apply to when you use general purpose AI systems um which is why many minutes ago I said it was important that when you were conducting an inventory of systems within your control it's important to distinguish between an AI system and a subset which is general purpose AI systems so there are four types of limited risk systems under the AI act
            • 40:00 - 40:30 You have systems that interact directly with natural persons. These are situations where it may falsely appear to a human that they are interacting with another human being when they are in fact interacting with an AI system; typical examples are chatbots or robocalls. The second type of systems are those that generate synthetic audio, image,
            • 40:30 - 41:00 video or text content um and so these are all typically what you would think of as generative AI use cases um the third type of systems that are considered limited risk are those that use biometric categorization or emotion recognition that are not prohibited um and then finally if a system is used to generate deep fakes it's also considered a limited risk AI system so we have four types of limited
            • 41:00 - 41:30 risk AI systems all right any questions about limited risk systems um yeah one question indeed sorry uh why are deep fake um AI systems limited risk personally it appears to me at at a kind of a high risk that's a good question there are um freedom of speech and freedom of
            • 41:30 - 42:00 expression considerations when you're talking about deepfakes. Sometimes you can generate this type of content for satirical purposes, or to criticize political institutions or democratic processes, and so that's why they're not considered high risk. It is important to remember that the AI Act is not the only law that governs the behavior of AI systems; there are also other criminal laws and other digital market laws that
            • 42:00 - 42:30 govern these types of activities. The reason they've made deepfakes limited risk in this context is because there are certain disclosure obligations that apply. So if you are a deployer using an AI system to generate deepfakes, you have the responsibility to disclose that the content is fake, and you have a responsibility to disclose your own identity to certain types of authorities and institutions. So
            • 42:30 - 43:00 that's why. Does that help? Okay. Yeah, I also have a question. So let's assume we are not building the AI system, but we as a company make use of a third-party provider for certain tools, and especially here, let's say, a chatbot on our homepage, because that's exactly what may falsely appear to a user as interacting with one of our colleagues, at least if it's good.
            • 43:00 - 43:30 How does that then apply to us? Because we're actually not the AI system provider, but we're the one that offers it on a certain channel, whatever, right? And what kind of due diligence is then necessary as a user? Because we are a company that will use systems, and maybe even apply those systems in our environment, but we're not the ones that provide it per se. So, we will talk about obligations in a second.
            • 43:30 - 44:00 the first two systems so where an AI system um directly interacts with natural persons the obligations are only for providers um so for organizations that build the system um and not deploy them um so as a deployer you would not necessarily have any obligations for the first type of system that's also true for the second type of system so where the system is is generating content
            • 44:00 - 44:30 again the obligations are only for the providers of general purpose AI systems and not for deployers um and we'll talk about those obligations also in a few slides later um one important to know that those obligations aren't very extensive they're primarily related to disclosures and watermarking um so there isn't that much engineering that actually goes into it um but secondly I do also want to point point out that if you are using a language
            • 44:30 - 45:00 model um and you are integrating it into a chatbot you are considered a downstream provider of a general purpose AI system so in this case these obligations would apply to you as the provider um and not to the provider of the model um and so it's important to get these roles right thanks for clarification of that because that's exactly what I was aiming for so nice I use a certain Ai and then
            • 45:00 - 45:30 I integrate it somewhere, and suddenly I'm a provider, not only a user or deployer? Yes, only if you're using a general purpose AI model, so anything like Llama or Claude or GPT-3.5 or 4o or something like that. Okay, thanks. You're welcome. Okay, any other questions about prohibited, high-risk or limited risk
            • 45:30 - 46:00 systems? Yes, why is there the distinction between generative AI and deepfakes? So deepfakes are a subset of generative AI, where the content purports to show something that is happening, or someone saying something, when that thing didn't happen or that person didn't say what the image
            • 46:00 - 46:30 was claiming, whereas generative AI is much broader, right? So if you just generate an image of a flower, for example, that's a different use case from generating a video of Donald Trump saying something. Does that make sense? Yes, the distinction between those two is clear, but why is the distinction made in the case of the AI
            • 46:30 - 47:00 Act? Also, sorry, go ahead. Yes, how is it limited to these two use cases, or not? That's a good question, and we'll come to this again when we talk about the obligations, but when it comes to generative AI systems, those obligations typically apply to the provider of an AI system, and they have to watermark all
            • 47:00 - 47:30 content that has been generated um whereas with deep fakes the obligation only applies to deployers um and they have the obligation of disclosing that it was in fact um generated um and the only reason it is that way is because the ACT is trying to assign responsibility to the people who are more likely to be obligated to perform a certain type of activity right so you could not possibly
            • 47:30 - 48:00 hold the provider of a general purpose AI System model um liable for all of the content that it creates you'd have to hold the deployer responsible for that kind of content if that makes sense okay yeah thanks for a clarification so just to assign responsibility where it is more appropriate um that's why it's that way and it'll become a little clearer when we talk about about what the obligations are
            • 48:00 - 48:30 also. Okay, again, we do have some time, so if there are more questions about prohibited, high-risk systems or limited-risk systems, happy to take them now. All right, in that case we move on to our final risk class, which are low-risk AI systems. There are no defined criteria to identify these types of
            • 48:30 - 49:00 systems: if your system is not prohibited, not high risk and not limited risk, it's considered low risk. So there are no specific criteria, it's just a process of elimination. All right, any questions about low-risk systems?
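
The elimination logic just described can be summarised in a short sketch. This is illustrative only (the field names are made up, and the real determination depends on Article 5, Article 6 and the Annexes of the Act); it mainly shows that high risk and limited risk can apply at the same time, while low risk is simply what remains.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    prohibited_practice: bool = False    # an Article 5 practice?
    high_risk_trigger: bool = False      # Annex 1 product safety route or Annex 3 use case?
    transparency_trigger: bool = False   # chatbot, generative AI, deepfake, emotion recognition?

def risk_classes(uc: UseCase) -> set[str]:
    """Process of elimination: prohibited wins outright; high and limited risk
    can stack; low risk is what is left when nothing else applies."""
    if uc.prohibited_practice:
        return {"prohibited"}
    classes = set()
    if uc.high_risk_trigger:
        classes.add("high risk")
    if uc.transparency_trigger:
        classes.add("limited risk")
    return classes or {"low risk"}

# Example: an HR screening chatbot could be both high risk (Annex 3, employment)
# and limited risk (it interacts directly with natural persons).
print(risk_classes(UseCase(high_risk_trigger=True, transparency_trigger=True)))
```
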
            • 49:00 - 49:30 All right, excellent. So just to remind you where we are and to orient you again: we spoke about applicability, which asks, is the system an AI system, and is the system within the scope of the Act? We're now trying to identify what obligations you might have. As part of identifying those obligations, we start by asking what risk class the system belongs to: is it prohibited, high risk, limited risk, or low risk? Once you make that determination, it is also important to identify your role
            • 49:30 - 50:00 in relation to that system and we're going to talk about this now very briefly right so the act itself defines quite a few roles along the value chain but the two most important roles are providers and deployers so again there is a formal legal definition but in general a provider is somebody Who develops a system or has that system developed and
            • 50:00 - 50:30 puts it on the market um or into service under their own trademark it is important to remember that this designation of provider is a question of fact and not law so you cannot write a contract and say another party is agreeing to be the provider of my system if you have placed it on the market under your trademark then you are the provider of an AI system also important to remember here that if you have vendors for model for data Etc
            • 50:30 - 51:00 they are simply your vendors; if you put your own trademark on a system, you are the provider of that system, irrespective of whether you had a vendor for your models. So that's the provider of an AI system. Next we have the deployer of an AI system: this is any entity that uses an AI system under its own authority. So, very simply, a provider is somebody who either sells
            • 51:00 - 51:30 the system on the market or builds it within the company and then uses it within the company and the deployer is any entity that buys an AI system um or uses it under their own authority um another important thing to remember here is that you can be both simultaneously um so if you were a large company like BMW for example if you build an AI system for industrial Automation and you use it within your own factories you are both the provider
            • 51:30 - 52:00 and the deployer of the AI system. All right, any questions about roles? I have another question: this provider definition that you just gave, with BMW, why are they a provider? I mean, they built it for their own internal use case, right? So this provider definition is also directed at employees, not only at the outside world? Yes, so there
            • 52:00 - 52:30 are two conditions right you develop a system and then you either place it on the market which is what we commonly understand in in terms of selling it or putting it into service um and if you use it internally that's the definition you'd fall under you would be putting an AI system into service within your own company under your own trademark does that make sense yeah it makes sense the definition of what is the
            • 52:30 - 53:00 development of an AI system, and buying it, basically, is a bit open for me still, but okay, it makes sense. So the moment you basically have anything developed and give it to anyone to use, you're a provider? Yes, I would say so; it always depends on the specifics, but broadly yes. All right, any other questions about
            • 53:00 - 53:30 roles? Yes, I'm not sure what this means when I have a chatbot only for internal use and I'm writing "it's powered by IE Systems", for example. Then it's not my trademark; am I then the provider, or is it IE
            • 53:30 - 54:00 Systems? It would be IE Systems. It would help if you could provide a little more clarity on that example. Maybe: I only have a chatbot, for example, to help my colleagues, and this chatbot is powered by a system. Since it's not my trademark,
            • 54:00 - 54:30 then it's okay and I'm not the provider? That's right, you would be the deployer of that system. Okay, thanks. All right, any other questions about the risk classes or the roles?
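
The role logic discussed above can also be sketched in a few lines. Again purely illustrative: the Act defines more roles (importer, distributor, authorised representative) and more edge cases, but the sketch captures the points made in the discussion, including the downstream-provider situation when you integrate a general purpose AI model into your own product.

```python
def roles(develops_or_has_developed: bool,
          places_on_market_or_puts_into_service: bool,
          uses_under_own_authority: bool,
          integrates_gpai_model_into_own_system: bool) -> set[str]:
    """Very rough role check based on the definitions discussed above."""
    result = set()
    if (develops_or_has_developed and places_on_market_or_puts_into_service) \
            or integrates_gpai_model_into_own_system:
        result.add("provider")  # includes the downstream-provider case
    if uses_under_own_authority:
        result.add("deployer")
    return result

# The BMW-style example: build a system and use it in your own factories;
# this prints both "provider" and "deployer".
print(roles(True, True, True, False))
```
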
            • 54:30 - 55:00 Okay, in that case we will now try to bring this together. You have the two rows here, which indicate your roles: you can be the provider of an AI system or you can be the deployer of an AI system. Once again, there are more roles defined in the AI Act (importers, distributors, authorized representatives, etc.), but for the sake of keeping it simple we're leaving some of those roles out. So you have these two roles and you have four risk classes: prohibited
            • 55:00 - 55:30 systems highrisk systems limited risk systems and low risk systems um we do also have this question of general purpose AI models um and like I said we're going to discuss it as sort of a a gray area right um and so what obligations do you have right on this slide let's just focus on prohibited systems and lowrisk systems so whether you're the provider
            • 55:30 - 56:00 or the deployer of a prohibited system you have to stop making it you have to take it out of the market and you have to stop operating prohibited systems right there are certain timelines for each of these systems and we'll talk about them um closer to the to the end of the presentation um so that's so whether you're the provider or the deployer of a prohibited system you have to stop if you're building it you have to stop building it if you're using it you have to stop using it let's talk about low risk systems now there are
            • 56:00 - 56:30 like I said, no formal obligations for either the provider or the deployer of a low-risk system. There is what the Act calls a voluntary code of conduct, and that is going to be a subset of some of the obligations that other systems have to comply with, but it is up to you as a company in terms of actually operationalizing those obligations or actually trying to meet them; you don't have to. So formally, no obligations
            • 56:30 - 57:00 whatsoever. If you'd like to, there is a voluntary code of conduct; it is not available yet, the Commission and other authorities will build it up and publish it at some point in the future, but if you wanted to, you could apply it. Any questions about the obligations for providers or deployers of prohibited or low-risk systems?
            • 57:00 - 57:30 all right then let's take a closer look at high risk and limited risk systems because that's primarily where most of the obligations are um and so let's start with the provider of a highrisk system right um you'll see we've split it up sort of into use case level obligations and organization level obligations important to know that the act itself does not make these distinctions we've done it to make it easier for companies to try and
            • 57:30 - 58:00 understand when and how they should be meeting certain obligations. All that distinction means is that the use case level obligations are obligations that apply to how you design and build an AI system, whereas the organization level obligations are more procedural: the kind of documentation you need to have, the kind of conformity assessment procedure you need to follow. So it's bureaucratic or procedural stuff you need to do as an
            • 58:00 - 58:30 organization, while the use case level obligations are about how you engineer a system in the first place. In terms of the organization level obligations, there are quite a few here; I've given you the articles here also, so you can take a closer look if you are building high-risk systems, but there are three main things I want to highlight. One, you need to have what's called a quality management system. This is
            • 58:30 - 59:00 something that large regulated companies are already familiar with but if you're a startup or a smaller company this might be new to you um it is sort of the main basis for how Market surveillance authorities will judge if you've met some of these obligations or not um it is a a big book of documentation proving that you've met all of the obligations that are required of you um the second
            • 59:00 - 59:30 main organization level obligation I want to highlight is performing a conformity assessment. We will talk about the conformity assessment a little later, but primarily it is about proving to an authority, or self-certifying, that you've met the obligations that you need to meet. And then the last one I'd say is that there are certain post-market monitoring obligations: once you've built it and you've either sold it or you're
            • 59:30 - 60:00 using it within your own enterprise, the job doesn't end there; you have to continue monitoring the performance of that system, and you also have to make sure that all of its components remain in conformity with the use case level obligations. In terms of the use case level obligations, you have to perform what's known as risk management, where you have to identify risks, evaluate them and mitigate them. You also have to meet certain data
            • 60:00 - 60:30 quality and governance standards in terms of the training, validation and testing data sets that you use. You have to maintain certain technical documentation that you have to give both to certain authorities and to downstream actors like your deployers. You have to log certain events, and these aren't just the typical performance or observability logs you'd have to think about: you have to log events that help in identifying situations that lead
            • 60:30 - 61:00 to risk. You have to build the system according to certain transparency requirements and provide certain information to downstream actors. You have to design the system such that it is capable of human oversight. And then finally, you have to meet certain accuracy, robustness and cybersecurity requirements. So as you can see, the bulk of the obligations in the AI Act are for providers of high-risk systems.
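
One point worth underlining from the logging obligation above: the events to record are those that help identify risk situations, not ordinary performance metrics. Below is a minimal, purely illustrative sketch of what such event records might look like; the function name, fields and example values are assumptions, not something prescribed by the Act.

```python
import datetime
import json
import logging

risk_log = logging.getLogger("ai_act_event_log")

def log_risk_event(event_type: str, details: dict) -> None:
    """Record an event relevant for identifying risk situations, e.g. an
    out-of-distribution input, a low-confidence output, or a human override."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event_type": event_type,
        "details": details,
    }
    risk_log.warning(json.dumps(record))

# Example: an operator overrides the system's recommendation.
log_risk_event("human_override", {"model_version": "1.4.2", "reason": "implausible output"})
```
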
            • 61:00 - 61:30 Right, let's then take a look at providers of limited risk systems. If you remember, there are these two examples: one, if the system directly interacts with natural persons, and two, if it generates certain synthetic outputs. So if you're the provider of these types of systems, in the first case all you have is a disclosure obligation: you just have to inform the person that they are interacting with an AI system, for example a chatbot, and not another
            • 61:30 - 62:00 human if you're the provider of a gen AI system you have to Watermark the outputs um of that system all right any questions about the obligations for providers of high risk or limited risk systems uh I have a question again go ahead so let's assume we would do one of these tools and we don't think that it's
            • 62:00 - 62:30 belonging in either of those categories, and maybe we say it's super low risk, and we don't meet those obligations, so what's the effect? No effect at all, if your system is low risk. No, no, we assume that we have a low-risk system, but actually maybe the authorities would assume it's a high-risk system. I see. And we do not meet all obligations. Yeah, I see. So what happens then? You get fined by the market surveillance
            • 62:30 - 63:00 authority. And is there any indication how these fines will be determined? Yes, you can take a closer look at the article; I want to say 97, but I'll have to check. It is a certain percentage of your annual turnover. That said, there are certain factors that the market surveillance authority needs to take into account in terms of setting that fine. So if
            • 63:00 - 63:30 you can legitimately show that you did not intentionally misclassify your AI system chances are that they won't drop the hammer very heavily on you um you might get away with a warning you might have to bring the system back into compliance yeah could I mean you know we we would we would use a one available tool and integrate it into one of our tools sudden we actually have we are a provider suddenly because we use it with our own employees but we use a module
            • 63:30 - 64:00 that we actually assumed no problem and then suddenly this module that's not even from us has the risk yeah and then okay but okay this means there's a turnover penalty most likely and then most likely you need to shut it down and so on okay exactly or bring it into Conformity yes of course but that said it sort of goes to show that if you're a a reasonably large Enterprise if you know you have lots of AI systems in your
• 64:00 - 64:30 enterprise, it is important to perform what's called a risk classification, where you identify all the systems in your inventory, assign a risk class to them, and identify your role in relation to each AI system, because every company is going to have to do this now, whether you're a large company or a medium one. Okay, so this is one more act that is actually playing into the hands of big companies, which will have a compliance department for AI and the resources to do this. Unfortunately, yes.
• 64:30 - 65:00 The Act itself does promise some support for SMEs and medium-sized companies, but we'll have to see what that support looks like, and that includes making it easier to perform risk classifications and easier to perform some of these obligations. So if you're a small company or a startup, the Act says that they will reduce the scope of these obligations to make it more manageable for those kinds of enterprises. So there are some special provisions, but we'll have to see
• 65:00 - 65:30 what that looks like, because the institutions that are responsible for all of this are still being set up and they're not ready yet. But you're right, this is going to be much easier for larger companies to do, because they have the resources and money and people to do this. Good, thanks. Welcome. All right, any other questions about the obligations for providers of high-risk or limited-risk
• 65:30 - 66:00 systems? All right then, we move on to the deployers of high-risk systems. So you have a provider, who builds a system and sells it or uses it within their own enterprise, and you have deployers, who typically purchase a system off the shelf. Once again, we've divided this into organization-level obligations and use-case-level obligations; again, I want to remind you that this is not something the Act does,
• 66:00 - 66:30 this is something we do to make it conceptually a little clearer in terms of what you have to do. If you look at the use-case obligations: you have to follow the instructions of use that the provider gives to you; you are responsible for assigning a person who will oversee that system; in the event that the data and the logging capabilities are under your control, you're responsible for those artifacts; you have to conduct what's known as a data protection impact assessment; and
• 66:30 - 67:00 then finally, for certain types of institutions, you also have to conduct a fundamental rights impact assessment, but this is typically if you're a public body or a private body performing a public function. So those are the use-case-level obligations. At the organization level, you have the responsibility of monitoring the system: if there is an incident, if the system harms somebody's health, safety, or fundamental rights, you have the
• 67:00 - 67:30 responsibility of informing market surveillance authorities, but also the provider, that this incident has occurred, and you trigger a sort of chain reaction of events. You also have to inform the affected person, so the end user of an AI system, that you are using an AI system in such a way that the outputs will affect those people. If you have a works council, you're also responsible for informing them to the
• 67:30 - 68:00 extent that the AI system affects them in some way, for instance if you introduce AI systems for human resources. And then finally, you have a general responsibility of cooperating with market surveillance and competent authorities if they were to ask for information or to see some type of documentation. And then finally, for deployers of limited-risk systems, so the other two systems that we spoke about: they're completely disclosure-related. If you're using biometrics-based or emotion-recognition-based systems, you
• 68:00 - 68:30 have to disclose to the end user or to the affected person that this type of system will be used on them. And then finally, if you're using a system to generate deepfakes, you have to disclose that the content is in fact artificially generated and not real. All right, any questions so far about the deployer obligations? Yes, one question. When you say
• 68:30 - 69:00 that, well, ChatGPT's voice assistant function now has the ability to detect emotions, so if we have users who use the ChatGPT voice assistant, does this fall under high risk? Limited risk. Limited risk, that's right. Okay, so it will be over here. Yeah, okay, but it's limited risk, the voice assistant. That's right. Thank you. Of course. Important to remember that emotion recognition in the
• 69:00 - 69:30 workplace or in education settings is prohibited altogether, so technically you cannot use the emotion recognition features in the workplace or in school, for example; there it's prohibited. Okay. That's right, in the context of the workplace and education settings; in other contexts it's limited risk, meaning there are disclosure obligations that apply. Okay, thank you.
• 69:30 - 70:00 Welcome. All right, any other questions about the obligations? So just as a reminder, you have the provider and the deployer, and you have four categories of risk. If you're the provider or deployer of a prohibited system, these systems are banned, they are going to be phased out of the market, and you have to stop using them. If you're the provider or deployer of a low-risk system, there are technically no obligations whatsoever, just a voluntary code of conduct that you can
• 70:00 - 70:30 choose to apply if you want to. If you are the provider of a high-risk or limited-risk system, or the deployer of a high-risk or limited-risk system, this is where most of the obligations of the Act apply, and so as a company this is where you have to be careful and decide whether these are systems you want to be providing or buying, and whether you have the capacity to meet these obligations. I have another question
• 70:30 - 71:00 again, sorry. No, of course. So, okay, all this holds for everyone that provides an AI system in the EU. What happens, because you mentioned India and so on is different, or maybe China or the US: what happens if I provide the system from such a country, I mean with an online tool? Do I need to block
• 71:00 - 71:30 access from the EU to make sure that I'm not a provider? Maybe if you could clarify that question a little bit, I'm not sure. Let's assume I build whatever, okay, because it needs to be a risky thing, so let's say a biometric recognition tool that works online, whatever software-as-a-service solution, I don't know, I'm not a super expert there, but I provide this kind of thing from the US via the web, so it's accessible also
• 71:30 - 72:00 via the EU. Yes, you are then placing a system on the market in the EU. Which is why, could we then not make it available, using whatever geolocation monitoring, and say, okay, this is an IP address coming from the EU, block it? That's what you would have to do, and you see that happening already. So Facebook with its latest Llama models, for example: you can't access them if you're in the EU. Similar for some of ChatGPT's latest voice recognition features and the like;
• 72:00 - 72:30 they don't make them accessible in the EU. Also true for Apple Intelligence, the models where they've said that customers in the EU cannot access them yet because they're still figuring out how to comply. Makes sense, thanks. You're welcome.
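(Illustrative aside, not part of the session: a company outside the EU that does not want to place its online tool on the EU market might gate access by request origin, as discussed above. A minimal sketch, assuming a hypothetical `lookup_country` helper that maps an IP address to an ISO country code; how unknown origins are treated is a policy choice, not anything the Act specifies.)

```python
from typing import Optional

# ISO 3166-1 alpha-2 codes for the 27 EU member states. Whether to also cover
# EEA countries or others is a separate policy decision.
EU_COUNTRIES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR", "DE", "GR",
    "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL", "PL", "PT", "RO", "SE",
    "SK", "SI", "ES",
}

def lookup_country(ip_address: str) -> Optional[str]:
    """Hypothetical geo-IP lookup; in practice this would query a geo-IP database."""
    raise NotImplementedError

def is_request_blocked(ip_address: str) -> bool:
    """Return True if the request appears to originate from the EU."""
    country = lookup_country(ip_address)
    # Treating unknown origins as blocked is a conservative, illustrative choice.
    return country is None or country in EU_COUNTRIES
```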
• 72:30 - 73:00 Okay, then finally, in terms of identifying obligations, we'll talk very quickly about a gray area. What you see on the screen in front of you are the obligations for providers of general purpose AI models. These obligations really only apply to you if you're OpenAI or Anthropic or Meta or Apple, so if you are training these models from scratch. We're not going to talk about them here today, because by and large they do not apply to most stakeholders, but if you were looking for information, this is where you would find it on the slides. What we are going to talk about is what happens if you were to fine-tune or otherwise modify these
• 73:00 - 73:30 models. So you have Llama 3, for example, on Hugging Face or on Facebook's own repository; you access the weights and you say, I'm going to fine-tune this model, I'm going to customize it for my own use case. Do I have any obligations? For a long time everyone thought no. More recently it has become a point of contention because of two recitals. You have Recital 97, which says general purpose AI models
• 73:30 - 74:00 may be further modified or fine-tuned into new models. Now, it is important to note here that by itself Recital 97 does not distinguish between techniques: fine-tuning is a reasonably well-known technique, but then it says fine-tuned or further modified, and there is a large universe of techniques under "further modified" that engineers can apply. It also does not provide guidance on
• 74:00 - 74:30 when modifications result in a new model: does there have to be some compute or financial or engineering threshold that needs to be met? And then finally, it also does not clarify value-chain relationships: if you fine-tune or modify a model, do you become the provider of the new model that results from the customization? For that we have to look at Recital 109, which says in the case of a
• 74:30 - 75:00 modification or fine-tuning of the model, the obligations for providers of a general purpose AI model should be limited to that modification or fine-tuning. There are many different problems with this recital, chief among them that it just happens to assume that if you fine-tune or modify the model you are in fact the provider, and secondly, it does seemingly create an obligation, which is that your obligations extend to the extent of your
• 75:00 - 75:30 modification. But that's all we know about this so far; it's still a bit of a gray area. We're just flagging it to our clients and customers and partners, because it's good to know that this might be a problem in the future if you're fine-tuning or customizing models, because quite a few enterprises are nowadays, and we do have to wait for some regulatory certainty around this regime. Okay, any questions about this
• 75:30 - 76:00 gray area? So is there a timeline or something like that for when we can expect certainty? No, unfortunately not. Any other questions? All right, again, it is important to clarify here that these interpretations are not set in stone; there is uncertainty here, and so you do have
• 76:00 - 76:30 to keep an eye on how these questions of law develop. All right then, to summarize all of the obligations: your obligations depend on the risk class of your system and your role. The risk class is defined entirely by the intended purpose of the system, not its inherent capabilities. Your role is a question of fact, meaning if it's your trademark, or if you're using it under your authority, you're the
• 76:30 - 77:00 provider or deployer respectively. Some of the obligations relate to how you design and build a system or use it, while others relate to procedural or bureaucratic obligations that an organization might have. And then finally, fine-tuning a general purpose AI model is still a bit of a gray area, and hopefully we will have more certainty on what all of this means soon. Okay, before we move on to the next
• 77:00 - 77:30 section, because we've been going on for a while now and it is 3:20, let's take maybe a five-minute break and meet back at 3:25. A thumbs up if that works for all of you? Okay, perfect, see you in five minutes.
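(Illustrative aside, not part of the session: the summary above, that obligations follow from risk class plus role, lends itself to a simple per-system inventory record of the kind the risk classification exercise described earlier would produce. A minimal sketch, with hypothetical field and category names.)

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    LOW = "low"

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"

@dataclass
class AISystemRecord:
    """One entry in an enterprise AI inventory used for risk classification."""
    name: str
    intended_purpose: str   # the risk class follows from the intended purpose
    risk_class: RiskClass
    role: Role

def needs_detailed_review(record: AISystemRecord) -> bool:
    """Rough triage: prohibited, high-risk, and limited-risk entries need follow-up."""
    return record.risk_class in {RiskClass.PROHIBITED, RiskClass.HIGH, RiskClass.LIMITED}

# Example entry for an off-the-shelf recruiting tool used internally.
recruiting_tool = AISystemRecord(
    name="cv-screening-tool",
    intended_purpose="rank job applicants",
    risk_class=RiskClass.HIGH,
    role=Role.DEPLOYER,
)
print(needs_detailed_review(recruiting_tool))  # True
```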
• 82:00 - 82:30 Okay, I hope you're all coming back now.
• 82:30 - 83:00 Once again, if you could give me a
• 83:00 - 83:30 thumbs up or some sort of reaction to indicate that you're there. Okay, I see some hands going up, all right, excellent. Then we continue along the compliance journey. Once again, just to recap: we started with checking applicability, which is where you ask, (a) is the system I'm using an AI system or a general purpose AI system, and (b) is my system
• 83:30 - 84:00 in the scope of the Act? If you say yes to those two questions, you try and identify your obligations. Your obligations depend on the risk class of the system, so is it prohibited, high risk, limited risk, or low risk, and on your role: are you the provider of that system or are you the deployer of that system? By and large the obligations are most extensive for the providers of high-risk systems and, to a
• 84:00 - 84:30 lesser extent, the deployers of high-risk systems. The limited-risk obligations, as you've seen, are substantially lighter, primarily relating to disclosures and watermarking; the obligation for prohibited systems is to stop building or using them; and there are no obligations at all for low-risk systems. And then finally, in terms of general purpose AI models, we've learned that fine-tuning them or
• 84:30 - 85:00 modifying them in some other way is still a bit of a gray area, and if that is what you're doing, it's something you should keep a close eye on for future guidance from either the Commission, the AI Office, or your national authority. So that was checking applicability and identifying obligations. Now you say, I've identified my obligations, how do I meet them? Once again, this section from here on out is mostly relevant to providers of
• 85:00 - 85:30 high-risk systems. But before that, it is important for all organizations to start with AI literacy. This obligation comes from Article 4 of the AI Act and Recital 20. It is important to remember that AI literacy isn't just about upskilling your employees such that they understand in general what the limitations, benefits, and opportunities of AI systems are; it means
• 85:30 - 86:00 that the responsible individuals and teams are upskilled such that they understand their obligations relevant to a given AI system. So if you are the provider of a high-risk system, it is important to impart training to the project teams that are building that system and inform them of all of the obligations that apply at the use-case level. As an enterprise, if you're providing high-risk systems, it is important to upskill your
• 86:00 - 86:30 compliance and legal teams so that they understand what obligations you have at an organization level, and similarly for low-risk systems as well. The same is true if you're deploying a high-risk system: it is important that the people who are overseeing that system, or the team that is responsible for it, understand what all of the deployer obligations are. So it is a reasonably extensive obligation that applies to all companies on an ongoing
• 86:30 - 87:00 basis. So that was just a quick note on AI literacy. Now, in terms of providing high-risk systems themselves, we saw that there are quite a few use-case-level obligations: risk management, data quality and governance, transparency, event logging, human oversight, robustness, cybersecurity, and accuracy. The Act itself merely sets out some normative standards, so all it does is say a company should have a risk
• 87:00 - 87:30 management process, a company should meet certain data quality and governance standards, a company should ensure an adequate level of accuracy, robustness, and cybersecurity. But the Act itself does not tell you how you can meet those requirements. For that you have technical standards. These technical standards are going to be the basis for your presumption of conformity, so if a company is building a high-risk AI
• 87:30 - 88:00 system and applies these technical standards, market surveillance authorities and other institutions will assume that you have built it in conformity with the AI Act. Who's responsible for these standards? Two institutions called CEN and CENELEC. They're very prominent European standard-setting organizations; they were tasked by the Commission earlier, in May of 2023, to develop these standards, and we
• 88:00 - 88:30 expect that these standards will become available sometime in April 2025, though that's still a tentative deadline. Standard setting is a very technically and organizationally complex activity, and the timescales for the AI Act are unfortunately very compressed; typically it takes many years to build standards. But if all goes well, we should see at least a first draft sometime early next
• 88:30 - 89:00 year. All right, the second thing in terms of how you would meet these obligations: it is very important also to keep an eye on your vendors. If you are the provider of an AI system, maybe you're getting your model from somewhere else, maybe you have a data vendor, maybe you're developing, hosting, and serving this model on a cloud service
• 89:00 - 89:30 platform like Azure or AWS. It is possible that some of these vendors may either have to meet the high-risk obligations themselves or give you sufficient information as the provider to meet those obligations yourself, and Article 25 and Recitals 88 to 90 obligate you as the provider and all of your vendors to have written contracts spelling out what this arrangement is going to be in terms of who is
• 89:30 - 90:00 fulfilling which obligations. Ultimately, though, it is going to be the provider of an AI system who is held accountable for the system's conformity or non-conformity. Okay, any questions about meeting obligations? Once again, feel free to raise your hand or just turn your mic
• 90:00 - 90:30 on. Okay, then just to summarize: regardless of your risk class and role, if you're building or using AI systems, you have to train your employees to a certain standard and make them AI literate. In terms of meeting the high-risk requirements themselves, as the provider you have to apply certain technical standards that will become available early next year, if you're lucky. And
• 90:30 - 91:00 finally, if you're relying on a lot of vendors, or maybe you're a vendor yourself and many of your clients are building high-risk AI systems, it is important that the vendors either meet those obligations and those technical standards themselves or provide sufficient information to the provider of the AI system such that it can be built in conformity with the AI Act's requirements. All right, you've met all of
• 91:00 - 91:30 your obligations, you've found the standards that are applicable to you, and you've met them; how do you go about proving that you built a compliant product? Two things you need to know. First, like I said many minutes ago, there is going to be a very complex network of institutions that are responsible for monitoring and overseeing AI systems. This is a nice little infographic from the International Association of Privacy
• 91:30 - 92:00 Professionals. All of the institutions that are named in blue are primarily going to be responsible for overseeing general purpose AI models, so by and large they're going to be more relevant to actors like OpenAI, Anthropic, Meta, Apple, etc. All of the institutions in red are going to be more relevant for enterprises operating within a certain jurisdiction. The most important
• 92:00 - 92:30 authorities here to know of are the notified bodies, which is who you will have to go to to prove compliance, and the market surveillance authorities, who are going to be responsible for fining you, investigating you, and making sure that you are in fact meeting your compliance requirements, classifying your systems correctly, etc. In terms of the conformity assessment procedure itself, there are
• 92:30 - 93:00 two ways you can do this. The first is via internal control, meaning you do not necessarily have to go to a notified body; you just have to prepare documentation proving that you did it, so that if a market surveillance authority investigates you, you have the proof available. Ignore the first row for a second. So all of the systems that
• 93:00 - 93:30 are high risk because of Annex III, meaning the standalone use cases, so AI systems used in education, AI systems used in human resources like recruiting, AI systems used as safety components in critical infrastructure: for all of those systems, by and large, you can do it via internal control, meaning you don't have to go to an authority, you just have to prepare a quality management system and make sure it is available to a market surveillance
• 93:30 - 94:00 authority if they ask for it. All of the high-risk systems that are already part of a product safety legislation, so AI in autonomous vehicles, AI in medical devices, AI in industrial robotics: for all of those systems you have to go to a notified body; they have to approve your system first, so you have to show them all of the documentation and all of your test reports, and they will
• 94:00 - 94:30 certify that it is in fact compliant. And finally, you have biometric-based systems, which have a bit of an exceptional characteristic: if you have a biometric-based system and you've applied all of the harmonized standards that are relevant to those types of systems, you can do it via internal control; if you haven't applied those standards, or those standards aren't available yet, you have to do it via a
• 94:30 - 95:00 notified body. Okay, any questions about demonstrating compliance or the conformity assessment process? Once again, feel free to raise your hand or unmute yourself.
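(Illustrative aside, not part of the session: the choice of route just described can be expressed as a small decision function. A sketch with hypothetical category names, deliberately ignoring the Act's many exceptions and conditions.)

```python
from enum import Enum

class HighRiskCategory(Enum):
    ANNEX_III_STANDALONE = "annex_iii"      # e.g. education, HR, critical infrastructure
    PRODUCT_SAFETY_EMBEDDED = "product"     # e.g. medical devices, vehicles, robotics
    BIOMETRIC = "biometric"

def conformity_route(category: HighRiskCategory, harmonized_standards_applied: bool) -> str:
    """Very rough sketch of the conformity assessment routes described in the talk."""
    if category is HighRiskCategory.ANNEX_III_STANDALONE:
        return "internal control (keep documentation ready for the market surveillance authority)"
    if category is HighRiskCategory.PRODUCT_SAFETY_EMBEDDED:
        return "notified body (third-party assessment before placing on the market)"
    # Biometric systems: internal control only if the relevant harmonized standards were applied.
    if harmonized_standards_applied:
        return "internal control"
    return "notified body"

print(conformity_route(HighRiskCategory.BIOMETRIC, harmonized_standards_applied=False))
```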
• 95:00 - 95:30 Okay then, as a quick summary: notified bodies or market surveillance authorities might assess your compliance, and you should expect guidance and oversight from a wide range of institutions. So once all of these authorities have been notified and we know who they are, make sure to visit their websites once in a while to keep up with what guidance might be applicable to you as an enterprise or to the systems you're developing. And then finally, how you go about proving your system is compliant, or the conformity assessment
• 95:30 - 96:00 process, depends on the type of system you actually have. Was there a question? Okay, then we move on to the last phase of the compliance journey, which is: you've identified your obligations, you've met them via the application of technical standards, you've proved to the relevant
• 96:00 - 96:30 authorities that you've met those obligations, and now your system is on the market. What do you have to do? Four things, primarily. One is that you are obligated to set up a post-market monitoring plan, where you have to keep an eye on the performance of the AI system and, more specifically, on its conformity, so you have to make sure that all of those high-risk requirements that we spoke about earlier continue to be met during the operation
• 96:30 - 97:00 of the system itself. Second, in the event of serious incidents (this is a defined phrase; typically it means there was some harm to health, safety, or fundamental rights), there is a reporting process that you have to abide by, both as the provider and as the deployer of the system. So you have a reporting obligation towards each other as providers and deployers, and you also have a reporting obligation to market
• 97:00 - 97:30 surveillance authorities. Once all of those reports have been made, you also have an obligation to take corrective action, so you might have to stop the system, withdraw it, or bring it into conformity, depending on the type of incident. Article 25 is also super important: there are situations in which the deployer of an AI system might be considered the provider of that system. There are three situations outlined in Article 25: if you apply your
• 97:30 - 98:00 trademark to an AI system; if you take what was previously a low-risk system and use it for a high-risk purpose; or if you change the intended purpose of a high-risk system in such a way that it remains high risk. So there are certain situations in which, even if you're the deployer of a system, if you perform certain types of activities the Act
• 98:00 - 98:30 considers that you are the provider, and then you have to meet all of the provider obligations, so this is something enterprises should keep a close eye on. And then finally, Article 7: probably not as relevant in the short term, but eventually the Commission and various EU institutions and member states will review the AI Act, and they might also introduce new classifications, so they might add new systems to the high-risk category, the prohibited category, or the limited-risk category,
• 98:30 - 99:00 and that's something you should keep an eye on. The second thing about maintaining compliance: unfortunately, the AI Act is just one piece of legislation. You do have to keep your eye on other general legislation, like product liability rules or data protection rules. You also have to comply with any sectoral regulations you may have, like the Machinery Directive or the Medical Devices Regulation. There are other
• 99:00 - 99:30 digital economy laws out there already, prominently the Data Act and the Cybersecurity Act, and then, more relevant to the larger companies, the Digital Markets Act and the Digital Services Act. There are potentially new AI-related liability laws also, although that is now sort of on the back burner and they're revisiting those types of rules. So it's important to remember the AI Act is just one component of an unfortunately much larger
• 99:30 - 100:00 digital economy legislation package. And then the last thing I will say in terms of maintaining compliance is that you do have to keep an eye on timelines. The AI Act entered into force in August 2024, and its rules for certain systems become applicable in a phased manner. For prohibited systems the deadline is
• 100:00 - 100:30 February 2025, so right around the corner; if you believe you are building prohibited AI systems, or using them as a deployer, it is important to start phasing those systems out already. The rules for general purpose AI models, which are not very relevant to most enterprises, kick in next year in August, so August 2025; this is also when you have to be more careful about your fine-tuning or modification
• 100:30 - 101:00 practices. The rules for limited-risk systems kick in in August 2026, so two years from now. The rules for Annex III high-risk systems, so these are the ones in specific areas like human resources, education, etc., start in August 2026. And then the rules for high-risk systems that are already governed by another product safety legislation kick in around August
• 101:00 - 101:30 2027, so you have a lot more time for that. These dates are pretty important: if you're making a system available on the market after those dates, you have to make sure that it's built to specification, but there are also certain types of systems where, if you place them on the market before these dates, so long as you do not substantially modify them, you don't have to meet the obligations. So if you're building
• 101:30 - 102:00 a high-risk system used in recruiting and you place it on the market tomorrow, you do not have to meet the high-risk requirements unless you substantially modify that system after August 2026. These rules are all in Article 111, and there are quite a few exceptions and conditions that apply, so if you think this is something that applies to you, take a look at that.
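(Illustrative aside, not part of the session: the phased dates just listed can be kept as simple data so project plans can check which rules are about to apply. The mapping below only reflects the month-level dates mentioned in the talk, with the first of the month as a placeholder day, and it ignores the Article 111 exceptions.)

```python
from datetime import date

# Application dates as mentioned in the session (phased entry into application).
# Exact days are placeholders; the talk only gives months.
APPLICATION_DATES = {
    "prohibited_systems": date(2025, 2, 1),
    "general_purpose_ai_models": date(2025, 8, 1),
    "limited_risk_systems": date(2026, 8, 1),
    "annex_iii_high_risk_systems": date(2026, 8, 1),
    "product_safety_high_risk_systems": date(2027, 8, 1),
}

def rules_applicable(category: str, on: date) -> bool:
    """Return True if the rules for this category already apply on the given date."""
    return on >= APPLICATION_DATES[category]

print(rules_applicable("prohibited_systems", date(2025, 6, 1)))            # True
print(rules_applicable("annex_iii_high_risk_systems", date(2025, 6, 1)))   # False
```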
• 102:00 - 102:30 All right, any questions about maintaining compliance? Once again, feel free to raise your hands. All right then, just to summarize the section: your job isn't done once the product is on the market, you have to continue monitoring operations; remember that other laws apply as well, and
• 102:30 - 103:00 one of the challenges that many companies and actors in Europe are going to face over the coming years is to make sure that all of the pieces of those puzzles fit together and don't work against each other; and then finally, keep an eye on timelines as well. If you're building high-risk or limited-risk systems you do have some time for compliance, but just make sure that you keep an eye on what those timelines are as part of your project management practices. All right, this basically
• 103:00 - 103:30 brings us to the end of the presentation. Just some FYI on everything we didn't manage to cover: the very detailed classification rules for AI systems; at a very high level I showed you the types of systems that are prohibited, high risk, and limited risk, but each of them once again has more conditions and exceptions associated with it, and obviously we wouldn't have had the time to go through all of
• 103:30 - 104:00 those conditions. We also didn't really discuss in full the requirements for high-risk systems; again, I showed them to you at a very high level, but they're quite detailed and frankly quite complex, both from a legal and an engineering perspective. Next, we didn't do a very big deep dive into all of the roles, so providers, downstream providers, deployers, importers, etc.; there are many other obligations that also come with these roles that are more process-oriented. We didn't discuss in
• 104:00 - 104:30 great detail the institutional arrangements and which notified body you might have to go to, and finally we didn't really flesh out the full process for conformity assessment. The reason we've highlighted them here is that, if you believe you're high risk or limited risk, these are some of the questions you need to take a deeper look at, and they are beyond the scope of an introductory presentation to the AI
• 104:30 - 105:00 Act. All right, that brings us to the end. We have 15-ish more minutes, so if you have any questions I'm happy to stick around and answer them, and if you don't, thank you so much for joining us today. Do you intend to build an AI that explains all of this to us one
• 105:00 - 105:30 day? Thanks, thank you so much, thanks a lot, have a nice one, thank you. Thank you, bye, thanks, have a nice day, thank you, bye-bye, goodbye, bye-bye, thank you very much.
• 105:30 - 106:00 Welcome, thanks, bye-bye.