ISO/IEC 42001:2023 Unveiled

What is the AI Management System Standard ISO/IEC 42001:2023?

Estimated read time: 1:20


    Summary

    ISO/IEC 42001:2023 is a pivotal standard focusing on AI management systems. Presented by CSIRO's Data61 and discussed in a webinar with Standards Australia, it emphasizes the effective governance of AI applications. The standard, recognized internationally and certifiable, is designed to ensure that AI systems are developed, implemented, and managed responsibly, promoting trustworthiness, transparency, and accountability. It is acknowledged as a cornerstone in AI governance, aligning with both global standards and local regulatory frameworks to unify efforts towards responsible AI usage.

      Highlights

      • ISO 42001 provides a framework for responsible AI development and management! πŸ› οΈ
      • Organizations globally are adopting this standard to enhance AI trustworthiness! 🌐
      • The AI management system standard is certifiable, adding a layer of assurance! πŸ†
      • It aligns with major international efforts like EU's AI Act to harmonize standards! πŸ‡ͺπŸ‡Ί
      • A comprehensive guide for implementing AI governance throughout its lifecycle! πŸ“ˆ

      Key Takeaways

      • ISO 42001 is the new gold standard for AI management, ensuring responsible AI practices! πŸ€–
      • It's certifiable, meaning organizations can get a badge of trust for AI governance! πŸ…
      • ISO 42001 aligns with international efforts like EU and US frameworks to harmonize AI practices globally! 🌍
      • The Standard provides a comprehensive framework for AI lifecycle management, from development to implementation! πŸ”
      • It's all about fostering Justified Trust in AI systemsβ€”making AI safer, smarter, and more reliable! πŸ”’

      Overview

      ISO/IEC 42001:2023 is a game-changer in the world of AI standards. It offers a certifiable framework that organizations can adopt to ensure AI systems are safe, transparent, and accountable. This standard is part of a global effort to harmonize AI governance, making it a crucial tool for nations and businesses aiming to implement responsible AI practices.

        In the webinar, experts discussed how ISO 42001 bridges the gap between various international frameworks, like those from the EU and US, fostering an environment of unified regulatory guidance. With 64 countries participating in its development, the standard reflects a global consensus on managing AI risks and enhancing trust.

          The document emphasizes an all-encompassing approach, detailing policies, risk management, and impact assessment techniques crucial for aligning AI practices with ethical and legal expectations. It's not just for big players; even smaller firms looking at AI can benefit enormously by integrating these standards into their operations, future-proofing themselves against evolving regulatory landscapes.
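          ISO/IEC 42001 is a management system standard, so it prescribes policies, processes, and records rather than any particular software or data format. Purely as an illustrative sketch (every field name, the risk levels, and the escalation rule below are assumptions made for this example, not requirements taken from the standard), an organization starting out might keep a simple register of its AI systems, their identified risks, and their mitigations, which can later feed the impact-assessment and continual-improvement processes the standard describes.

          from dataclasses import dataclass, field
          from datetime import date
          from enum import Enum

          class RiskLevel(Enum):
              LOW = "low"
              MEDIUM = "medium"
              HIGH = "high"

          @dataclass
          class ImpactAssessment:
              # One entry in a hypothetical AI impact-assessment register (illustrative only).
              system_name: str
              intended_purpose: str
              affected_stakeholders: list[str]
              identified_risks: list[str]
              risk_level: RiskLevel
              mitigations: list[str] = field(default_factory=list)
              owner: str = ""                    # accountable role, e.g. a named business owner
              next_review: date | None = None    # periodic review across the AI system's life cycle

              def requires_escalation(self) -> bool:
                  # Assumed rule: high-risk or unmitigated systems go to senior management or the board.
                  return self.risk_level is RiskLevel.HIGH or not self.mitigations

          # Example usage: register a customer-facing chatbot and check whether it needs escalation.
          chatbot = ImpactAssessment(
              system_name="support-chatbot",
              intended_purpose="answer customer refund queries",
              affected_stakeholders=["customers", "support staff"],
              identified_risks=["incorrect advice", "privacy leakage"],
              risk_level=RiskLevel.HIGH,
              mitigations=["human review of refund decisions"],
              owner="Customer Operations Lead",
              next_review=date(2025, 1, 1),
          )
          print(chatbot.requires_escalation())  # True, because the risk level is HIGH

          Nothing in this sketch is mandated by 42001; the point is simply that the standard's requirements around documented risk, accountability, and review translate naturally into ordinary, auditable records like this one.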

            Chapters

            • 00:00 - 03:00: Introduction to AI Management System Standard ISO/IEC 42001:2023 Beth Warl from the National AI Center opens the webinar by welcoming participants. The session involves Standards Australia and global experts to discuss the AI Management System Standard ISO/IEC 42001:2023. She begins with an acknowledgment of the traditional lands.
            • 03:00 - 06:00: National AI Center Overview The chapter begins with a respectful acknowledgment of the Aboriginal and Torres Strait Islander people, honoring the traditional custodians of the land on which the speaker lives and works in South Australia. The speaker emphasizes the privilege of living on such culturally rich land and greets all participants of the call, especially recognizing the indigenous attendees.
            • 06:00 - 09:00: Launch of Australia's AI Sprint and Webinar Content Overview The chapter discusses the establishment and purpose of Australia's National AI Center. It was set up with the support of the Department of Industry, Science and Resources, along with foundational partners Google and CEDA. The center's primary focus is on promoting growth in the AI sector.
            • 09:00 - 12:00: Introduction to ISO/IEC 42001 and Standards Australia In the chapter 'Introduction to ISO/IEC 42001 and Standards Australia,' the focus is on Australia's AI ecosystem and initiatives aimed at supporting small and medium-sized businesses in accessing and understanding artificial intelligence. The chapter highlights a particular program, the Responsible AI Network, which collaborates with key partners, including Standards Australia, to advance and enhance Australia's AI capabilities. The chapter acknowledges the contributions of these partners in driving and uplifting the country's AI landscape.
            • 12:00 - 15:00: Overview of AI Standards and ISO/IEC 42001 Key Features The chapter titled 'Overview of AI Standards and ISO/IEC 42001 Key Features' focuses on fostering collaboration with organizations interested in Australia's AI responsible practices. It encourages entities to connect, join their community, and subscribe to the National AI Center's updates. The chapter mentions regular events and emphasizes upcoming quality content, along with available recorded materials.
            • 15:00 - 18:00: International Collaboration and Certification in AI Standards The chapter discusses the importance of international collaboration and certification in developing AI standards. It highlights the availability of webinars as a resource for understanding artificial intelligence and the need for ethical and responsible usage. Additionally, it mentions the launch of Australia's AI Sprint, encouraging organizations and startups involved in AI solutions to participate.
            • 18:00 - 21:00: Policies, People, and Processes in AI Risk Management The chapter begins with the announcement of online resources and prizes available for a program aimed at developing AI solutions. The program is highlighted as a valuable resource and a catalyst for organizations exploring AI development. The section ends with a handover to speakers from Standards Australia.
            • 21:00 - 23:30: Implementation and Compliance of AI Management Standards The chapter focuses on the "Implementation and Compliance of AI Management Standards" and features a discussion led by Sarah Singer from the Strategic Initiatives team at Standards Australia. The chapter highlights the excitement around the release of an AI management standards publication, which has been much-anticipated and discussed for nine months in previous webinars. The publication is seen as a crucial part of the AI framework, now ready to be utilized in the market. The chapter sets up further exploration of key factors related to AI management standards.
            • 23:30 - 26:30: Details on AI Management System Standards and Objectives The chapter discusses the importance and relevance of a particular standard related to AI Management System Standards and Objectives. It starts with an acknowledgment of the traditional landowners, the Gadigal people of the Eora Nation, in Sydney, Australia. The chapter mentions that there will be a presentation followed by a Q&A session, encouraging participants to ask questions at any time.
            • 26:30 - 30:30: Role of AI Standards in Global Regulation and Market Compliance The chapter discusses the development of a 2-hour e-learning training module focused on AI standards, created in partnership with the Australian National University. It includes involvement from a team who will gather questions and might distribute an FAQ document if needed. There is also a mention of a QR code that directs to an expression of interest page for this training module, emphasizing the importance of engaging with such educational resources for understanding global AI standards, regulation, and market compliance.
            • 30:30 - 37:00: Q&A Session on AI Management Standards and Implementation The chapter titled 'Q&A Session on AI Management Standards and Implementation' discusses a new e-learning program focused on providing an in-depth understanding and practical application of the AI management system standard. This initiative is seen as an exciting venture ready for market launch, expected to be available for learners by the end of next month. The chapter introduces four speakers who will delve into these subjects.

            What is the AI Management System Standard ISO/IEC 42001:2023? Transcription

            • 00:00 - 00:30 good morning everyone my name is Beth Warl and on behalf of the National AI Center it's my privilege to welcome you today to today's webinar with Standards Australia and some global leading experts in the AI management system standard so before I start today I want to first acknowledge the lands that we meet upon today as the traditional lands
            • 00:30 - 01:00 of the Aboriginal and Torres Strait Islander peoples I want to acknowledge and welcome everyone um to the call today and particularly those who are indigenous Aboriginal and Torres Strait Islander people I have the privilege of living and working on Kaurna land down here in South Australia and it's just a wonderful space to grow up and live um and I want to acknowledge all of the people that are joining the call
            • 01:00 - 01:30 today before we start today I wanted to just very quickly talk about the National AI Center we were established a few years ago now with support from the Department of Industry, Science and Resources we have two key foundational partners Google and CEDA and we're very grateful for their support and continuing involvement in our operations we exist to do a couple of key things so we focus on growing
            • 01:30 - 02:00 Australia's AI ecosystem and we also work to support small and medium-sized businesses to access and learn more about AI uh the particular program that I run is the Responsible AI Network and we have a number of key partners with whom we work including Standards Australia and um we are very grateful for their contributions to really driving and uplifting Australia's AI
            • 02:00 - 02:30 practice um we are very interested in hearing from organizations who might be interested in uh work working with us on Australia's AI responsible AI practices please get in touch with us equally please join our community and please make sure that you subscribe to the National AI Center for our email updates we run these events all the time and we have excellent content coming up for you as as well as all of our recorded
            • 02:30 - 03:00 webinars online so it's a fantastic resource for those interested in artificial intelligence and using these tools in a responsible ethical way I want to quickly acknowledge that we have recently uh launched Australia's AI Sprint so for any organizations out there or perhaps you know organizations that are developing AI um uh Solutions or our startups looking at this space please make sure that you find um
            • 03:00 - 03:30 further information online about this program there's a heap of resources and prizes up for grabs and it's uh going to be a fantastic resource and again an excellent Catalyst for organizations that are looking at developing Solutions in AI space so with that I will hand over to Sarah and the standards Australia speakers thank you hi hi thank you very much Beth um
            • 03:30 - 04:00 I'm Sarah Singer I lead our critical and emerging tech work here in the strategic initiatives team at Standards Australia uh and I'd like to thank you all for joining the webinar today we're really thrilled to finally have this standard out we've been talking about it in these webinars for probably nine months now so it's very exciting to have such an essential publication as part of the AI framework released and in the market and ready for use um today we're going to go through some of the key factors that make
            • 04:00 - 04:30 this standard so relevant and so informative and before we go much further I'd just like to acknowledge the land on which Standards Australia meets today which is the land of the Gadigal people of the Eora Nation um they are the traditional land owners of this land here in Sydney um following the presentation today from our four speakers there will be a chance for Q&A so please drop any questions in the chat as you go don't worry about whether they're relevant to
            • 04:30 - 05:00 the topic at the time we have a team that will pull them all together and we will try to get to them if there's a lot of questions and we don't get to them we can pull together an FAQ document afterwards which we will distribute in addition you might see a little QR code down the bottom corner um please scan that and it'll be taking you to an expressions of interest page for a 2-hour e-learn training module that we've developed in partnership with the Australian National University and their school
            • 05:00 - 05:30 of cybernetics that e-learn is specifically dedicated to giving you a very deep dive and practical interpretation of the AI management system standards so it is a very exciting piece of work that we're really proud to put into market and you should be able to access that by the end of next month and start learning more about the standard now next slide please uh our four speakers today are
            • 05:30 - 06:00 Aurelie Jacquet uh Dr Kobi Leins Harm Ellens and Professor Lyria Bennett Moses we're really thrilled to have all of them I'll give you a very brief bio of each one of them just before we start uh but before we do that we're going to have a quick poll so how familiar are you with the AI management system standard there should be a box popping up any minute now um please participate and we will pull together the results and let you know
            • 06:00 - 06:30 throughout the session our first speaker today is Aurelie Jacquet who I hope most of you know if you have any inclination of what's happening in Australia with AI she is very much at the spearhead of that work she's the director of Ethical AI Consulting and very much a leading figure in the Australian context in terms of AI work she also chairs Standards Australia's committee on artificial
            • 06:30 - 07:00 intelligence representing Australia at ISO and IEC she is also a co-chair of the first accredited global certification program for AI developed by the Responsible AI Institute and the World Economic Forum so over to you Aurelie I think you're on mute Aurelie starting well it's a pleasure to be here
            • 07:00 - 07:30 thank you everyone um I think we had a few sessions on AI standards so I'm just going to do a very generic introduction um to remind you what the ISO standards are and how AIMS 42001 fits in the broader picture and then I'll pass on to my talented fellow presenter um so what are the ISO international standards on AI if you've not been in our sessions um you
            • 07:30 - 08:00 will know that standards are pretty much um what we explained is that standards fit into pretty much everything we do they're fairly invisible in the car that you drive in the buildings that are being built so they are part of everyday life um and they make sure quality products and services are built and delivered so what our work is with the committee and the experts that are on the call is we're working on the international
            • 08:00 - 08:30 standards the ISO standards on AI so we're ensuring um through the standards that are being developed that AI is built in a safe and responsible manner um these are a few numbers on the screen that show that um we have been quite active since 2018 there's 25 standards that have been published um 31 that are in development so you see um no time to rest so we have plenty to
            • 08:30 - 09:00 keep us busy currently there's 64 countries involved so very much a global initiative uh with over 150 experts from all around the world um but what we're here to talk about um today is very much that one special standard um I tend to say it's the crown jewel of all the standards it's the AI management system standard and why are we here to talk about it it's because it's actually um enabling the
            • 09:00 - 09:30 certification um of organizations using AI systems so if we go to the next slide I'll explain a little bit more about that point next slide please thank you no no too fast um so how standards can foster justified trust so obviously AI is an emerging technology um we have been building
            • 09:30 - 10:00 uh repeatable processes um from a technical perspective but also from a governance perspective within the standards and how we enable certification is through ISO 42001 the standard or AI management system standard so if you've got one number to remember from today it is 42001 and why because it is the first international management system standard for the safe
            • 10:00 - 10:30 and reliable development and implementation of AI so it's really here to help businesses and organizations develop a robust um AI governance framework and the particularly important piece about that standard that's a second fact to really remember and we're going to hammer home today um is it's the only AI management standard that is certifiable so if you have come across uh management standards that
            • 10:30 - 11:00 are widely used and particularly um important uh in the day-to-day running of organizations I'll refer you to um 9001 um on quality management or 27001 on information security these are an intrinsic part of how organizations work and ensure security of information and quality management within an organization so 42001 is the same for AI and you see um there
            • 11:00 - 11:30 was a release um in the UK of a paper on AI assurance and um an explanation of how AI assurance works and they made a really good point on how effectively 42001 can really help gain justified trust so next slide now thank you
            • 11:30 - 12:00 so as I said ISO 42001 is enabling certification that's the key differentiator um it is the only standard that specifies requirements through the AI life cycle for an organization to manage AI systems responsibly so it is about managing risk it is about focusing on trustworthiness
            • 12:00 - 12:30 transparency and accountability so very much implementing all those AI ethics principles that we've been discussing in the past um what's important to know is the AI management system standard maps to and is a building block on the work we've done on data model and organizational governance next um obviously one of the important aspects of the AI management system
            • 12:30 - 13:00 standard um is that it's interoperable and it's creating a lot of attention you see in the US NIST already as part of their road map on the AI risk management framework they actually map to um the AI management system standard and this is um an extract from it so you actually see how the two processes are
            • 13:00 - 13:30 very much aligned so again that's showing um how adopting the AI management system standard is allowing building blocks and allowing interoperability next similarly um in the EU you'll see that the EU AI Act is um a legislation um that's been passed but it's relying on standards on European standards and
            • 13:30 - 14:00 effectively um the EU Commission has been part of our work since the very beginning in 2018 next slide um under the EU AI Act everyone knows there's a CE marking so the certification of AI systems um is one of the tools in the artillery of the EU AI
            • 14:00 - 14:30 act and this is explaining how this um certification will be delivered um but the one important thing to remember from that how how does the certification of the EU link to ISO next slide sorry um so uh it's um I jumped a little bit um so the way it links to the two links together is effectively
            • 14:30 - 15:00 there's an agreement that allows us to work very closely together between the European standards bodies and um the work that we lead at ISO so there's an agreement there where um we proactively work together to make sure that the ISO standards are informing the European standards so which leads me finally to this uh last slide why should organizations implement uh
            • 15:00 - 15:30 42001 um it's very much because this ISO standard allows the development and use of AI systems that are trustworthy transparent and accountable it's allowing organizations to implement those principles because we are delivering through 42001 robust repeatable processes that are recognized internationally so on that note I'll pass to uh my colleague Harm um
            • 15:30 - 16:00 to give you a bit more um information and depth into how 42001 works within its ecosystem thanks Aurelie uh we're gonna jump to Kobi next actually sorry um yeah all good um Kobi is an honorary senior fellow at King's College and she provides us with expert advice as a member of Standards Australia's AI standards committee she's co-founder of the IEEE's responsible innovation of AI
            • 16:00 - 16:30 and life sciences and she's an advisory board member of multiple other AI initiatives over to you Kobi thanks Sarah and thanks to everyone for having us here to have this talk it's unusual to see other people in the room because the four of us have spent so much time looking at each other on screens throughout COVID writing these standards so it feels a little bit like a party to be able to come to you with some information about this standard finally and I'm just going to riff really quick um off the information from the poll
            • 16:30 - 17:00 because I think it's really a really good place for me to start talking which is my angle is going to be a little bit more from risk which is obviously the most exciting and interesting part of the standard um no not really we all have interesting Parts but it's really interesting that nearly six% have purchased the standard on the call and I suspect this is already a self- selecting group because if you're attending this meeting you're probably already interested in how to implement but that's that number of people who have said that they purchased the standard is actually fairly indicative of a lot of of businesses so at the moment a recent MIT study said that
            • 17:00 - 17:30 something like 90% of companies think they need to do something to govern AI they know that they need to be moving in this space about 6% have policies and I'm going to talk about policies people and processes through a risk lens today but before I do that I also wanted to situate this conversation about standards a little bit more um an interesting statistic that I came across recently uh reading Nancy Leveson's book on systems engineering mentioned the fact that the largest loss of life in New
            • 17:30 - 18:00 York prior to uh the September 11 incident was actually boats catching fire and people drowning on boats in New York it was a massive issue and the trouble was that you had steamboats that used wood and there were no regulations about how these boats were used and people didn't know how to swim largely at that time so what happened was that a group of experts or a group of people who were concerned said we need to have some standards around this and so boat building started to be standardized the
            • 18:00 - 18:30 deaths started to reduce and as Aurelie's already touched upon the idea of standards is often the beginning of where regulation follows I always talk about sort of cobbling the cobblestones of where regulation will follow but also really interestingly international uh in the ISO context standards can also be local in order to respond to a particular problem so although no boats are on fire in the case of AI what we have seen are a lot of cases where things have gone wrong and a lot of cases in the headlines just this week we had a Canadian airline where a chatbot provided false information they were held to be liable
            • 18:30 - 19:00 company where a chatbot provided false information they were held to be liable every week there are increasing uh amounts of litigation and I think we're going to see more and more of those so although everyone thinks they should have policies finally the the standard has arrived and they are starting to but the policies are not enough and I think a lot of you might be joining this call even if you don't know a lot about standards going well so what there's this new 4201 document what does it actually mean the first step that companies will do is that they will purchase the standard hopefully and then
            • 19:00 - 19:30 they will use the standard but even doing that is not enough the policies that you're going to implement internally in your company should ideally ref reflect a board risk appetite and flow down into your company okay so you've got the policies and this is where a number of companies sit right now a number of companies have had data ethics policies for a long time and again a lot of research shows that even with data ethics policies and AI Frameworks it's not really enough to manage AI systems what this standard does is provide a framework not just for
            • 19:30 - 20:00 the policies but also the people and the processes and that's what I really wanted to dig into today the processes are the hard bit the processes are where you embed actual systems to review AI that are consistent um ideally linked with your other risk management processes and that also are looking at risk in a holistic way and I'll get to that a little bit in a little bit more detail when I talk about the people but when we're talking about risk we're not just talking about the litigation risk we're not just talking about customer harm risk we're actually talking about
            • 20:00 - 20:30 broader risks and my slogan around this is always that data ethics and AI governance is actually a set menu not a smorgasbord you don't get to choose which things you do and don't include what you do is have a corporate risk appetite and within that you're going to need to consider lots of different kinds of risks they might be unintended they might be um using data in ways that people wouldn't expect they might be having outputs that are going to affect people negatively and we've seen a lot of those already in the public sphere but thinking about that from a process
            • 20:30 - 21:00 perspective how do you do an AI impact assessment how do you capture that risk consistently how do you actually make sure that you're getting a view that is actually aligned with your company's if you're a small or a large company with your particular positioning and then importantly if you're a global company or if you're thinking globally this is a global landscape so if you want to engage you're going to need to be thinking about these things not just for what's best practice in Australia we don't tend to lead in this space what it is going to be is be thinking about what's best practice elsewhere and what others is going to expect and so we're going to see an uplift I think in the in
            • 21:00 - 21:30 the increase of maturity and the last piece is the people and it's the piece that's most often overlooked Aurelie's already touched on the fact that this standard requires certification you're going to need people who are qualified to do the audits you're going to need people who are qualified to do the AI impact assessments but you're also going to need a fundamental people change within companies where people are able to call out risks and this is also in the standard to be able to say look you know I'm a data scientist I don't have access to the C-suite but I've got a
            • 21:30 - 22:00 concern I've got a concern that like in the case of the Volkswagen case you know this car is doing a particular thing when it's being watched but not when it's not and this is going to have effect so we're going to need to have a culture of people where people can actually raise concerns and be participating in ways that are not happening right now that's a really really short and very um an incredibly brief overview of the risk aspect of aims but I'm really excited to hand over to my colleagues and um continue this conversation I really look forward to having your questions
            • 22:00 - 22:30 thanks thanks Kobi some really great points made there we're um we're going to quickly have a poll uh so again same function you'll find it in the box on the webcast on the right hand side uh how likely is it that your organization is going to implement something like an AI management system standard that will run throughout this presentation so please use that next we're going to pass over to Harm Ellens um Harm is uh the director of advisory services
            • 22:30 - 23:00 at Virtual Inc in Australia he possesses over two decades of AI experience specializing in technical ethical and business facets he co-chairs the ISO AI standards committee working group three uh on trustworthiness and is uh generally looking at overarching goals in fostering responsible ethical sustainable and inclusive AI practices for stakeholders it is with a
            • 23:00 - 23:30 great privilege to welcome Harm to the conversation thank you Sarah um yes let's um go to the first slide so um we've um talked at a high level about the standard and um we're going to do a little bit of a deep uh well let's call it a shallow dive we're going to do a deep dive next week and I'm sure you'll get an invitation for that um shortly um I'd
            • 23:30 - 24:00 like to like Kobi did uh first of all acknowledge the wide group of Australian experts that have made a contribution to this so the four of us uh certainly um have been involved but we had a large group of experts from industry from academia and from government very very actively participate and there are whole sections that are in the standards that wouldn't have been there if it wasn't for the advocacy of the Australian
            • 24:00 - 24:30 experts so I really really want to um express my gratitude for their voluntary contribution into all of this so the management standard is a horizontal one which is sort of another word for maybe generic or neutral U it can be applied across all Industries and it can be applied to a very very very wide variety of AI applications but already we see that certain industries like financial services are actually looking to make uh
            • 24:30 - 25:00 domain specific uh variations or uh additions to to the standard so um this is in effect the first um of what is expected to be a series of management systems for uh AI now as we mentioned before from about July onwards we expect that uh Conformity assessments will be um become available so as an organization you can have your AI
            • 25:00 - 25:30 management system certified and as Aurelie pointed out um if you operate your systems in Europe um this conformity assessment might actually become mandatory for certain use cases um we mentioned financial services chatbots they are definitely within the scope of conformity assessment um as is facial recognition for example and then uh finally um I think Sarah will touch on this uh a little bit
            • 25:30 - 26:00 later this standard has been identically adopted in Australia as AS 42001 um and to get yourself a copy go to the Standards Australia website uh next slide please so um we touched on this um already and it's worthwhile um sort of looking at 42001 as being informed by a number of other AI
            • 26:00 - 26:30 standards and and guidelines um but one of the uh the first uh things to consider when we uh develop uh and we go about uh implementing the AI management system standard is to Define uh an AI policy and the objectives for the use of the AI system and um information about that can be found in uh an additional standard uh that relates to the
            • 26:30 - 27:00 governance implications of the use of AI and um at a high level it requests uh organizations to refine their policies particularly when it comes to AI and to set clear objectives for the whole organization as to what it is that the company seeks to achieve uh and use that as a touchstone for the subsequent work that's being undertaken um it also
            • 27:00 - 27:30 U makes some recommendations around the resourcing uh both for the AI management system itself and for uh the requirements of Staff in the organization now we've touched on this a couple of times um already um we talk about um risk management um a lot and there's a separate standard that helps organizations identify which standards sorry which risks are specific
            • 27:30 - 28:00 to AI and to further help inform you about the thing that makes an AI system different from a general IT system there is a free standard uh 22989 that can give you an insight into um what the management system um expects you to manage from an AI ecosystem perspective and equally important it
            • 28:00 - 28:30 assigns roles to uh organizations so you might be an AI producer or you might be an AI service provider or you might be an AI uh consumer as an organization uh there are roles for individual AI users and the AI management system has to take these roles into consideration when it comes to selecting and forming policies
            • 28:30 - 29:00 and procedures so uh in the box on the right hand corner you see uh that should have been another standard uh and we might touch on that in a second ignore that for the moment um so let's go to the next slide Aurelie touched in her introduction on the fact that these standards are
            • 29:00 - 29:30 formed almost as an ecosystem which means that uh many of the various other ISO standards inform how these management systems operate and are constructed and uh if you are a professional risk manager you will be very very familiar with the 31000 series on risk management and when you review 42001 you will see
            • 29:30 - 30:00 uh a great deal of familiarity there similarly if you are from uh a general quality management background or you are an information security expert or you are familiar with privacy regulations you will find that all of those standards follow a similar approach and that the approach that 42001 follows will literally feel very very familiar
            • 30:00 - 30:30 we go to the next slide so bringing all of that together um I just want to highlight here a new standard that um you can already um download and purchase from the ISO website and that is ISO 42005 on AI impact assessment and the reason why I call this standard out
            • 30:30 - 31:00 here is risk management tends to be somewhat inward focused for organizations it's the risk to the organization that typically takes precedence in many many discussions so over the years um additional practices have emerged that particularly focus on literally the impact that a system might have on the end users those that
            • 31:00 - 31:30 are affected by the use of that particular system and 42005 will provide uh deep guidance on how to perform an impact assessment uh and the impact assessment is a cornerstone within 42001 now the next couple of slides uh I'll skip over briefly because we will go and do a deep dive on them next week
            • 31:30 - 32:00 but if we go to the next slide um here are the major clauses out of the standard itself so um it requires organizations to define the context in which they operate um leadership commitment a commitment to developing policies and defining roles and responsibilities plan for risk for impacts for change and for achieving the objectives to make resources
            • 32:00 - 32:30 competence awareness possible within an organization communicate and document the management system itself and uh how it then is uh frequently reviewed and improved next slide we talked about uh objectives before and as I said we'll go into a deep dive around these various objectives um next week but you will see
            • 32:30 - 33:00 a number of objectives here that might look fairly generic uh but then there are also some very very AI specific elements in here such as transparency and explainability next slide here are some of the uh AI specific risk sources and again we will touch on them next week and uh we will go into uh more detail as to how to
            • 33:00 - 33:30 identify them how to assess them and how to manage them final slide and Kobi touched on the role and importance of processes right and the way that we actually sort of instantiate these processes in artifacts is through control objectives and again um this uh information is contained in the standard itself so it
            • 33:30 - 34:00 provides some uh initial guidance to an organization that goes to implement the the system and uh there are some uh very detailed uh suggestions on how organizations might go about it uh and we finally wrap up with policies because this is the part that uh many many uh end users in
            • 34:00 - 34:30 particular will eventually uh be confronted with and when you um look at these uh some of these may already exist like AI policies um so they have to probably be modified in order to conform to the expectations of the management system um others you may have to go and set up from scratch so with that I'll hand it back to Sarah who will then hand it over to
            • 34:30 - 35:00 Lyria thanks very much Harm uh and our final speaker for today last but not least uh Lyria Bennett Moses whose um personality is larger than life as is her biography um Professor Lyria is director of the UNSW Allens Hub for Technology, Law and Innovation she's the associate dean in the Faculty of Law and Justice and she's made significant contributions to the development of AI standards through her
            • 35:00 - 35:30 involvement with Standards Australia and the IEEE Lyria over to you thank you um so if we could go over to the first slide this is going to take a little bit of a zoom up um so we've gone a lot into the details of the standards but what I wanted to do is I guess explain where this sits in all the things because ultimately if you're an organization there's going to be many sources of influence as to what good looks like for artificial intelligence lots of
            • 35:30 - 36:00 different organizations government and non-government coming up with their own description and formulation so starting off with government at the moment most of what we're seeing is sticks sermons and carrots um so if we're thinking about the sticks that's ultimately what law is um so we've got these are sort of rules you know in statutes and and regulations that sit under those statutes where organizations have to comply with those those rules or there some kind of consequence um and we can
            • 36:00 - 36:30 divide that into two types um on the one hand there's the proposal that government is working on now to have a law that will provide rules specific to high-risk artificial intelligence that's still in development and we haven't seen exactly what that looks like yet but there's some discussion um from the minister in terms of the direction that's going but very much what we do know is that's going to be focused on high-risk use cases at the same time we also have a whole bunch of other law that applies
            • 36:30 - 37:00 that doesn't mention artificial intelligence but clearly applies to it this includes laws around things like discrimination um the Privacy Act um as well as sort of general law that applies to artificial intelligence as much as it applies to anything else so negligence law in tort for example is applicable even if AI is involved and contract's another example of that um as well a good recent example actually is um Air Canada's chatbot I don't know if people
            • 37:00 - 37:30 have followed that story but where Air Canada ran a chatbot on its website the chatbot gave incorrect information to a customer about whether a refund would be available in the particular circumstance and ultimately was held to what the chatbot said um in terms of its obligation to that customer so that kind of law remains right um and organizations will have to continue to comply with it at the same time the government also does um what I might call sermons right it tells
            • 37:30 - 38:00 organizations what they ought to be doing but in ways that don't come with a stick so Australia's AI ethics principles are an example of that um in addition it can offer carrots so for example it can use things like its own procurement policy to say we are only going to procure particular categories of goods and services that meet particular needs so organizations that follow that prescription will have an extra customer in government whereas organizations that don't will miss those
            • 38:00 - 38:30 procurement opportunities so government has all of those kinds of tools at its disposal um at the same time government isn't the only player here um telling people what good looks like so standards really sit in that non-government example right standards are coming from the iso or from other organizations um and what they are telling what they are doing is they are saying this is what we think you should do in the context of AI if you are
            • 38:30 - 39:00 designing or using um AI right so that is sort of another universal source of advice um now they vary in terms of various things so some standards like um 42001 um is a management system standard right it's basically giving organizations guidance as to the policies they need to write the processes they need to put in place to do AI well but there are other standards that can provide a lot more technical information on use cases and so forth so
            • 39:00 - 39:30 there's a whole universe of Standards there um ethical principles is really sermons coming from elsewhere right there are hundreds of lists of ethical principles that have been put out there in the universe and they're not only by governments they're by organizations of academics industry bodies various coagulations of these things that have also put out various kinds of guidance and it goes beyond that and I've used a sort of very broad heading of requirements because requirements come from many different places um they can
            • 39:30 - 40:00 come from um entities that have set up markets so if for example um Apple's um App Store says we only accept apps that meet particular requirements then if you want to be part of that market you have to comply with their terms for entry um but customers will have requirements um far more broadly um insurers may have requirements associated with particular insurance products so that if you want to get insurance in you know a particular kind of insurance you might have to meet their requirements and
            • 40:00 - 40:30 requirements might come from things like being a member of an industry Association so requirements can come for a whole bunch of different places um but the I guess the the core point of all of this is that standards are just one thing in this broader universe but having said that they have a really important role to play and I'm going to come back to that after a very brief interlude um next slide please and the interlude is about definitions because when I say everyone has a different Vision as to what good
            • 40:30 - 41:00 looks like for artificial intelligence everyone also has a different meaning that they ascribe to the word artificial intelligence so in the standards Universe in in um in ISO standards you can find the definition of artificial intelligence in 22989 which is sort of the basic conceptual standard um that sits behind all of the AI standards and I've put its definition there on the screen screen now that was actually put in place to largely mirror the original oecd
            • 41:00 - 41:30 definition um fairly closely um and you can sort of see the oecd definition below um but the oecd definition has itself been amended since 22989 was put out and you'll see a sort of um sort of chronology here mirroring M much more closely the definition that is now the EU definition in its AI Act so lots of different meanings um now in a sense the
            • 41:30 - 42:00 definition becomes most critical when you're talking about law right so when the EU puts a definition in its AI act that definition matters in a really fundamental way because you're either in or out of the ACT depending on whe whether you're in or out of that definition um which is whether the sticks will affect you or not um whereas in standards they can afford to be a lot looser in terms of definition because all because standards are ultimately voluntary so organizations can interpret the the the definition given in the
            • 42:00 - 42:30 context of their own applications and decide where that standard is appropriate but the EU definition and I just thought I'd point this out is particularly interesting because it ultimately comes down to one word so if you delete everything in that definition that's optional so varying levels of autonomy um outputs such as um May exhibit you take out the optional element and you look at the core everything hinges on the word infers
            • 42:30 - 43:00 ultimately AI systems are systems that can do this verb inferring which I think creates a whole bunch of problems because I'm not sure that's a very clear way of distinguishing computer systems from each other um because anyway many reasons and maybe for a different occasion but just to note that important thing that everyone's talking about obviously overlapping Concepts but slightly different things so moving to the next slide okay so I just wanted to highlight
            • 43:00 - 43:30 I suppose through an example looking at reliability and safety how these different things might play out right so if you look at for example Australia's AI ethics principles you can see there's a principle around reliability and safety throughout their life cycle AI systems should reliably operate in accordance with their intended purposes um and then there is some detail which generally refers to things like proportionate safety measures monitoring and testing risk management and clear allocation of responsibility now in a
            • 43:30 - 44:00 sense that's all good but it doesn't really tell you the how or what particular things you need to put in place so if you go to 42001 you can see a little bit more on the how right it starts with this idea that organizations actually have a policy and there's some guidance as to the kinds of things that need to be in that policy in terms of allocation of responsibility it discusses assigning roles right with particular responsibilities um in terms of risk management it goes into some detail on
            • 44:00 - 44:30 what that looks like and how it links with impact assessment um there's also a lot of um guidance on those kinds of support that need to be given for all of these things to happen and a kind of life cycle approach of evaluation and Improvement which is flagged in the ethics principles but there's a lot more detail in the standard there's also then in addition to sort of the the the sort of front end of the standard appendices that set out the controls and these provide a lot more detail including
            • 44:30 - 45:00 implementation guidance around those elements right so around things like roles and responsibilities around the different kinds of resources required including data and tooling requirements around impact assessment how you manage that with a system life cycle and data um around mechanisms to report record and act on concerns and adverse impacts so that if you like it fills in the kind of um high level principles you can find in
            • 45:00 - 45:30 Australia's AI ethics principles and indeed ethics principles generally I think have that fairly vague quality with a lot more kind of like this is what we actually need to do or these are the kinds of steps we need to consider taking now law sits if you like in parallel with all of that and some of the stuff in I mean the the standards are not going to tell you how to comply with the law they can't they're International standards and the law and each jurisdiction is different so law remains fundamentally important however
            • 45:30 - 46:00 they are related so for example if you comply with the standards first of all you're less likely to end up in litigation around some kind of problem but also in the context of litigation depending on the nature of that litigation so big complex question but compliance with the standards will certainly be relevant in many cases um in terms of um protecting the organization against liability so standards have a role with respect to law but they're not the same thing as the separate obligation to comply with
            • 46:00 - 46:30 law so that's kind of my overall very high level view um and happy to take questions thank you very much thank you very much Lyria so we are going to move straight to some Q&A um we've had some really great questions from the audience so we're actually going to start there um Kim Worthington's asked because the standard is voluntary what drive is there for organizations to adopt it also how countries can remain globally
            • 46:30 - 47:00 competitive with upholding or regulating AI when they have different approaches i.e. the EU versus the US uh regulatory frameworks so I will throw that to our presenters and see if someone wants to put their hand up and if we have any takers if not um I will throw over to you Aurelie maybe I'll start with um the second part of the question um how can our countries remain globally
            • 47:00 - 47:30 competitive when regulating AI um and while there's an impression that there's fragmentation um standards are building bridges um you see NIST has recognized 42001 and the work that ISO is doing as a key piece so it is effectively um mapping to that work equally the EU Act they say the way they use um and leverage the work that we do is to
            • 47:30 - 48:00 ensure interoperability so that question of interoperability is very front of mind um and as I said because we have all those um 56 countries that are involved and ensuring there's a coordination process um so that's the one piece to keep in mind that the work that we do helps build those bridges effectively um or um after that um everyone can understand once they
            • 48:00 - 48:30 have those bridges the common pieces to adapt to the local values or the local requirements but at least it gives a common denominator um to promote innovation and enable um to scale AI responsibly great thanks Lyria I can see your hand's also up thank you so I thought I'd come in on this um mainly on maybe the first part of the question um what drive is there for organizations to adopt it I mean ultimately organizations have to do
            • 48:30 - 49:00 good governance right they have to consider all the issues and think about the impact of their products this is merely a if you like it's almost like a checklist of things to consider as to how to do that well so by having that checklist organizations can actually make choices so the standard does have a measure of choices organizations can look at a particular control for example and decide that for various reasons that control doesn't apply to their product um but it nevertheless they've thought about it they've addressed it they've
            • 49:00 - 49:30 documented why they're not doing that thing um and and pro so it's a useful tool if you like rather than a a mandatory thing the second thing I wanted to um sort of address is the sort of um the sort of fact that what you end up getting from different governments is these kinds of different Regulatory regimes and I think standards can have a really important role to play depending on how government plays that so what standards what what governments have an option to do um when they're formulating
            • 49:30 - 50:00 their own sort of jurisdictions legal requirements is to um reference standards as a means of complying with various legal requirements so this is a government Choice um but governments can for example say a very high level statement um in legislation and then in the guidance documents point to standards as one way for organizations to provide evidence that they have um complied with that legal requirement and in that context where that is done um
            • 50:00 - 50:30 standard there's obviously an additional incentive for organizations to comply with those standards but even if it is not done explicitly it is nevertheless often a useful means to demonstrate to a regulator that you have taken the kinds of things that legislation deals with seriously but standards cannot themselves um be sort of an alternative to complying with legal obligations just to be clear obviously um organizations um have to comply with legal obligations and standards don't
            • 50:30 - 51:00 themselves internally deal directly with the law of any jurisdiction great thanks Lyria so we have another question from Sarah which is are you aware of any more detailed guidance slash processes being worked on for the impact assessment um we're working on an AI impact assessment framework currently and she's curious to hear how other people are approaching that would anyone from our panel like to take that
            • 51:00 - 51:30 question if happy to field it if no one else will so I think I mean the question on everyone's lips is how do you do impact assessments and what does this actually mean so again we've talked at a really high level about this standard this is a parent standard and there are baby standards that are coming under it and one of those baby standards actually has very explicit instructions on how to do some of these um the impact assessment models there are many available and I'm sure Sarah's aware aware of those that different places have offered up
            • 51:30 - 52:00 different questions but from my personal experience in doing this professionally I find that the questions and the checklists are not enough again it comes back to those processes and people that you need to have the right people in the room at the right time the thing that I would also say about the impact assessments is that if there's a standalone tacked-on assessment at the end of a long line of other assessments that you have in your organization that's problematic it takes too long it's consequential you know you don't have the right people in the room the reason I love AI and the reason I work in this space is that it's complicated and it's
            • 52:00 - 52:30 hard and you have to actually work across disciplines to have conversations together and so impact assessments in an ideal world I think the companies that are going to pull ahead are the ones that are going to roll the assessments together and be able to do them in a much faster documented more fulsome way because they are going to be able to use the tools more quickly and be able to use them safely and that is really the key here I don't think many companies have got this right but the companies that do will win at this game and it's all about the governance if you can govern these things effectively through impact assessments that are rolled into
            • 52:30 - 53:00 your other impact assessments, and that are documented and auditable, which the standard is going to require, you can then use the tool safely and avoid the litigation, the risks, and the sticks and stones that Lia was referring to. I love your model, Lia, that's fantastic. But yes, definitely look out for that standard, and there will need to be greater expertise in this area; there's definitely a shortage of that skill set in Australia at the moment. Kobi, while I've got you, there is a question related to that which talks to governance, but more the board risk appetite. Paul from the audience has
            • 53:00 - 53:30 asked what that looks like, and are you seeing fear or excitement from boards in terms of adoption of anything like this? Yes and yes is the answer, Paul: yes there is fear, and yes there is excitement. More generally, I think boards tend to be a little more cautious because they're personally and severally liable, so they tend to be asking a lot more meaningful questions. The enthusiasm tends to come from execs who want to show that they're using AI, because all of the news reports are saying that their neighbors are doing it already, and so there's a huge fear of
            • 53:30 - 54:00 missing out. What I think is really important from this piece is not actually whatever risk appetite is set, although that's also clearly significant for the impact assessments, but that it sets the tone for an organization, in a way that you should ideally then be capturing whatever that risk appetite is in KPIs for executives, so that they have a performance requirement in addition to the impact assessments. The business owners of those tools, when they're reviewed, will need to be accepting a certain kind of risk, so it puts the onus back on those individuals to do that. So again there's a structural people piece to
            • 54:00 - 54:30 the idea of approaching it this way. There's not 100% agreement across the board that you need a risk appetite; that's a personal view of mine, but I think it's very helpful, because you need the top-down and the bottom-up for this to work, because it's an organization-wide change in behaviors that you really need to see. So it's quite a big piece of work to do AI governance properly. Yeah, and it's a good point, because a lot of these organizations then want to go on to say that they're certified, to have that badge and say, look, we're doing the right thing. Kush from the audience has asked how does any
            • 54:30 - 55:00 organization become a certifying authority for the management standard, and what are the pathways for conformity assessment at the moment around AI management systems? I'm going to throw to you, Har. Thank you, and thank you Kush, for that question. Kobi mentioned the baby standards that sit underneath the parent standard, and one of the baby standards being developed at the moment is 42006, which basically outlines the requirements for organizations, and
            • 55:00 - 55:30 already, standardization and certification organizations are preparing examinations for that. This is happening in Canada, this is happening in Germany, and there will be exams for auditors and lead auditors as well as exams for implementers and lead implementers. This is broadly
            • 55:30 - 56:00 expected to become available in Australia sometime around the July time frame. In addition, ISO is working on a handbook that will help organizations and auditors actually construct the management system and perform an audit, so there's a lot of work going on in that space. Thanks H. We've got, I think, our final question; maybe we might have time for one more, let's see. Claudus asked
            • 56:00 - 56:30 about the evolving standards like ISO 42001 focusing on management: what are the other presenters' views on how standards might influence the development of human-centered AI, and will that have an impact on shaping legal frameworks to better align AI technologies with human values and ethics? I think from the ethics and governance perspective I'll throw to you
            • 56:30 - 57:00 to kick this off. Thank you. So humans are always at the forefront; they're the ones using AI and developing AI, so the focus is always on that human perspective. And obviously, when we look at this technology, we have very much in mind the impacts that AI systems can have on individuals and society at large, and that's why, to bring it
            • 57:00 - 57:30 home, one of the central pieces is the impact; that's where we focus. We have a standard dedicated to identifying impacts and managing and mitigating those impacts on individuals, groups of individuals, and society, so that's very much front of mind. I'm actually going to pass to H; he's been working on a very important piece when we look at
            • 57:30 - 58:00 impacts on society; he's been leading the work on sustainability in AI. So H, I'll let you talk to this one. Thank you. Yeah, and at the risk of turning this into a number soup, we are working on a technical report that basically documents the environmental sustainability aspects of AI, and I've been the project editor for that. We're going through an international commenting process at
            • 58:00 - 58:30 the moment that has provided a great deal of positive feedback, and we expect to publish that report towards the end of the year. The other project that's going on actively, and if anybody's interested in participating in it, reach out to me or to Sah, is a technical specification that provides guidance on addressing societal concerns and ethical considerations for artificial intelligence systems. It's been a very hard-fought project to get off the
            • 58:30 - 59:00 ground; there are many interests that would prefer this not to go ahead, but we've managed to gain international consensus that this project should go forward, and it will put some formal guardrails in place. We use the word guardrails a lot, but not much of it has actually been formalized, and this project will formalize guardrails to address societal concerns and ethical considerations in AI systems. So come and join me and other Australian experts if
            • 59:00 - 59:30 you think that you want to help put those guardrails up in an international context. With that, what an excellent way to wrap up our panel today. Thank you very much to all of our fantastic speakers and everyone who joined us today; it was a really great session. Just again, we are launching our training course, which will be looking specifically at the AI management system standard in depth, with the Australian
            • 59:30 - 60:00 National University and the School of Cybernetics, so equally please come join that course and learn a bit more. I will throw to Beth to close out the session, but thank you from Standards Australia. Thank you so much, and thank you to all of those people who joined us today; we've had a fantastic response to today's webinar. We will make sure that this recording is available soon on our YouTube channels, and we'll also be sharing some follow-up information for all of
            • 60:00 - 60:30 those who have registered today. Please again remember to subscribe to the National AI Center for access to more informative, exciting, engaging webinars such as this. I also want to thank all of the speakers today; certainly my hands are almost bleeding from the amount of responses I had to type into the chat, such was the enthusiasm and engagement that we had from the audience. So thank you again, and we look
            • 60:30 - 61:00 forward to seeing you at our next webinar. Thank you, everyone.