Researching AI and Health Law in an International Context
Estimated read time: 1:20
Summary
In this engaging webinar, recorded in Washington, DC, titled "Researching AI and Health Law in an International Context," experts gather to discuss the intersection of AI, health law, and international standards. Co-chaired by Yulen Hofferberg and Heather Casey of the American Society of International Law's International Legal Research Interest Group (ILRIG), the session delves into the complexities of international law, particularly healthcare's evolving nature with the advent of AI. Renowned experts, including Dr. Jenny Gesley, Dr. Barry Solaiman, Glenn Cohen, Vera Lucia Raposo, and Cynthia Ho, contribute insights on AI's regulatory landscape, legal challenges, and the international policies shaping healthcare.
Highlights
Hosted by ASIL's International Legal Research Interest Group (ILRIG), the webinar sets out to explore global AI and health law 🌎
Dr. Jenny Gesley moderates an impressive panel of renowned experts from around the world 🌍
AI's everyday presence in health, from diagnostics to personalized medicine, is spotlighted 🏥
Legal uncertainties around AI, such as liability and informed consent, are debated 🌐
Vera Lucia Raposo discusses European AI laws and their implications 🚀
Cynthia Ho examines intellectual property issues related to AI and healthcare 🔍
Key Takeaways
AI's role in healthcare is rapidly evolving, presenting both challenges and opportunities globally 🌍
Regulating AI requires collaboration and can benefit from shared international standards 🌐
The legal landscape for AI in healthcare is complex and multifaceted, needing ongoing adaptation and research 📚
Ethical guidelines are crucial in shaping AI's application within healthcare 🍏
Healthcare systems worldwide must balance innovation with patient safety when integrating AI 🤖
Overview
The webinar kicks off in Washington DC, featuring a diverse panel of legal and medical experts shedding light on the international dynamics of AI in health law. With AI becoming an everyday reality in healthcare, this panel tackles its integration, challenges, and regulatory status across the globe.
Dr. Barry Solaiman initiates the discussion with an enlightening overview of AI's role in healthcare, emphasizing cross-border collaborations and the EU AI Act as a regulatory landmark. Glenn Cohen examines AI's legal implications, particularly focusing on liability and informed consent, illustrating a complex legal fabric.
Vera Lucia Raposo brings a European perspective, discussing facial recognition technology and its regulations. Cynthia Ho rounds off the panel by delving into IP rights within the AI domain, pointing out current challenges and potential solutions for the global healthcare landscape.
Chapters
00:00 - 02:00: Introduction and Moderator Welcome The chapter 'Introduction and Moderator Welcome' begins with a warm greeting from the host, Yulen Hofferberg, based in Washington, DC. Acknowledging the diverse locations of the panelists, including those across the Atlantic, the host sets a welcoming tone for the webinar focused on the intersection of AI and health law. The introduction establishes the foundation for the ensuing discussions and engagements with the panelists.
02:00 - 09:00: AI and Health Law: An International Perspective The chapter 'AI and Health Law: An International Perspective' discusses the international context of AI in health law, primarily focusing on the efforts of the American Society of International Law's International Legal Research Interest Group (ILRIG). Co-chaired by the narrator and Heather Casey, ILRIG prioritizes professional development in foreign, comparative, and international law (FCIL). The group offers a platform for discussion among legal professionals, scholars, and attorneys to exchange knowledge and enhance research capabilities in FCIL.
11:00 - 19:00: Overview and Development of the Research Landscape In the chapter titled 'Overview and Development of the Research Landscape,' the focus is on the resources, methods, techniques, and best practices integral to legal research. The chapter highlights the organization's activities in this domain, such as organizing presentations, publishing a newsletter, and maintaining an informative website to showcase the latest advancements in the legal research profession. Members emphasize the importance of considering interdisciplinary and multicultural aspects in contemporary foreign, comparative, and international law. They assert that strong research foundations are crucial for the formation of global legal policies and norms. The chapter also introduces ILRIG's role as a platform for discussing unique and analytical aspects relevant to the American Society of International Law (ASIL).
19:00 - 28:00: Key Legal Issues in AI and Healthcare In this chapter, a webinar is introduced that highlights the intersecting fields of artificial intelligence (AI) and healthcare. The focus is on the legal challenges and considerations that arise as AI technologies continue to develop and influence the healthcare sector. Dr. Jenny Gesley, a seasoned expert in foreign law, serves as the moderator. She has previously been a co-chair for ILRIG and is currently a senior foreign law specialist at the Law Library of Congress. Her extensive legal research includes work on countries such as Germany, Switzerland, and Austria.
28:00 - 31:00: Medical Device Regulations and Global Perspectives The chapter titled "Medical Device Regulations and Global Perspectives" discusses various jurisdictions, including Liechtenstein, the Netherlands, and the European Union, and U.S. governmental branches including Congress, executive agencies, and the judiciary. It highlights their role in research assistance to the public. The chapter also introduces Dr. Gesley, who is fluent in German and French and holds advanced degrees in law including an LLM, a JD equivalent, and a PhD, with her dissertation covering financial market supervision. Her work was recognized with the Baker McKenzie Award in 2015.
31:00 - 41:00: Research Challenges and Future Directions The chapter titled 'Research Challenges and Future Directions' discusses the prestigious Baker McKenzie Award, which is annually awarded to authors of exceptional dissertations in the field of commercial law. One particular dissertation, recognized with this award, has been published as a book. Additionally, the chapter introduces Dr. Jenny Gesley, an esteemed professional who is admitted to the New York State Bar and also qualified to practice law in Germany. The chapter sets the stage for Dr. Gesley's role as a moderator, welcoming participants from both sides of the Atlantic and addressing potential challenges and future prospects of research in the field.
43:00 - 57:00: Liability and Tort Law in Medical AI The chapter delves into the intersection of liability and tort law within the context of medical AI. It begins with a reflection on the rapid advancement and growing ubiquity of AI technologies in everyday life, setting the stage for an exploration of how these technologies are impacting the field of health law on a global scale. This includes examining the roles and responsibilities of different stakeholders when AI systems are used in healthcare settings, and how legal frameworks are evolving to address challenges associated with this technology. Key themes include the potential for AI to address global challenges in healthcare and the legal implications of its integration into medical practice.
57:00 - 70:00: Informed Consent in the Age of AI The chapter explores the role of AI in health law and medicine, highlighting its applications in efficient resource allocation, administrative streamlining, diagnostic improvements, personalized treatment, early disease detection, and medicinal product design.
72:00 - 86:00: Facial Recognition in Healthcare The chapter discusses the challenges faced by the healthcare sector in the context of AI, particularly focusing on access to health data and liability for harm caused by defective AI products. It mentions that regulators globally are evaluating whether existing legal frameworks are adequate or if new legislation is needed to address these challenges. The chapter specifically highlights that the European Union was the first to adopt a comprehensive AI regulatory framework with its AI Act.
88:00 - 106:00: Intellectual Property Issues in AI and Health Law The chapter titled "Intellectual Property Issues in AI and Health Law" discusses the EU's regulatory approach to AI, particularly focusing on AI-based software used for medical purposes, which is classified as a high-risk AI system due to its applications and potential impact. This classification entails specific legal and regulatory requirements. Additionally, the chapter touches on the EU's efforts to establish a liability framework for AI and mentions the creation of a European health data space, aimed at facilitating the secondary use of electronic health data for research and innovation.
111:00 - 114:00: Audience Q&A: Global Integration of AI Laws The chapter discusses the importance of reliable data for AI training, with a focus on the sensitivity of health data. It emphasizes the necessity of implementing safeguards to protect such data. Additionally, it mentions the Council of Europe's framework convention on AI, human rights, democracy, and the rule of law, set to open for signature on September 5th, 2024. This treaty is contrasted with the EU AI Act.
114:00 - 121:00: Concluding Remarks The chapter discusses the AI treaty from the Council of Europe, emphasizing it as a framework convention that outlines broad commitments for its parties, with specifics left to national legislators. It highlights the diverse legal landscape AI in healthcare touches, pointing out the absence of a single legal framework. It raises the question of whether new legal frameworks are required for AI in healthcare, or how AI should be integrated into existing laws.
Researching AI and Health Law in an International Context Transcription
00:00 - 00:30 Good morning from Washington, DC, and good afternoon to some of our panelists on the other side of the Atlantic. I am Yulen Hofferberg, and I have the honor to welcome you to today's webinar, "Researching AI and Health Law in an
00:30 - 01:00 International Context," put together by the American Society of International Law's International Legal Research Interest Group (ILRIG), which I co-chair together with Heather Casey. ILRIG is dedicated primarily to its members' professional development in the areas of foreign, comparative, and international law (FCIL). ILRIG provides a forum for discussion among legal information professionals, legal scholars, and attorneys. ILRIG enhances its members' opportunities to share their knowledge about available FCIL research
01:00 - 01:30 resources, research methods, research techniques, and best practices. To that end, we organize presentations, publish a newsletter, and maintain a website that reflects the most recent developments of the legal research profession. Our members are particularly mindful of the interdisciplinary and multicultural aspects of contemporary foreign, comparative, and international law, and believe that global legal policy and norms cannot exist without strong foundations built on exhaustive research. ILRIG is committed to being a forum for discussing ASIL's unique and analytic
01:30 - 02:00 needs, and today's webinar serves that purpose, in particular with the developing field of artificial intelligence and how it interacts with law. It is my great pleasure to introduce today's moderator, Dr. Jenny Gesley, who previously served as a co-chair for ILRIG up until just last month. Dr. Gesley is a senior foreign law specialist at, and acting chief of, the Foreign, Comparative, and International Law Division II at the Law Library of Congress. She conducts legal research on Germany, Switzerland, Austria,
02:00 - 02:30 Liechtenstein, the Netherlands, and the European Union for members of Congress, executive branch agencies, and the federal judiciary, and provides research assistance to the general public. She is fluent in German and French. Dr. Gesley holds an LLM from the University of Minnesota Law School, a JD equivalent from the Goethe University of Frankfurt, Germany, and a PhD in law. Her PhD dissertation, written in German, on financial market supervision in the United States, covering national developments and international standards, was awarded the Baker McKenzie Award in 2015. The Baker
02:30 - 03:00 McKenzie Award is an annual prize given to the authors of exceptional dissertations related to commercial law. The dissertation has been published as a book. Dr. Gesley is admitted to the New York State Bar and is qualified to practice law in Germany. Welcome, Dr. Gesley. Thank you, Yulen, for that kind introduction. And yeah, good morning, everyone, or good afternoon if you're joining us from the other side of the Atlantic. My name is Jenny Gesley, as Yulen just said, and I will be moderating
03:00 - 03:30 today's discussion on researching AI and health law in an international context. So, let me first give you a brief introduction to the topic and then I will introduce you to our excellent panel of speakers. To me, it seems like not so long ago, artificial intelligence or AI was a thing of the future. You know, something that might someday help us address global challenges. Well, that day is here now. It seems like we encounter AI every day around us. Be it when we ask
03:30 - 04:00 ChatGPT for advice or when we have AI park our car; it's everywhere. And especially in the field of health law and medicine, there have been big advancements in using AI technology. AI, for example, might help facilitate the efficient allocation of healthcare resources, streamline administrative tasks, improve diagnostics, or develop personalized treatment plans. AI is also used for early detection of certain diseases or for designing medicinal products. Some
04:00 - 04:30 of the challenges are, for example, access to health data and liability for harm caused by defective products, just to mention a few. So regulators around the world are looking at these issues, and they're evaluating whether the current legal frameworks are sufficient to address them in an AI context or whether new legislation is needed. And as we probably all know, the first jurisdiction to adopt a comprehensive framework to regulate AI was the European Union. The EU's AI Act entered into
04:30 - 05:00 force on August 1st, 2024, and it adopts a risk-based approach to AI. So AI-based software which is intended for medical purposes would be considered a high-risk AI system, and that classification then carries certain requirements with it. The EU is also working on a liability regime for AI and has recently established a European Health Data Space, which enables the secondary use of electronic health data for research and innovation, among other things.
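To make the risk-based logic concrete, here is a minimal sketch in Python (a deliberate simplification with invented function and tier names; the actual AI Act ties high-risk status to sectoral product legislation such as the EU Medical Device Regulation, and this is not legal advice):

```python
# Hypothetical illustration of the AI Act's tiered, risk-based approach:
# AI software intended for medical purposes generally lands in the
# high-risk tier, which triggers additional obligations.

def ai_act_risk_tier(intended_medical_purpose: bool,
                     prohibited_practice: bool = False) -> str:
    if prohibited_practice:
        # e.g., social scoring: banned outright
        return "unacceptable risk"
    if intended_medical_purpose:
        # High-risk systems carry duties such as risk management, data
        # governance, logging, human oversight, and conformity assessment.
        return "high risk"
    return "limited or minimal risk"

print(ai_act_risk_tier(intended_medical_purpose=True))  # -> high risk
```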
05:00 - 05:30 Because, as we all know, AI needs reliable data to be trained, but health data in particular is very sensitive, so that always needs to be taken into account, for example, by establishing sufficient safeguards. In addition to the EU, on September 5th, 2024, the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the AI treaty, opened for signature. But unlike the EU AI Act,
05:30 - 06:00 the AI treaty from the Council of Europe is just a framework convention, which establishes broad commitments by its parties but leaves the details to national legislators or other instruments. So, as you can already see from this very brief overview, AI in healthcare and in medicine touches many areas of the law, and there's not just one specific legal framework that applies. So, we have to ask ourselves: do we need one or more new specific legal frameworks just for AI in this context, or how does AI
06:00 - 06:30 fit within the existing legal frameworks? And I hope that we can address some of these questions today with our panel. So let me introduce you to our speakers. We are lucky to have assembled such a knowledgeable group. All of them have contributed, either as editors and/or authors, to the recently published Research Handbook on Health, AI and the Law. So our first speaker is Barry Solaiman, who will give us a general overview of
06:30 - 07:00 the topic, including a discussion of how research in this area emerged, how the book contributed to framing some of the issues, and how the field might develop moving forward. Dr. Barry Solaiman is the associate dean for academic affairs and an assistant professor specializing in healthcare law at HBKU College of Law in Qatar. He's also an adjunct assistant professor of medical ethics and clinical medicine at Weill Cornell Medicine-Qatar, where he serves as co-director on the intersection of law and medicine, and a fellow of
07:00 - 07:30 Harvard Medical School's Center for Bioethics. He holds a PhD in law from the University of Cambridge and was formerly editor-in-chief of both the Cambridge International Law Journal and Medicine and Law. He's co-editor of the research handbook, which is the leading book in the field. He has published in leading journals on the regulation of AI in healthcare and was lead principal investigator for a grant at HBKU that created guidelines for the
07:30 - 08:00 development of AI in healthcare research. His work on AI has received the Science and Sustainability Award from the British Council Qatar. So, welcome, Barry. After that, we will hear from Glenn Cohen, who will focus on liability and informed consent. Professor Cohen is one of the world's leading experts on the intersection of bioethics (sometimes also called medical ethics) and the law, as well as health law. He also teaches civil procedure. He's an elected member of the National Academy of Medicine and has advised former US Vice
08:00 - 08:30 President Harris on reproductive rights, discussed medical AI policy with members of the Korean Congress, and lectured at legal, medical, and industry conferences around the world. His work has been frequently covered by or appeared in media venues such as PBS, NPR, ABC, NBC, CBS, CNN, the New York Times, the Washington Post, the Boston Globe, and others. Thanks for joining us. After that, we will hear a presentation from Vera Lucia Raposo, who will give us a
08:30 - 09:00 European perspective on this topic. Vera Raposo is a legal scholar with a broad academic and professional background in law and technology. She holds a law degree, a postgraduate diploma in medical law, and both a master's and a doctoral degree in legal and political sciences from the University of Coimbra. Currently, she's an associate professor of law and technology with aggregation and serves as vice dean at NOVA School of Law, NOVA University Lisbon. She has taught at various institutions, including the University of Macau, the University of
09:00 - 09:30 Coimbra, and Agostinho Neto University in Angola, and has taken on postgraduate supervisory roles at the University of Hong Kong and guest lecturing at the National Yang Ming Chiao Tung University in Taiwan. Vera is actively involved in the European Association of Health Law and serves as a governor of the World Association for Medical Law. She's also on the editorial board of the European Journal of Health Law and contributes as a peer reviewer for several scientific journals. In 2024, she was named a fellow of the Hastings Center. She has
09:30 - 10:00 published numerous studies, particularly in the fields of digital law and biomedical law, focusing on issues such as AI, medical liability, and the legal challenges presented by emerging technologies like the metaverse. Thanks for being here. And last but not least, we have Cynthia Ho, who will walk us through some of the intellectual property issues that are relevant with regard to medical AI. Cynthia is the Clifford E. Vickrey Research Professor of Law and the director of the IP program at Loyola University Chicago School of Law,
10:00 - 10:30 which is also part of the Beazley Institute for Health Law and Policy. She teaches a variety of IP classes, including one that intersects with health law, entitled Global Access to Medicine: A Patent Perspective, as well as civil procedure. Her research often focuses on IP issues at the intersection of domestic and international law that intersect with health. She's actively involved in a number of IP groups and organizations, including as part of an IP working group regarding AI. Prior to joining the faculty at Loyola, Professor Ho was an
10:30 - 11:00 associate at the IP boutique Fish & Neave, where she primarily focused on litigating high-technology cases involving patents, trade secrets, and unfair competition. Okay. So, welcome, Cynthia, and yeah, I'm excited to get this discussion started. So, let's do that. Barry, I'll hand it over to you. Thank you so much.
11:00 - 11:30 It would be great if we could make the presentation full screen, perhaps. Yeah, I'm trying that right now. It should be
11:30 - 12:00 I'm not sure. Is it working? On my side it's not full screen, but it's okay. It's fine. We can just go through it as the slide deck, unless it needs to be full screen. Sorry about that.
12:00 - 12:30 I apologize for the technical difficulties. You could select 'from beginning,' maybe. Would that work? Yeah, I'm trying. I just tried it, but it was working before. I do apologize for this.
12:30 - 13:00 I'm fine to go ahead if it's not going full screen; it's okay. Okay. Sorry. So, I'll get started in any case. Thank you all for attending this talk. I know the focus of this is on researching the field, so I thought perhaps I would first give an overview of the sorts of developments in the field, where the focus might be moving forward, and how the issues have been framed so
13:00 - 13:30 far. My co-panelists, Glenn Cohen, Vera Raposo, and Cynthia Ho, will go into more specific legal aspects in certain areas that I'm going to point out. So, on this opening slide here is the Apple Vision Pro, and it just goes to show the investment that's going into this space, as even devices like this are seeing quite heavy investment from medical companies for uses in virtual and other care, with AI
13:30 - 14:00 integrated into them. But the use of AI in healthcare is quite broad. We see it being used in radiology for scans, in cardiology, and more. I don't propose to go into the use cases in much detail; my colleagues may give some specific examples, but given the short space of time, my focus is on highlighting the key legal issues. So we can go to the next slide
14:00 - 14:30 please. I've got nothing to declare. The next slide, please. So, first I'll briefly highlight how the research landscape in this field emerged. I'll then speak a little bit about the Research Handbook on Health, AI and the Law and how it's helped to frame the field. Then I'll speak a little bit about how this might manifest by looking at the legal issues in specialist fields, and then I'll point to a potential
14:30 - 15:00 framework for how the research in this area might look moving forward. As I go, I will give you QR codes to some of the articles that I've written that might be useful reference points as you do your research, and you might be able to see how those articles fit into a broader research landscape, because it is quite a complex picture that's forming. The things that I discuss aren't necessarily the correct way of doing
15:00 - 15:30 things or the best way to approach this field, but I think they're there to offer a structure at least, and a way of approaching this topic. So, next slide, please. On how the research landscape emerged: there's a first QR code there, to an article that I wrote for a multidisciplinary audience, which is there essentially to help frame how healthcare lawyers and bioethicists came to the field. AI posed new challenges
15:30 - 16:00 for technologies used in health care. So, for example, the way that AI intersects with matters of informed consent has raised debates and issues of explainability. There are lots of ways that AI has raised new legal challenges; these will be raised in the other talks. And one thing that the healthcare lawyers in this space did in the beginning was to examine existing laws and to ask, well,
16:00 - 16:30 how do those existing laws apply to these new challenges, and can we adapt those existing laws to deal with those challenges? And invariably the answer is often that yes, we can adapt those laws, but only to a certain point. Beyond that point, we may need new law. And during that process of analysis, we saw the growth of lots of guidelines around the world, soft-law guidelines by the OECD, by the EU, by the World Health Organization, and so on, that helped inform some of those gaps and
16:30 - 17:00 governmental and intergovernmental discussions about developments in this area. And so this has in some ways informed the formation of new law; as Jenny mentioned in her introduction, the AI Act is an example of that, but we'll see this inform more work that's going on moving forward. So, that's broadly how this research landscape emerged. My co-editor Glenn, who's on the panel, was one of the very early
17:00 - 17:30 researchers in this field. So, we can go to the next slide, please. Yeah, yes, that's correct. So, here's a QR code to our book, which is open access. And in the book, we try to frame at least some of the legal issues that emerge. It's not necessarily the first framing that's been done; there have been other articles. In
17:30 - 18:00 chapter one of the book, Glenn and I give quite a comprehensive overview of how the field came to be and what the early research was, and that's, I think, a really useful reference point for those just trying to situate themselves when they're trying to do research in this field. Chapter one is called 'A Framework for Health, AI, and the Law,' and I'd encourage you to go to that as a resource. But when we were framing the book itself, at least, if we can go to the next slide, please, we
18:00 - 18:30 identified six key legal issues that seem to arise time and again in the field. Three of them have to do with data and arise quite often: discrimination, algorithmic fairness and bias, and data privacy and security; those are three separate issues. Then there are issues of medical liability, which Glenn will speak about next, informed consent, and IP, which Cynthia will talk about, as examples. So these are the
18:30 - 19:00 sort of core issues that tend to come up time and again. The extent to which they are relevant to particular research depends on the application of AI we're talking about and the segment of health care in which AI is used. Beneath the law is obviously ethics, and so we also have chapters which consider secular ethical issues and religious ethical perspectives, such as Christian ethics and Islamic ethics. But there are others in the field; those are just to
19:00 - 19:30 give a flavor for how they might frame the legal issues that arise in a given jurisdiction and context. So, for example, where I'm living in Qatar, Islamic bioethics does frame the formation of healthcare laws. That would be true to a lesser extent, of course, of the religious ethical perspective in a more secular society, where there might be different perspectives. So we have these core six legal issues in any case, and if we go to the next slide, please. So, one of the inquiries we did
19:30 - 20:00 then was a sort of examination of, well, what are the countries around the world doing right now to look at these legal issues for the use of AI in health care? And so we surveyed the US, the EU, China through the Greater Bay Area, the UK, three GCC countries, Singapore, and South Korea. And invariably what we found was the development of medical device regulations to incorporate guidelines to
20:00 - 20:30 do with devices that incorporate AI; so, guidelines to cover such devices. Before, you would have medical device regulations that applied to static devices that don't change over time, but now we're seeing the evolution of these guidelines to deal with more adaptive algorithms as they're incorporated into medical devices. The way this manifests can be a little bit nuanced, though; it's not just that
20:30 - 21:00 straightforward. Some countries are more robust in how they incorporate these guidelines than others. If I look, for example, at Singapore and Saudi Arabia, they're pretty robust. The FDA is looking a lot into this space, and there's a lot of scholarship in this area. So if your research is in the medical device space, these chapters are very useful in that regard. One big problem, though, is that a lot of what is coming out of these countries, can we go one
21:00 - 21:30 step back, one slide back, please? A lot of what is coming out of those countries, in my opinion, does not really deal with these core legal issues in at least a coherent or comprehensive way. The developments in those countries might in some ways touch on these issues or these legal topics, but there isn't a coherent framework, as applied to health care, that brings together these different legal issues. And so I think there's work to
21:30 - 22:00 be done in that regard, which I'll speak about in a moment. There is some evolution in data privacy laws and considerations about how data protection, for example, applies to AI, but again, there hasn't been, for example, a law or some big guidance that's come out on medical liability issues as they pertain to AI, or some big change in guidance that I've
22:00 - 22:30 seen, at least, in a national health service somewhere, or guidelines on informed consent in a comprehensive way; so there's definitely work to be done. Maybe Glenn might have updates during his talk. So, can I move forward a couple of slides? To continue down this path: we have these six legal issues, and we know the countries around the world aren't really dealing with those in a coherent way. So I've published a few articles in the last year in which I sought to see, well, how
22:30 - 23:00 do these legal issues manifest in specific areas of health care? For example, that first article on the left side of the screen is about the use of generative AI, ChatGPT, in the mental health context, and what I found in my analysis was that privacy is the biggest issue there, followed by consent for use and then data security. When I looked, in another article on the right side there, at the use of AI in post-acute and long-term care, we see the
23:00 - 23:30 priority is more on ethical issues such as autonomy, discrimination, and bias, and then data privacy and accuracy, followed by issues of consent, licensing, and liability. Can I go to the next slide, please? Here are a couple of other articles; by the way, these are all open access. So here's another example: the use of AI in psychiatric wards. Within that, again, privacy is a core issue, in terms of the use of AI within cameras, perhaps. Consent for use
23:30 - 24:00 from the wards themselves becomes a specific issue, as does data privacy and security in the context of mass amounts of extensive, sensitive mental health data, which is designated as extra sensitive by law. The metaverse was mentioned earlier, and we're both interested in this area. In my article on virtual devices that integrate AI, how do these issues play out? Well, liability is a big issue there, and informed consent. So it's the same
24:00 - 24:30 legal issues, but the hierarchy of their importance that seems to come out of the analysis is a little bit different, and that's one thing to pay attention to when researching this area. And if I go to the next slide, please: with that in mind, the big question then is how the field looks moving forward, and how it might develop. I think this is a big open question, and different people will have different opinions. I'm at least giving an example
24:30 - 25:00 here from a perspective paper that some co-authors and I here in Qatar have written; it's coming out in npj Digital Medicine within the next month. When we were looking at Qatar, the UAE, and Saudi Arabia, we saw a pattern emerge that might offer some sort of framework for how to research this area. That is, instead of just looking at these legal issues in isolation, or at the development of medical device regulations, we are identifying gaps across the whole life
25:00 - 25:30 cycle of AI: from research and development, when AI is arising, to the medical device approval process, including devices that might not fall within medical device regulators' remit, like the FDA's (there are some devices that won't be captured by those), and beyond that, to the use of AI in clinical practice and what that means for governance and oversight at that stage. So, next slide, please. For the first stage, we've
25:30 - 26:00 already developed in Qatar research guidelines for healthcare AI development, for which I put the QR code on there. That points to having robust guidance at the very initial stage, when you're developing AI, that contemplates these legal issues and the use of AI early on, so you're preempting those issues. Next slide, please. Next slide. And the next
26:00 - 26:30 one. If we look at the Saudi example, that is an interesting one for phase two. Saudi Arabia already has interesting guidelines which incorporate separate guidance from the World Health Organization on assessing whether a medical device that incorporates AI complies with certain standards. So if we go to the next slide, we have here the name of the guidance, the MDSG10. Next slide, please. And here is an analysis
26:30 - 27:00 that I did of that, which shows, in a separate article, how that phase two might go. So how does the Saudi Arabian approach fit into that? Next slide, please. On the back end, then, the question is: once we have an AI system and it's in practice, in use, how should we develop guidelines in this area? Interestingly, Abu Dhabi and Dubai in the United Arab Emirates have developed guidelines, and actually policies with
27:00 - 27:30 the force of law, that sort of govern that space. So, for example, if a hospital uses AI, what must it comply with? We wrote a chapter in the book that covers that, but there's more research to be done in that space. So, next slide, please. Sorry, one slide before. On that back end, if we think about policies that talk
27:30 - 28:00 about implementation and practice, this is where the legal issues that we identified in the book, like medical liability and informed consent, will play a particularly important role. How do we tie those legal rights that exist in law, through the use of AI, back to the patient? I've discussed elsewhere whether we need, for example, a patient bill of rights in this space that can thread these different parts together. So, next slide, please. Just to bring this together (I know it's a whistle-stop tour): what we've seen is the emergence of this research
28:00 - 28:30 landscape and what healthcare lawyers sought to identify as new challenges in the space. With the book, we tried to frame some of the core legal issues in the space and identify what regions are doing in that regard; what they're doing is, I'd say, premised a lot on medical device regulations, but more work is needed. We also need to think about how those laws or legal issues apply within subspecialties, because their priority, or the emphasis on them, can differ depending on the specific
28:30 - 29:00 application. And then finally, can we bring this within a broader governance framework that doesn't just capture AI in a piecemeal fashion, but considers the development of AI from the very beginning, at research and development, to its being incorporated into a device, and then its use in clinical practice on the back end? These are just some of, I think, the big issues that we still need to look at and develop with research in this area. So, happy to discuss more in Q&A, and
29:00 - 29:30 thank you all for being patient with me through this pretty broad overview. Thank you. Thanks, Barry, that was very interesting and a good way to get us started. I just have one follow-up, and we can talk about it more during the Q&A. You talked a lot about guidelines being developed at the moment, and these are all still non-binding. Do you think at some point we'll have a more
29:30 - 30:00 binding, maybe even international, treaty to tackle these issues? What's your opinion on that? In general, I'm doubtful about international treaties, but the one thing that makes health care unique is that at least the standards of medicine are global. The standard of care is based on best practices and science, and so I think at least we have a basis for agreement there on something that's broad and applicable, and there is work
30:00 - 30:30 going on at the World Health Organization in that regard. But whatever guidelines you develop, I think, have to be developed cautiously. The guidelines we've developed in Qatar were done in step with the Ministry of Public Health from day one, over the course of this three-year grant project. And still now, at the end of it, we need to think very carefully about moving them from non-binding to binding, and that, I think, has to be a very careful process so that there aren't any unintended consequences for researchers in the space. But happy to discuss more in Q&A. Thank you. Thanks. Okay,
30:30 - 31:00 let's turn over to Glenn. Excellent. Can everybody hear me? Okay. Informed consent. Excellent. So, we go to the beginning of my slide deck, just that first slide. There we go. Next slide, please. Here are my disclosures, just in case you're interested. Next slide. Just some examples of what we're talking about. Barry did an amazing job
31:00 - 31:30 giving us a sense of the lay of the land, but I'll just give you a few examples, right? Using AI to determine whether colonic lesions from a colonoscopy are malignant or benign; in in vitro fertilization, using AI to determine the starting dose of follicle-stimulating hormone; chatbots, including mental health chatbots, or general chatbots giving mental health advice (lots of discussion this week in the news about that); the Stanford advance care planning algorithm, also known as the
31:30 - 32:00 death algorithm, which Stanford's hospital uses to predict the 3-to-12-month mortality of patients, a proxy for when to initiate a palliative care discussion; IDx-DR for diabetic retinopathy; and then ambient listening, which is becoming very common in many hospitals, one of, I think, the first uses to really expand very quickly. Next slide, please. So, I'm going to talk about liability. This is a multi-level problem, because we have physicians, we
32:00 - 32:30 have developers, we have hospital systems, and the interaction between these is actually not simple or smooth. Now, I want to emphasize at the start that in the US, and I suspect elsewhere, there have actually been shockingly few reported cases of AI-in-medicine liability that have made it to published decisions. The ones we do see are mostly about surgical robots; that's probably the most common one, the da Vinci robot in particular. And you might think, okay, well, not everything's reported. But when I talk
32:30 - 33:00 to malpractice insurers, for example, they also tell me that they see very few in their closed-claims databases. So even though I'm going to talk about liability, I want to emphasize that actually we have relatively little. Maybe that'll change as AI expands, but there we are. This colorful diagram is from a paper I did with Sara Gerke and Nicholson Price a few years back, and it's a simple attempt to run through a very stylized case and show how AI makes a difference for physicians' malpractice liability. Basically, we're
33:00 - 33:30 imagining somebody with ovarian cancer and a decision whether to administer the standard-of-care dose or a higher dose of a chemotherapeutic agent. The idea is, well, the AI could recommend the standard-of-care dose or a higher dose. The AI could be (I'm walking through the columns now) correct or incorrect. The physician could follow or reject the AI recommendation. And then there's a patient outcome; that's the fifth column. And then we get to the sixth column, which is the stoplight
33:30 - 34:00 column, where we see the results. And what you see is basic tort law, common not just to the United States but to most of our peer countries: the law treats these scenarios very differently. If there's no injury, there will be no liability. That's true whether it happens because the physician accepts a correct recommendation (scenarios one and five) or rejects an incorrect recommendation (scenarios four and eight). Second, tort law typically privileges the standard of care, regardless of effectiveness in a particular case,
34:00 - 34:30 whether providing that care leads to a good or a bad outcome, when the physician follows the standard of care. So under current law, you only get into liability in the red boxes, that is, when the physician does not follow the standard of care and an injury results.
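To make the stoplight logic concrete, here is a minimal sketch in Python of the stylized analysis just described (hypothetical variable names, and the scenario numbering here is just enumeration order; the authoritative version of the eight scenarios is in the Gerke, Cohen, and Price paper itself):

```python
from itertools import product

def outcome(ai_recommends_standard: bool,
            ai_is_correct: bool,
            physician_follows_ai: bool) -> tuple[bool, bool]:
    """Return (injury, physician_liable) for one stylized scenario."""
    # What the physician actually administers:
    gives_standard_dose = (ai_recommends_standard == physician_follows_ai)
    # Stylized outcome: the patient is injured exactly when the physician
    # ends up giving the wrong dose, i.e., by following an incorrect AI
    # recommendation or rejecting a correct one.
    injury = (physician_follows_ai != ai_is_correct)
    # Current tort law, per the talk: liability only when the physician
    # departs from the standard of care AND an injury results (red boxes).
    liable = (not gives_standard_dose) and injury
    return injury, liable

for n, combo in enumerate(product([True, False], repeat=3), start=1):
    injury, liable = outcome(*combo)
    print(f"scenario {n}: injury={injury}, liable={liable}")
```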
34:30 - 35:00 Now, that just follows the logic of tort law, but it has the following kind of implication, which is a little bit problematic: the safest way, from a liability perspective, to use medical AI is basically as a confirmatory tool to support existing decision-making processes, rather than as a way to improve care. That means that when an AI tells you to do the thing you were going to do anyway: oh my god, it is the most brilliant AI I've ever met. Brilliant. When it tells you to do the thing you weren't going to do: oh well, I want to be very careful and cautious here, because if an error results, I'm going to be liable. Now, you know, that is a way to practice medicine, but if the entire benefit of medical AI is
35:00 - 35:30 to identify precisely the cases where the standard of care is not what we should do, where a better outcome is available, where we can personalize, it's kind of a profoundly depressing result that tort law incentivizes you against that in exactly the cases where it's supposed to add value. Now, tort law is not forever fixed; over time, the standard of care changes. Think about something like an X-ray, for example. There was a period of time before X-rays when we did all sorts of things and there was no X-ray available. Today, if somebody has a break and you think it's a break and you
35:30 - 36:00 don't take the X-ray, the failure to use the technology may be below the standard of care. The same may become true with medical AI, but tort law is inherently conservative, medical malpractice law in particular, with its respect for the respected minority, and this is a slow process that can really only be supercharged, I think, by either government interjecting itself or some agglomeration of expert reports, physicians' groups, and royal societies in Canada and elsewhere recommending and setting the standard of
36:00 - 36:30 care. Now, in the chapter I did with Sara Gerke and Nicholson Price in the book that Barry mentioned, we go a little bit further. We show that causation is going to be a big challenge in the medical AI tort context. Demonstrating the cause of an injury is already hard in a medical context, where outcomes are frequently probabilistic rather than deterministic; adding in AI models that are often non-intuitive and sometimes inscrutable will likely make causation even more challenging to demonstrate. We also show
36:30 - 37:00 that, from a systemic perspective, individual healthcare professional liability, though complex, represents only a piece of the larger puzzle alongside designer liability and hospital liability, and I'll get there in a moment. I also want to emphasize the role of the FDA here: depending on how rigorous a pre-market review there has been, that will sometimes preempt state tort law in the United States, and be a reason in other countries to say that what looks tortious actually was not tortious.
37:00 - 37:30 There are also all these interesting questions about hospital systems co-developing some of this work with physicians, the purchasing ecosystem, and the malpractice insurance ecosystem, and how those change incentives. So, happy to chat about all of that during the Q&A, but if the slides are working, let's go on to the next slide. Let's try this. Drum roll, please. Dot dot dot. There we go. Okay, happy. So, I mentioned before that there are very few cases involving AI that have resulted in published decisions. Now I just want to tell
37:30 - 38:00 you what those cases, the very small number, actually look like. Michelle Mello and Neel Guha did a great paper in the New England Journal of Medicine where they basically captured all the cases, I think, since 2000. It's not a huge number; it's, I think, under 30. And essentially, most of the cases that make it to published decisions actually reject liability, for a number of reasons, and I'm just going to walk you through some of them. When it's lawsuits against developers, sometimes the
38:00 - 38:30 products liability lawsuits are dismissed because the courts hold that software is not a product but an intangible. Sometimes, where there has been some FDA review (and there's relatively shallow FDA review in the AI context, I want to make that clear), in the instances where there has been some FDA clearance, there's sometimes a holding that it preempted state tort law. A lot of the time, one of the biggest hurdles for a plaintiff is specifying what a reasonable alternative design would be. When you bring a design defect case, you
38:30 - 39:00 have to have that in mind. And to do that, you need to know a lot about how the algorithm works. If it's something like a deep neural net, there may be thousands of variables, and getting that information is costly; some of it may be protected by trade secrecy law, and it is very confusing to a factfinder if you get in front of one. So this makes these cases hard to bring. And then, finally, when it comes to developers, interestingly (and here we don't have systematic work, because these things are often secret), in the contracting between developer and purchaser there's often an apportionment of liability or of
39:00 - 39:30 indemnification that typically favors the developer and might be another obstacle to actual recovery. Could I see the next slide, please? There we go. Okay. So when it comes to lawsuits against physicians, they fail for the following reasons. Difficulty in showing that the acceptance of, or departure from, the AI recommendation was unreasonable: you have to show that the physician should have departed from or accepted the recommendation for a particular patient. That's hard. Instances where the courts say it was
39:30 - 40:00 not foreseeable that the model's output was inappropriate for a particular patient, and here the opacity of some of the algorithms might contribute to this difficulty. Demonstrating causation: I mentioned this already, but here there are a lot of counterfactuals. Had the physician followed or rejected the AI, the result would have been different for the patient; that's just a long chain of counterfactual causation you have to show. And more generally, it's just very difficult to get all the information you need to bring one of these lawsuits. In
40:00 - 40:30 the United States, malpractice lawyers typically operate on contingency; that is, they don't get paid upfront but get a share of what is paid on the back end. If you are a plaintiff's lawyer and you are offered a case where somebody shows you that a surgical instrument was left in the patient's stomach, versus a case where the claim is, here's this AI, it might be involved, and it's extremely complicated, the question is: what is your incentive to bring the lawsuit? The damages are the damages; the person suffered what they suffered. Why opt for the
40:30 - 41:00 incredibly difficult, trade-secrecy-protected, complicated, expensive expert case when you have this other case sitting over here that's much more straightforward? So I also think there's an interesting civil procedure aspect here, in terms of which suits are incentivized. Next slide, please. I'll just say, there's the book, by the way, that Barry mentioned; it's open access (we love giving away things for free), so a great free book. On lawsuits against hospitals, I'll say that there are, in fact, the fewest
41:00 - 41:30 number of cases in the small data set that we have. In some ways, that's a shame, because I actually think hospitals are the most natural place to locate liability in this space. But in terms of hospitals, there are a bunch of problems you might anticipate. Vicarious liability theories may run into the problem that a physician is not an employee. Direct liability theories, like negligent credentialing or a duty to evaluate products, are very immature tort theories in general when it comes to suing hospitals, and when you add on
41:30 - 42:00 the novelty of AI, it's even more difficult. Okay, so overall, there are lots of reasons why these cases are not successful, and if you think these things are being implemented too quickly and harming patients, you might worry about under-deterrence and under-compensation. My own view, as a longtime skeptic of medical malpractice in general, is that perhaps we might be better off establishing more socialized forms of reimbursement and compensation here: the idea of having manufacturers, developers, and hospital systems pay into
42:00 - 42:30 a system that works like workers' compensation or the 9/11 victims' compensation fund. But I don't think that's likely to be the future. Nicholson and I, in a paper in DePaul, have also suggested a complex way you could have liability shift back and forth between hospital systems and developers as an information-forcing penalty default. That's the kind of thing law professors love; we think it's kind of smart, but the chances of implementing it, we think, are very low, even if you think enterprise liability would be a good idea. Okay, next
42:30 - 43:00 slide. Now for something completely different. I think maybe I have five or six minutes left (the folks will tell me in the chat if I'm going too long), but I want to talk a little bit about informed consent for a moment or two. In the book with Barry, the chapter I did with Andrew Sloce, my former student, on informed consent begins with the following vignette. A patient is diagnosed with stage one non-small cell lung cancer. The patient's physician recommends surgery and adjuvant chemotherapy, explaining the benefits and
43:00 - 43:30 risks of each. The physician, however, does not explain that the standard treatment guidelines for the patient would counsel against chemotherapy, and that the more aggressive treatment has been recommended by an AI system based on imaging data from the patient. Only after treatment is done does the patient learn that AI was involved in the care decision. The patient is distressed: as he sees it, he underwent a potentially unnecessary treatment because his physician outsourced decision-making to a machine without letting him know. So this is
43:30 - 44:00 going to be a not uncommon experience, perhaps, in the future. And, you know, on the slide we also have that cute little seal: Paro, the therapeutic robot, a cuddly little toy who is given to patients with dementia, and they pet him and they love him and they coo at him. But actually, he's also a little spy, because he's collecting all sorts of information and feeding it back to a medical analysis system or the physician. Is that a problem? And if you would play this clip for maybe 20
44:00 - 44:30 seconds, if you don't mind. Let's see if it's going to play or not; if not, we can skip it. Can we hear it? I think maybe the sound might not be on. Okay, don't bother, it's totally fine; I'll just tell you what it was. It's essentially a clip of somebody, it sounds like, calling a Chinese restaurant to make a reservation. The clip is eight years old; it's Google Duplex. And essentially, even though it sounds like a human being and is responding to all sorts of questions and prompts about how many people and what's available, it's actually an AI system doing this. And I
44:30 - 45:00 offer this to people as just a very simple illustration to show that we are rapidly entering a future where not only is AI involved in our care, but actually, don't trust your ears and don't trust your eyes, because the entities you are interacting with may be artificial intelligence; they may be chatbots; they may be deepfakes; and we have no idea. So we really do need what I would call a right to know when AI is involved, as a legal matter. My own view (and I wrote a long article in the Georgetown Law Journal just a few years ago) is that the current law
45:00 - 45:30 of informed consent and breach of informed consent is unlikely to capture cases where AI is involved as ones that bring on tort liability. So we're not going to get there through tort liability. I also think, as an ethical matter, there are some complicated questions here about whether we are exceptionalizing AI in terms of disclosure or otherwise. And let me maybe end my talk by giving you this sort of spectrum of cases and asking you where AI falls. On the one hand, we know
45:30 - 46:00 we have the case law of the substitute surgeon: if somebody else scrubs into your surgery and your physician fails to tell you, that's a breach of both the ethics and the law of informed consent, and if injury results, you can sue. On the flip side, when you last saw your doctor and had a bad cold and thought you should get antibiotics, and the doctor made a decision whether to give you the antibiotics or not, or a doctor made a decision about watchful waiting for a potential cancer versus a surgical option, in truth, the doctor's
46:00 - 46:30 decision probably depended on all sorts of things, including memories of six patients you reminded him of, conversations with five colleagues, vague memories of six back issues of the New England Journal of Medicine or JAMA, medical school lectures they kind of remember, what they had for breakfast, and how annoying a patient you are, right? Typically, the physician doesn't have to disclose all of the inputs to that black box in giving informed consent. How should we think of the involvement of AI in this setting?
46:30 - 47:00 Is it more like the substitute surgeon or more like the laundry-list case? And does it matter if the FDA has reviewed the AI? Does it matter if a hospital system has acted as a gatekeeper before the AI has come in? Does it matter whether the AI is interpretable? Does it matter whether the physicians themselves understand how the AI works? And does it matter how much they have reason to think the patient cares about this? So, I'll leave it there; happy to chat more, but thank you so much for having me.
47:00 - 47:30 Thank you, that was great, especially walking us through the cases. I do have some questions, but I'll save those for the Q&A so we can first hear from the other speakers. So our next speaker will be Vera, and I'll stop sharing my screen so she can start sharing hers. Yes, thank you so much. Let me see if I can do it; I'm not very good with technology. One wonders. So, well, first of all,
47:30 - 48:00 thank you so much for having me. I'm here in sunny Lisbon today. Basically, I'm going to present the chapter I prepared for Glenn and Barry's book. In that chapter, I dealt with what I believe to be, if not an impossible mission, at least an almost impossible one. And by the way, I'm not even a fan of Tom Cruise, but the title is very
48:00 - 48:30 appealing when you deal with the use of AI in healthcare. For this particular presentation, I will focus on facial recognition technology. Some of the issues are particular to this specific type of AI; others are much more general. Here are some of my references on these issues. Obviously, they are focused on the EU; that's the legal framework where I operate. Starting with the topic, I would say that when most people
48:30 - 49:00 think about facial recognition, they tend to think about law enforcement, not so much about healthcare delivery. But the fact is that it's not only theoretically possible; it is, in real life, being used in healthcare delivery, for two main purposes. One of them is connected to what we usually think of facial recognition as being, that is, a security mechanism, and in this regard it can operate in two ways. One of them is authentication, or
49:00 - 49:30 verification; basically, it's what you do when you use your face to unlock your smartphone. You are telling the system, telling your device: look, this is me, this is Vera. You are confirming that you are the person you claim to be. This is what we call one-to-one identification. But then we have other cases, which is one-to-many identification, also called recognition, in which someone else, not you, is trying to confirm that you are the
49:30 - 50:00 person they believe you are. That's what happens when, for instance, you have facial recognition cameras in an airport, or in a government facility, or even in the streets. Now, both of these can operate in healthcare.
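To make the one-to-one versus one-to-many distinction concrete, here is a minimal Python sketch. The embedding vectors, the similarity threshold, and the enrolled gallery are all hypothetical; real systems derive embeddings from trained face-recognition models with carefully calibrated thresholds.

```python
import math

THRESHOLD = 0.8  # assumed decision threshold, not a real calibrated value

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(probe, claimed_template):
    """One-to-one (verification): does this face match the identity the
    person claims, e.g. unlocking a phone or checking in at a hospital?"""
    return cosine_similarity(probe, claimed_template) >= THRESHOLD

def identify(probe, gallery):
    """One-to-many (identification/recognition): someone else asks who,
    among everyone enrolled, this face belongs to."""
    best_id, best_score = None, THRESHOLD
    for person_id, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None means no enrolled person matched

# Toy usage with made-up two-dimensional "embeddings":
gallery = {"patient-001": [0.9, 0.1], "patient-002": [0.2, 0.95]}
probe = [0.88, 0.15]
print(verify(probe, gallery["patient-001"]))  # True: the claim checks out
print(identify(probe, gallery))               # "patient-001"
```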
50:00 - 50:30 Just imagine: nowadays, when you go to check in at a hospital, at least a more traditional one, you meet a human and identify yourself, eventually with your identity card. Some more modern hospitals have machines, so you don't need a human: you just insert your identity card and you are immediately identified. But what if that operated using facial recognition? You could stand in a specific place in the hospital, in a specific corner, and be immediately identified. This can be very useful, in particular for patients who struggle with communication because they do not master the language, elderly patients, young children, or patients who reach
50:30 - 51:00 the hospital in an unconscious state. Another possibility is to use this technology to give doctors access to the patient's health record. As you know, not every member of the medical team can access the medical record of a given patient, only the ones involved in the care provided to that particular patient. What we use nowadays are basically traditional credentials, emails and passwords. But this could be done, and I would dare say with a higher level of
51:00 - 51:30 safety, by identifying ourselves with our face, just as we do with our smartphones; or also to give patients access to their own medical information, so that in the comfort of our homes we could access our health records or exams on our devices, again identifying ourselves with our face. I find this technology particularly interesting for avoiding mistaken identities. You see, we still have cases of, say, the wrong patient being operated on;
51:30 - 52:00 we misidentify patients, especially, again, if they do not speak our language or are already under anesthesia. So this could be used, eventually in parallel with other identification mechanisms, to be sure that Mr. X is indeed Mr. X. It could also be useful to control who goes into particular departments or sections of the hospital, because we don't want patients, patients' relatives, or even non-authorized health staff going into
52:00 - 52:30 restricted areas; and also to control who goes out, so not only who goes in but also who goes out. We have cases of patients, especially elderly patients, eventually with dementia, leaving hospitals without anyone with them, and often the outcome is not a very happy one. And we have the infamous cases of kidnapped babies; again, facial recognition could help us prevent such scenarios. So this is, let's say,
52:30 - 53:00 the more traditional use of facial recognition technology, as a security mechanism. But I'm especially interested in its use as a health tool, that is, as a medical device. I know this looks a little bit far-fetched, but indeed it can be used as a medical device, because one of the many uses of facial recognition is facial characterization, also known as emotion reading. Now, you might remember, many years ago, when President Bill Clinton said the famous words: I did not
53:00 - 53:30 have sex with that woman, I repeat. And then we found out that, well, that was not really the case; well, depending on the concept of sex, of course, but it was kind of blurry. And when we found out he was misleading us, you had all kinds of experts coming on to comment: you see what he's doing with his mouth, and you see the eyes, and you see the face. Basically, what they were trying to do is read micro-expressions. That can be done by analyzing our facial
53:30 - 54:00 features, and some humans can do it; you need a trained eye, of course. But facial recognition technology can also do it. Now, how can this be used in healthcare? For instance, to identify the existence and the intensity of pain. Of course, you could ask the patient: does it hurt? Are you in pain? But remember, some patients cannot communicate that well. You have the case of young children, including newborns: how can they communicate that they are in
54:00 - 54:30 pain? And this is not only physical pain but also emotional pain, meaning that this technology could be very useful to identify very early signs of medical burnout and the suicides that too frequently follow. It could also be used to check whether patients are taking their medication. Of course, this cannot be used for every single disease or every single medication, but in some cases the lack of medication can show in your face, again through an analysis of your micro-expressions. And
54:30 - 55:00 also as a follow-up mechanism for patients who have been discharged and are now at home: you want to see how the disease is evolving by simply doing a smartphone scan and an analysis of micro-expressions. This could be a huge assistance for doctors. Another possibility is to use facial recognition not only to diagnose but also to predict medical conditions. It has been used to diagnose some genetic conditions, because for some of
55:00 - 55:30 them, like some forms of autism or certain rare genetic syndromes, the existence of these diseases can again be identified by features in your face, but they are very mild signs. So it's very difficult for a human doctor to identify them, also because they tend to be very rare forms of those diseases; it's not as if doctors see these patients every day. Again, facial recognition can diagnose these conditions at very early ages, so treatment can start immediately.
55:30 - 56:00 As a prediction mechanism, well, remember that your face says a lot about you and your future health. For instance, if you have spots on your skin: careful with the sun. A large, red nose: maybe too much whiskey. Wrinkles around your mouth: maybe you are a heavy smoker. Now, this is very obvious, and you can track it with your bare eyes. But at the very beginning, these changes in your face tend to be
56:00 - 56:30 very subtle, so you need facial recognition as an indicator that something not so good might happen in your future, and that you need some change in your behavior, or even some treatment. Now, this is all very interesting, but there is obviously the compliance issue, especially here in the EU, where we are totally obsessed with regulating everything even slightly related to digital issues. There are many possible regulations that could apply to facial recognition technology
56:30 - 57:00 when used in the ways I just described. For the sake of time, I will focus on three of them. One of them, obviously, is the AI Act. The AI Act is based on the so-called risk-based approach, meaning that AI systems are categorized into different levels of risk: at the top, the highest level, usually forbidden with some exceptions; at the bottom, the lowest level; and then you still have general-purpose AI. I will spare you the entire analysis, but what I can tell you already is that the uses of facial recognition
57:00 - 57:30 that I described in this presentation will be considered high-risk AI systems. What changes is the legal ground: if it is a medical device, used, let's say, to diagnose diseases, the classification rests on one legal ground, whereas when the technology is used as a security mechanism it rests on another. But the outcome is the same: a high-risk AI system.
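As a rough picture of the risk-based approach just described, here is a small sketch. The tier names paraphrase the Act's general scheme, and the mapping of specific uses is an illustration of the talk's point, not a legal classification.

```python
# Simplified sketch of the AI Act's risk-based approach; not legal advice.
RISK_TIERS = (
    "unacceptable risk (prohibited, narrow exceptions)",
    "high risk (conformity assessment + CE marking)",
    "limited risk (transparency obligations)",
    "minimal risk",
)

# Per the talk, all the facial-recognition uses described end up high-risk,
# but they get there via different legal grounds:
USE_CASES = {
    "patient check-in / access control (security mechanism)":
        ("high risk", "classified via the biometrics route"),
    "diagnosis, pain detection, follow-up (medical device)":
        ("high risk", "classified via the regulated-product route"),
}

for use, (tier, route) in USE_CASES.items():
    print(f"{use} -> {tier}; {route}")
```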
57:30 - 58:00 And what that means is that these AI systems, facial recognition technology when used for those purposes, have to get the CE marking of conformity, which some of you might know because it's very common on EU products. For that to happen, the product must be subject to the so-called conformity assessment, and if the assessment is positive, it gets the CE marking of conformity and can reach the market. These are the requirements imposed by the AI Act to get the CE marking of conformity: it goes from human oversight, to some of the
58:00 - 58:30 things Glenn was talking about related to the data sets used to train AI systems, to risk assessment, cybersecurity, you name it. Now, we still don't know very well how stringent this assessment is going to be, because this part of the AI Act is not yet in force and the Commission has not released guidelines, unlike what has happened with other parts of the AI Act. But I dare say it's going to be a very stringent assessment. So a conformity assessment is a must
58:30 - 59:00 under the AI Act, but it's not the only one that facial recognition technology must undergo, because when we are talking about facial recognition as a medical device, for diagnosis or to identify pain, for instance, it also falls under the scope of the Medical Devices Regulation. The Medical Devices Regulation, in Article 2, defines medical devices; you can see that software is very clearly included there, and then, in an extended part of Article 2, you can
59:00 - 59:30 see the aims that have to be in place for software to be considered a medical device. Check those against the kinds of uses of facial recognition that I described: they follow Article 2, and that means that when facial recognition is a medical device, again you have a conformity assessment, because medical devices in the EU need the CE marking of conformity. So you have another conformity assessment, one that is
59:30 - 60:00 different from the one imposed by the AI Act, because the requirements to be assessed are different. We call them the general safety and performance requirements, and they are very different, because these two regulations have different mindsets. The assessment in both cases is usually performed by notified bodies. All around the EU you have these notified bodies; you have some examples here. The funny thing is that we don't know very well what notified bodies are. They
60:00 - 60:30 are usually private companies that the AI developer, or the manufacturer, whatever you want to call it, pays in order to be assessed. We are very trustful people here in the EU, so we don't mind that people pay to be assessed; we consider that okay. But some people call it the private compliance industry, so you see, it's not fully consensual. And the question here is: how many assessments will you have? Because in my scenario, you will have one imposed by the Medical Devices Regulation, another by the AI Act, and
60:30 - 61:00 by all means you can have other assessments, depending on the number of regulations that apply. These conformity assessments, required to get the CE marking of conformity, are very, very common. Now, my take, and obviously the most desirable scenario, is that you have only one conformity assessment, performed by one single notified body that gathers all the possible assessments with all the possible requirements. This would
61:00 - 61:30 be the easiest way. In theory, I think this is the idea underlying the AI Act. But in real life this might not be possible, because each notified body needs a specific accreditation in order to perform the assessment in light of a specific regulation. So, for instance, you might find a notified body that is able to do the assessment under the Medical Devices Regulation but not under the AI Act, or under all the other possibly applicable regulations. And if that is
61:30 - 62:00 the case, you will have various conformity assessments, either in parallel or one after another, eventually with different outcomes. And if that happens, this will be so complicated. Can you imagine? It will take loads of time, it will be very expensive for companies, too complex, too bureaucratic, I don't know.
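The one-assessment-versus-many problem can be stated as a simple coverage check: is there a single notified body whose accreditations span every regulation that applies? A toy sketch, with entirely hypothetical body names and accreditation sets:

```python
# Toy coverage check; body names and accreditation sets are hypothetical.
applicable = {"AI Act", "Medical Devices Regulation"}

accreditations = {
    "Notified Body A": {"Medical Devices Regulation"},
    "Notified Body B": {"AI Act"},
}

one_stop = [body for body, regs in accreditations.items()
            if applicable <= regs]  # subset test: covers everything?

if one_stop:
    print("Single conformity assessment possible via:", one_stop[0])
else:
    print("No single body is accredited for everything: expect parallel or "
          "sequential assessments, possibly with different outcomes.")
```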
62:00 - 62:30 And finally, as a kind of cherry on top of the cake, our beloved GDPR, the most stringent data protection law in the entire world. There are many things to say about the GDPR and facial recognition technology; for the sake of time, I will focus on the legal ground. You see, facial recognition deals with a very specific type of personal data, biometric data, our facial features, and biometric data are what we call sensitive data: they require specific safeguards, an extra layer of protection. This is the face I make whenever I talk about sensitive data; whenever I talk with my clients or my students, I make this sort of face. Now, what happens here
62:30 - 63:00 is that the basic requirement of the GDPR is that every time you process personal data, you need a legal ground. This means the situation you have in your hands must fit one of the scenarios described in the law, in our case Article 6, which provides the legal grounds for the processing of personal data in general. The caveat is that when it comes to sensitive data, as is the case with biometric data, you need an additional legal ground: besides the
63:00 - 63:30 one from Article 6, you need another one from Article 9. The problem is that the two lists of legal grounds do not match; the list in Article 6 is not the same as the one in Article 9. So sometimes it feels like doing a puzzle in which the pieces do not fit together. It's like a game, and it's tremendously difficult, especially when it comes to biometric data. Now, some of my colleagues would solve the issue by saying: just rely on consent. It's a legal ground, and you can find it in both Article 6 and Article 9. Very seductive, very easy.
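The puzzle can be pictured as a double lookup: processing biometric data needs a ground under Article 6 and an additional one under Article 9, and the two lists are not parallel. The ground names below are abbreviated paraphrases, and the Article 9 list is trimmed for space; only the dual-ground structure is the point.

```python
# Abbreviated paraphrases of the GDPR's legal grounds; see Articles 6 and 9
# for the authoritative (and longer) lists.
ARTICLE_6 = {"consent", "contract", "legal obligation",
             "vital interests", "public task", "legitimate interests"}

ARTICLE_9 = {"explicit consent", "vital interests", "healthcare provision",
             "public health", "substantial public interest"}  # trimmed

def lawful(basis_6, basis_9=None, sensitive=False):
    """Ordinary data needs an Article 6 ground; sensitive data needs an
    Article 9 ground on top of it, and the two lists do not mirror each
    other -- the puzzle whose pieces do not fit."""
    if basis_6 not in ARTICLE_6:
        return False
    return (basis_9 in ARTICLE_9) if sensitive else True

# Biometric check-in: "legitimate interests" exists in Article 6 but has no
# counterpart in Article 9, so a second, different ground must be found.
print(lawful("legitimate interests", sensitive=True))                # False
print(lawful("legitimate interests", "healthcare provision", True))  # True
```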
63:30 - 64:00 Well, I always reply: guys, stay away from consent. It is the trickiest legal ground, for so many reasons; I will spare you the speech. We should only use consent when all the other legal grounds cannot operate. So, moral of the story: it's extremely difficult to find a legal ground in the GDPR to use biometric technology, any kind of biometric technology, including facial recognition. Now we're coming to the end. How will this all work together? And by the way, this is a picture of my
64:00 - 64:30 faculty; you are all very welcome. How will this work together? Quite frankly, I don't know, but I think it's an almost impossible mission, just like me doing this Tom Cruise thing. Almost impossible. Thank you so much. Thank you, that was very interesting. I'm especially impressed by how you walked us through the legal bases, at least gave us that overview, and I definitely have some questions on that that we'll address in the Q&A. So, let's turn to
64:30 - 65:00 Cynthia and a totally different topic: the intellectual property issues that come up with regard to AI and health law. So Cynthia, I think you're still muted, though.
65:00 - 65:30 Yeah, we can't hear you yet. Okay. Thank you. Let me try to share the
65:30 - 66:00 screen. It's telling me I need to restart Chrome; I don't know why.
66:00 - 66:30 Sorry, it kicked me out for a second. Can you all see my slides now? Yes, for a second; now they're gone again. They were up for a second. Okay. All right. Sorry about that. I
66:30 - 67:00 will go ahead and get started. So, thanks for having me. I'm going to try to give a little bit of an overview of IP for those who are less familiar, talk about some examples of how IP works for AI and healthcare, and then, I think the slides aren't sharing now; they were sharing, but at least I can't see them. They were up for a second. Okay, now they're up. That's good. Now it's working. Great. Thank you so
67:00 - 67:30 much for letting me know that it wasn't working. Okay. So the plan is to start with an overview, then some examples, and then end with policy issues. To start on IP: as probably everybody knows, it is a kind of legal protection, but I want to start with the what and why. As you may know, IP protects mostly intangible creations,
67:30 - 68:00 primarily of human ingenuity, although it's a major question right now: if AI creates some or all of a thing, should that be protected? In terms of why: if you're talking to people who own AI or any other kind of creation, they want IP mostly for financial reasons, which makes sense; practically, it enhances their economic benefits. But it's not just
68:00 - 68:30 about IP owners. IP, in most situations, is something that society decides to give as a policy matter. Primarily, it is assumed that IP is given to promote innovation intended to benefit society. I mention that because it's really important in terms of what we decide to protect or not, since, inherently, if you give protection to someone like the creator, that means you
68:30 - 69:00 may be taking something away from society, or at least increasing barriers to entry. We all know that patented drugs, for example, cost more than generic drugs; that's because of IP. So there's a constant push and pull among scholars about how much protection we want to give, because it can impose a cost not just on users but even on subsequent innovators who might want to
69:00 - 69:30 build upon prior work. One important issue, especially in the context of our interconnected global world, is that today most countries, the members of the World Trade Organization, which is over 160 countries, have to provide IP under the so-called TRIPS Agreement. The graph I have in the middle shows all the countries that are members and must comply with this agreement; the
69:30 - 70:00 different colors just have to do with when they entered into the agreement. Now, what I think is kind of interesting, but not always obvious to everyone, is that although there's an international agreement, there are no uniform standards. That's what the image with the hurdle is for: countries have to provide the minimum, but there are no uniform rules, and in fact other
70:00 - 70:30 international attempts to create uniform standards have not succeeded, which is not entirely surprising, because countries may have different preferences. So what I'm going to focus on are the requirements that exist under TRIPS, and also what some of the differences are within the existing minimums, which some people call TRIPS flexibilities. Now, I thought
70:30 - 71:00 I'd start by highlighting some of the most common types of IP that would be relevant for AI. One is something called utility patents; there's a separate thing called design patents, but most people, when they think about patents, really mean utility patents. There are also trade secrets, which technically TRIPS doesn't require under that name; it could be called something else, but it's the same definition. And copyright. The images I have on the screen
71:00 - 71:30 represent things that might match each of these. The gears are for things that are functional, covered by patents. KFC and Coke represent things we probably think of every day as trade secrets, the formula for Coke or the recipe for Kentucky Fried Chicken; these companies have protected them for decades, much longer than you would get through a patent, because people can't figure out how to make them. Whereas copyright protects a whole variety of
71:30 - 72:00 different expressions. So, now that I've given you that very short overview, I'm going to go into each type. Before I do, though, I just want to mention that there is also the possibility of creating sui generis laws, and that might actually be an option we want to explore for AI. Some jurisdictions, like the EU, already have some sui generis laws; for example, they have one for database protection,
72:00 - 72:30 which might sound like it's for AI but was actually created before the advent of AI, and some have suggested it's not entirely helpful right now, not just for AI but in general. But let me go through the basic types. Patents, for functional inventions: this is the only kind of IP for which government approval is required, and there are actually a lot of requirements, requirements for
72:30 - 73:00 both the invention and the application. The first issue, which could potentially be an issue for AI, is that the invention has to be permissible; I wrote "relevant subject matter" on the slide. Although all countries have to protect inventions under TRIPS, it's not defined what an invention is. Now, a number of jurisdictions where AI is protected, including the US and
73:00 - 73:30 Europe, have limitations that would seem to apply to AI: software as well as algorithms face challenges in these jurisdictions. They don't in Qatar and China, but currently most people try to seek protection for AI in the US and at the European Patent Office, and there it is a challenge. It's not entirely outlawed, but it gets additional
73:30 - 74:00 scrutiny. Even if you get over that hurdle, there are additional requirements: the invention has to be useful, new, and nonobvious. And even if you have all of those things, the application itself has to adequately disclose the invention. This is thought of as a societal bargain, in that it's not fair to give someone the exclusivity of patent rights unless you're sharing the
74:00 - 74:30 invention. In some jurisdictions, including the US, special disclosure is required; in the US, applications involving machine learning have to disclose some of how that learning happens. Now, assuming you have a patent granted, you might be wondering what the scope is. Patents have the most exclusive scope of all IP, unless of course they're
74:30 - 75:00 invalidated, but they have the shortest term: maximum protection, shortest term. In terms of maximum protection, you can exclude everyone from making, using, selling, offering to sell, or importing your invention, including someone who independently invents or reverse-engineers it. And although there are some exceptions, they are very, very limited. For example, most jurisdictions have an exception letting the government get around these exclusivities, called compulsory
75:00 - 75:30 licenses, but as we saw during COVID, most countries do not use them. There were countries, like India, whose populations didn't have enough vaccines, whose courts were saying you should use a compulsory license, and the government didn't do it, mostly out of fear that it would be problematic in harming innovation. There are also trade secrets, for things that are valuable when kept secret and where the owner takes
75:30 - 76:00 reasonable steps to protect them. No government approval is required; all you have to do is keep it secret. Some things that could be trade secrets could also be patented, but a much broader range of things can be secret, because you don't have to have an invention; it just has to be valuable. The scope of this is very limited, unlike patents: it only protects against somebody misappropriating it, someone stealing the invention. It
76:00 - 76:30 does not protect against someone independently creating the same thing. If you are able to keep it secret, it's potentially of infinite duration, and it's very cheap to get. But it's very, very fragile, in that if the secret gets out, you lose all protection. So you could sue someone and get money, but you can't continue to benefit from it against everyone else. The other
76:30 - 77:00 major kind of IP for AI is copyright, for expression. The requirements are a lot easier to meet than for patents: it just has to be a work that is minimally creative, which is a lower bar than new and nonobvious. There are also some exclusions, for facts, ideas, and methods. Similar to how a patent can't protect something that's not new, copyright works the same way: we want people to build upon things that everyone knows about. So a fact can't
77:00 - 77:30 be copyrighted, just as a naturally occurring thing can't be patented. But the same idea in different forms could be copyrighted: we could have the same idea of how an app works, and if the code implementing it was written differently, different people could each have copyright in their own version. Now, the scope of copyright is
77:30 - 78:00 exclusive: you can prevent other people from copying it or building derivative works. But there are exceptions built into a lot of laws for things that are socially productive, such as archival use or teaching, and some countries have broad-based exceptions; you may have heard of something called fair use, which is broad but also uncertain as to what it covers. Copyright is relatively easy and low-cost to
78:00 - 78:30 obtain, which is a benefit in the context of AI. There is potential liability in terms of training AI on databases, because you might be using someone else's copyrighted material without permission. Now, in terms of some healthcare examples, here are a variety of different things, some of which other speakers have talked about: AI diagnostic tools, AI to help with drug discovery, personalized medicine, wearables, monitoring
78:30 - 79:00 devices, and even robotic surgery. What a lot of different kinds of AI have in common is that they often involve machine learning and database development. In those contexts, you could potentially have copyright in the database, to the extent that how you select and arrange it involves some creative expression. It's not the individual pieces of data that you're
79:00 - 79:30 protecting, but how you compile them. In a non-AI context, there's an old case holding that a typical phone book, listing people in alphabetical order, is not copyrightable. But if you did something unusual, for example, if you listed people by the type of pet they have, or color-coded them by their favorite color, that could potentially be copyrightable. Now, for different kinds of AI: for example,
79:30 - 80:00 if you had a breast cancer diagnostic tool or a mobile app, you could have a variety of different IP protections for the product, for the system, and for the various processes used to train and create the algorithm. I think the IP you're most likely going to use would be patent and trade secret. I mentioned earlier that algorithms are
80:00 - 80:30 challenging to patent in most jurisdictions, but the method of using the algorithm could maybe be patented, and even if not, you could potentially use a trade secret. The product could include some aspects that are patentable, if done in a way that would be new enough: if you use a standard camera to do the imaging, that's not patentable, but if you put a bunch of things together in
80:30 - 81:00 conjunction, that might be patentable. Similarly, the software code might be protectable, although if someone does it a different way with the same outcome, that wouldn't really give you much benefit. Now, I think there are some major policy issues we need to think about: one is what IP the inventor or owner wants, and the other is what society might want IP to consider. For the owner, a
81:00 - 81:30 big issue would likely be whether or not to get a patent, and a major question might be: would it be easy to reverse-engineer? If so, then patents would be the better way to go, because a patent protects against that. Also, if you need to share your invention to make something, then a patent would be easier than a trade secret, where you would have to do a lot of planning to share it safely and not give
81:30 - 82:00 your trade secret away. Also, if you want to raise revenue, a patent might be more helpful to convince investors. So there are different things to think about in terms of patent versus trade secret.
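As a toy summary of the owner-side factors just listed, here is a hypothetical decision helper. The factors and the rule are illustrative simplifications of the talk, not legal advice.

```python
from dataclasses import dataclass

@dataclass
class Invention:
    easy_to_reverse_engineer: bool     # patents protect against this; secrets do not
    must_share_to_commercialize: bool  # safely sharing a secret takes heavy planning
    needs_investor_signal: bool        # a patent can help convince investors

def suggest(inv: Invention) -> str:
    """Toy heuristic mirroring the patent-vs-trade-secret factors above."""
    if (inv.easy_to_reverse_engineer or inv.must_share_to_commercialize
            or inv.needs_investor_signal):
        return "lean patent (if the subject matter is patentable at all)"
    return "lean trade secret (cheap, potentially unlimited, but fragile)"

# A hard-to-reverse-engineer algorithm kept in-house:
print(suggest(Invention(False, False, False)))  # lean trade secret
# A device whose workings are visible on inspection:
print(suggest(Invention(True, False, False)))   # lean patent
```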
82:00 - 82:30 Most of the benefits we already went over, so for time purposes I'll just go on to societal issues. A big one, I think, is non-IP issues: safety and fairness. There's a lot of discussion and concern that we need feedback loops to make sure AI is safe and fair; we want to make sure it's not training on things that will impose more discrimination, and we really want to make sure it's fair. We know from other contexts, like drug development, that if there's not adequate disclosure, things can be really unfair and harmful:
82:30 - 83:00 people actually died from drugs like Paxil and Vioxx, because doctors didn't know the full scope of data that regulators did, which caused some laws to change. So there are things we might want to think about changing for AI: even though inventors can usually decide whether to have trade secrets or patents, maybe we don't want them to have that decision, or maybe we want more disclosure. Also, even if something is patented, it could be hard to make the disclosure: some
83:00 - 83:30 AI, especially if the algorithms continue to evolve, might be challenging to disclose; we might need to modify things, perhaps require some kind of deposit. Also, on existing copyright liability law, there are a bunch of open cases, but maybe we need more laws, either for copyright or for database protection. I know that was a very quick overview. There's a chapter which my terrific colleague Charlotte Schneider and I worked on, and we both have our
83:30 - 84:00 work on SSRN. I have a forthcoming article about a lot of health dangers from trade secrets that are undisclosed in general, which include but are not limited to AI. But thanks for your attention. Thank you, Cynthia. I'm sorry to rush you. If anyone in the audience has any questions, please use the Q&A feature. But I
84:00 - 84:30 definitely have some questions for the speakers. So, Glenn, maybe a question for you. You were talking about how it's already so difficult to prove liability in general, and then AI complicates it even more; it's difficult to prove anything because of the specific characteristics of AI: autonomy, the black-box effect, and so on and so forth. So do you think that maybe, in the case of AI, we need a strict liability regime? Would that be a good
84:30 - 85:00 option? What are your thoughts on that? Yeah, I'm not such a huge fan of strict liability. I do like the idea of enterprise liability, though, at the hospital level. My view would be that, in an ideal world, there'd be some sharing of liability between developers and hospital systems. Hospital systems would take the lion's share but could avoid liability if they could show that the developers actually withheld information, or that they were unable to alter the model weights, or something like
85:00 - 85:30 that; so some kind of forced information sharing. But it's possible that strict liability, any liability, is not the way to go, and that it would be even better to have a workers'-compensation-style system where, if you're injured, no fault is involved: you get a regimented amount of money and don't have to prove anything. That's probably the first best, in my view. Okay, that's actually an interesting suggestion, thank you. Vera, I had a question for you. This is all very interesting, the whole
85:30 - 86:00 facial recognition question and what you can use it for, the good and the bad. But one of the problems I see, especially when you come into the hospital: you said maybe you can use facial recognition to recognize the patient or pull up the medical records, but if the person is unconscious, for example, they obviously can't consent to this technology being used. So what are your thoughts? Do we need something like, for people who
86:00 - 86:30 have a living will, something similar for AI, so that before you go into the hospital you agree they might use this facial recognition on you? Well, thank you so much for your question. That comes back to the issue of the legal ground for using facial recognition, or for data processing, which is not an issue addressed by the AI Act; this is purely a GDPR issue. Most people always focus on consent for everything in our lives. No,
86:30 - 87:00 consent is not the only legal ground, nor, quite frankly, the better or the best-suited one for healthcare, even though it is usually the one used for healthcare delivery. The GDPR does provide other legal grounds that could eventually be used in this case. For instance, it does say that you can process personal data for matters related to the vital interests of the patient, to save the
87:00 - 87:30 life of the patient. It sounds very dramatic, but it is a little bit like that: of course, if you are going to perform a medical act on the wrong patient, that can be life-threatening, that can totally jeopardize life or well-being. Another possible legal ground, particularly in light of Article 6, could be legitimate interests: obviously, all healthcare facilities have a legitimate interest in providing good medical care and in avoiding liability litigation, and that could be the way. But the deeper question
87:30 - 88:00 is how suitable the GDPR is to deal with all these new technologies, not only facial recognition but quantum computing, the metaverse, and all that. So yes, we do have a bit of an issue in the EU. Thank you. Yeah, I don't know that we can resolve that question today, unfortunately. But we do have a question from the audience, I guess for the whole panel: given AI knows no boundaries, how do you suggest laws globally integrate? Would
88:00 - 88:30 anyone like to comment on that question? I'll just say, and I see Cynthia's going to jump in, but I'll just say there's a complicated question about jurisdictionality, and questions about processors and things like that, such that actually sorting this out is not simple. So even if you don't want to reach beyond your own four corners, that's a problem. I don't disagree with Glenn. I
88:30 - 89:00 would just say that for other issues we have tried to have international agreements. Getting consensus would be hard, and I think it won't solve everything, but there is at least some precedent for doing that. Okay. Can I just add something? I think the expectation of the EU lawmakers is that we will once again have a Brussels effect, like we had with the GDPR, and even then it was not universal, because many states in
89:00 - 89:30 the United States have not adopted a GDPR-like version of data protection law. I think that with the AI Act this will not happen, though I might be wrong. Otherwise, when preparing any kind of agreement, it always has to be very general, because otherwise it's extremely difficult to get so many different countries with different legal traditions and different values to agree on something. Just compare, for instance, the perspectives of the United States and Europe: here we are more pro fundamental
89:30 - 90:00 rights, whereas you are more pro entrepreneurship and innovation. How will that work? Yeah, I don't think it will. And just to build on what Vera said about the Brussels effect, because it's true: the GDPR, even here in the Middle East, was basically followed. I'm not a big fan of the AI Act; for healthcare at least, I don't know what it's going to do to help the patients we should be thinking about. So I'd be
90:00 - 90:30 disappointed if that model is followed, personally. Anyway, thank you. Well, unfortunately, we're already out of time, but I would like to thank the speakers, and I would just ask for maybe one final word from each of you to wrap it all up. Glenn, do you have any final words for us? So, since this is about research, all I'd say is that this has, I think, been
90:30 - 91:00 a tremendously helpful overview, and actually a detailed look at some of the legal issues by some of our colleagues. I think if you're going to research this area, it's helpful to look at where in the chain of research you want to get involved and what piece of the puzzle you want to impact, and to think about how that fits into the broader framework. But that's all; thank you all for attending and listening. I'm sure the others have better words of wisdom than I do.
91:00 - 91:30 Thanks, Glenn. Just thank you for having me. Vera? Well, just thank you so much. I think it's a wonderful time to be alive; so many things are happening. I just wish I was younger. And Cynthia? Yeah, thank you for having me. From a research angle, there is definitely no shortage of things to keep up with and investigate. So thanks for having us.
91:30 - 92:00 Yeah, thank you all. And yes, with regard to research, it sure is very complicated, very complex. Thank you for pointing out some of the resources that you need to know about and research. Every area is different, and it's not just one legal framework that we need to look at. So this was definitely a good introduction to the topic, and you gave us good ideas on
92:00 - 92:30 how to start. Thank you all so much, and thanks for attending. Thank you. Thank you. Bye-bye. Take care.