2025 03 27 AI Roundtable Panel 03

Estimated read time: 1:20

    Summary

    The 2025 AI Roundtable Panel hosted by the U.S. Securities and Exchange Commission focused on governance and risk management in the use of AI by financial firms. It featured industry leaders discussing current practices, challenges, and future risks associated with AI technologies in finance. Participants reflected on the necessary frameworks and responsibilities for ensuring AI's responsible use while maintaining a competitive edge. The conversation highlighted the balance between leveraging AI innovations and mitigating systemic risks, emphasizing ongoing engagement and education in rapidly evolving technological landscapes.

      Highlights

      • Val Sapanic kicked off the discussion focusing on governance and risk management in AI usage within financial products. 🌟
      • The conversation explored how AI governance practices are aligning with or diverging from other tech practices. 🔍
      • Panelists emphasized the importance of transparency, risk assessment, and education in AI deployment. 🎓
      • The demand for AI tools is growing with insatiable curiosity and competitive pressure driving adoption. 🚀
      • Firms focus on enabling interoperability and managing data privacy risks in AI applications. 🔓

      Key Takeaways

      • Embrace AI responsibly in finance to stay ahead of the curve while managing risks. ⚖️
      • Effective AI governance requires cross-departmental collaboration and education. 👥
      • Adopt a risk-based approach and learn from other industries' frameworks. 📚
      • Stay informed about evolving AI trends like agentic systems to prepare for future challenges. 🔮
      • Balancing innovation and regulation is crucial to maintaining market integrity. ⚙️

      Overview

      The 2025 AI Roundtable Panel hosted by the U.S. Securities and Exchange Commission brought together experts from various financial sectors to discuss the intricacies of AI governance and risk management. As AI technologies rapidly advance, firms are facing pressures both from internal stakeholders and the industry to implement these tools while ensuring rigorous risk oversight and ethical practices.

        Panelists highlighted that the adoption of AI in finance is not just about technology integration but involves understanding regulatory requirements, creating robust governance infrastructures, and prioritizing educational initiatives for employees. From cloud services to data privacy, the discussion underscored the importance of a systemic approach that goes beyond traditional models to accommodate the unique aspects of AI.

          Emerging trends such as agentic AI systems and deep fakes pose new challenges that firms and regulators need to be aware of. Panelists encouraged proactive dialogue between industry leaders and regulators to establish flexible, principle-based frameworks that support innovation without compromising safety or data integrity. The panel concluded with a call for continuous learning and adaptation to stay ahead of technological shifts.

            Chapters

            • 00:00 - 10:00: Introduction and Welcome The introduction begins with a welcoming message from an unnamed speaker who expresses hope that everyone had a pleasant lunch and opportunity to network. The speaker outlines the session's agenda, which will focus on governance and risk management in the context of AI in financial products and services. The discussion aims to offer insights from a panel of esteemed experts, addressing significant topics that concern both market participants and regulators. Val Sapanic is introduced as the director, setting the stage for the subsequent events.
            • 10:00 - 20:00: Panelist Introductions and Roles The chapter introduces the panelists and outlines their roles, particularly focusing on the SEC's Strategic Hub for Innovation and Financial Technology, known as 'FinHub.' It discusses FinHub's role in exploring emerging technologies that impact financial markets and preparing the commission for these changes. The panel aims to explore best practices in handling these technological advancements.
            • 20:00 - 30:00: Discussion on AI Governance and Risk Management The chapter focuses on AI governance and risk management, exploring key considerations for market participants in developing these practices. It discusses the similarities and differences between AI governance practices and those for other technologies, emphasizing the unique aspects of AI, especially emerging AI. The chapter also addresses the role of regulators in engaging with market participants concerning AI governance and risk management.
            • 30:00 - 40:00: Use Cases and Challenges The chapter titled 'Use Cases and Challenges' discusses the application and supervision of AI in financial products and services. It begins with an introduction where panelists are invited to present themselves, detailing their roles, titles, and their interaction with AI within their organizations. Conan French, one of the panelists, is introduced as the Director for Digital Finance at the Institute of International, marking the start of the discussion.
            • 40:00 - 50:00: Industry Practices and Data Governance The chapter 'Industry Practices and Data Governance' begins with a narrative from a speaker who joined the International Institute of Finance (IIF) about a decade ago, tasked with initiating efforts in artificial intelligence, data science, machine learning, as well as emerging technologies such as tokenization, ledger systems, and quantum computing. The IIF is a research and policy body collaborating with 400 global financial entities, including banks, insurers, asset managers, and payment companies. Over the past eight years, the organization's significant initiative has been to conduct surveys among its members to assess the incorporation and application of AI and machine learning in their operations. This data collection effort aims to understand practical industry usage and inform policy directions regarding new technological trends in the finance sector. The IIF's role in bridging the gap between technological advancements and policy is crucial, especially as it seeks to harmonize practices and encourage innovation across its network of financial institutions, thus emphasizing the importance of data governance in the rapidly evolving fintech landscape.
            • 50:00 - 60:00: Model Validation and Testing The chapter discusses the role of Amazon Web Services (AWS) in engaging with global standard setters and the public sector. It features Scott Mullins from AWS, who is responsible for the financial services industry. He expresses gratitude to the SEC for hosting the event and highlights AWS as an enterprise-grade technology company and a cloud services provider.
            • 60:00 - 70:00: Outsourcing and Third-Party Dependencies The chapter discusses the role of outsourcing and third-party dependencies in the financial services industry, especially with respect to new fintech companies and large institutions. The speaker highlights the wide range of services available from compute, storage, networking, and database services to more advanced options like generative AI services. This expansion in services is part of a broader conversation on innovation within the industry, as highlighted by Ryan Swan, Vanguard's chief data and analytics officer, who expresses appreciation for platforms like the SEC panel for facilitating discussions on these crucial topics.
            • 70:00 - 80:00: Defining AI and Risk Management Frameworks The chapter discusses the importance of AI and risk management frameworks at Vanguard, which manages approximately $10 trillion for 50 million clients. The chief data and analytics office at Vanguard is responsible for the entire data life cycle and serves as a center of excellence for data science, AI, machine learning, and behavioral science. These capabilities are used to responsibly drive business outcomes for clients, including personalization strategies.
            • 80:00 - 90:00: Regulatory Considerations and Recommendations The chapter 'Regulatory Considerations and Recommendations' discusses the importance of maintaining an offensive and defensive strategy to quickly scale capabilities. It features insights from Vanguard's perspective over the past four years. Additionally, Janna Powell from DTCC speaks on her roles in tech research and innovation, specifically focusing on areas like digital assets, Web 3, and artificial intelligence. The chapter emphasizes the ongoing evolution in these fields and the significance of panels like the one organized by the SEC to foster dialogue and innovation.
            • 90:00 - 100:00: Future Challenges and Emerging Risks The chapter discusses the challenges and emerging risks associated with new technologies and next-generation capabilities. It highlights the necessity of establishing proper governance and frameworks, especially in the context of artificial intelligence (AI). Within the Depository Trust & Clearing Corporation (DTCC), efforts are underway to develop and execute an AI strategy, which includes setting up pilots and moving them into production. Additionally, the chapter emphasizes the importance of strategic initiatives such as upskilling to ensure that AI becomes democratized across DTCC.
            • 100:00 - 110:00: Conclusion and Thank You This chapter, titled 'Conclusion and Thank You,' features Jeff McMillan, the head of Firmwide AI at Morgan Stanley. He expresses gratitude towards the SEC and humorously refers to Morgan Stanley as a 'small boutique financial services firm,' despite its established presence in the industry. McMillan reflects on the beginning of their innovative work in AI, which started back in March 2022, highlighting Morgan Stanley's pioneering efforts in the financial services sector.

            2025 03 27 AI Roundtable Panel 03 Transcription

            • 00:00 - 00:30 good afternoon everyone i hope you had a good lunch and had a good opportunity to meet each other and uh talk a little bit more and network um we're going to kick off the afternoon with a discussion of governance and risk management around using AI and financial products and services we have an esteemed panel of experts that will delve into some of the rich topics and ones that are top of mind for mark market participants as well as regulators so I'm Val Sapanic i'm the director of
            • 00:30 - 01:00 the SEC's um strategic hub for innovation and financial technology which is affectionately known as the Finn Hub um and we delve into kind of emerging technologies as they're uh on the leading edge and as they may impact the financial markets and so our job here at FinHub and with others is to come up to speed on those technologies and make sure that the the commission is prepared um so this panel is going to explore good practices that are
            • 01:00 - 01:30 developing around AI governance and risk management we'll try to identify some key considerations for market participants as they develop these practices we're going to discuss how these practices are the same or different from practices that have developed around other types of technology and explore how those differences may be tied to the particulars of um AI and in particular emerging AI and we're also going to address how regulators can best approach engagement with participants on this
            • 01:30 - 02:00 topic and supervision of it of participants around their AI use cases in financial products and services so first I'm just going to ask the panelists an easy question to go down the uh line here and introduce themselves uh tell us their role their title and how they work with um AI within their organization just briefly so let's start with Conan thanks and thanks very much for the SEC pulling this together today my name is Conan French i'm director for digital finance at the Institute of International
            • 02:00 - 02:30 Finance usually called the IIF and I joined about 10 years ago to launch our work on AI data uh machine learning as well as tokenization and ledger systems and quantum computing and we're a research and policy organization that works with 400 banks and central banks uh insurers asset managers payments companies around the world and one of the things that we do and have done for the last eight years is survey all of our members on their use of AI and machine learning in their operations um
            • 02:30 - 03:00 and we use that and and other research that we conduct for engagement with the primarily the global standard setters but the public sector around the world so thanks for having us uh I'm Scott Mullins from Amazon I'm Scott Mullins from Amazon Web Services where I'm responsible for the financial services industry like Cona I'd like to thank the SEC for having us here today uh to to participate in in this event amazon Web Services is an enterprisegrade technology company we're a cloud services provider we provide web
            • 03:00 - 03:30 services for organizations uh in the financial services industry from the newest fintech companies all the way to the largest systemically important institutions in the world uh we have uh the wonderful opportunity to surprise provide not just uh compute and storage and networking uh and database services but also uh today to provide generative AI services as well i'm looking forward to the conversation uh good afternoon i'm Ryan Swan Vanguard's chief data and analytics officer i also want to thank the SEC for putting on uh uh this panel i think it's
            • 03:30 - 04:00 a very important topic uh at Vanguard uh we manage uh about 10 trillion dollars uh 50 million clients uh and in the chief data and analytics office we're responsible for the entire data life cycle as well as a center of excellence for data science AI machine learning behavioral science and we and we use those capabilities to help us responsibly drive uh uh business outcomes for clients um personalization we'll talk a little bit about some of the things we do but our strategy has
            • 04:00 - 04:30 always been kind of this offense and defense kind of approach that allows us to um to scale capability uh quickly and so I've been at Vanguard now about four years and so happy to be here hi my name is Janna Powell and I want to echo uh the sentiment thank you so much for the SEC uh putting this panel on um I work at uh DTCC and I lead tech research and innovation and I really have kind of three meta focus areas one being digital assets and web 3 two being AI and that takes up um obviously a lot
            • 04:30 - 05:00 of our mind share and three emerging technologies and nextgen capabilities so within AI um we really have focused on setting up the right governance and frameworks across DTCC uh we've been setting up the AI strategy the execution plan running PC's pilots um all the way through uh to production um you know considering multiple other uh strategic initiatives such as um upskilling initiatives to make sure that we're democratizing AI across DTCC
            • 05:00 - 05:30 getting the right tools in the right hands of the right people i feel I'd be remiss if I didn't thank the SEC um so thank you u my name is Jeff McMillan i'm uh the head of uh firmwide AI at Morgan Stanley if you haven't heard us we're a small boutique financial services firm um but um you know our the work we've done really started uh actually March of 22 we uh were actually probably the first financial services company to
            • 05:30 - 06:00 partner with OpenAI we have a very unique relationship with them um you know just as a rumor that was nine months before ChachiBT uh became like a household world and you know for over three years I I I would say that I've failed at more things related to Gemini than most of you um you know it's a complicated space but we have learned a lot in that process um and uh I think we're now at sort of an inflection point as an organization where we're starting to see real tangible value uh from this technology
            • 06:00 - 06:30 and um looking forward to the conversation well thank you we'll dive right in so since the emergence of chat GBT and other technological innovations relating to AI there's been increasing attention and interest in the use of AI and financial firms and what seems to be a proliferation of general purpose and more tailored natural language processing tools other forms of AI as you've heard have been around um and used within the financial industry for for years but there appears to be um
            • 06:30 - 07:00 enthusiasm and competitive pressure I think driving demand around these new newer technologies which in some ways present new or heightened considerations around things like non-determinative models data quality and protection cyber security just to name a few um this in turn translates to investor and marketer protection issues which were of course uh of paramount concern to us at the commission um so in this first question
            • 07:00 - 07:30 I'd like to discuss uh what industry practices are developing around AI use case governance and risk management um exploring how these practices cover the AI life cycle from for example getting ideas within the organization of how these technologies might be used to implementation to testing to um deployment all the way into u monitoring and potentially deprecation um and then are there calibrations about
            • 07:30 - 08:00 um these risk management procedures and do those rely on for example the technology used um use cases uh or any other kinds of factors so that's a lot there but I think we have um kind of representatives from across the industry here so I'm really curious um what they each have to say about um AI governance and risk management how those practices are developing so let's start with Jeff so when we started um it was everyone
            • 08:00 - 08:30 wanted to participate and everyone wanted to have their point of view uh shared and in all cander you know the education level even today is quite low about the understanding of these tools um so and and and in the absence of clear regulation right there's you know there there's no edict yet from the SEC on what we can or can't do uh with these tools specifically although I would argue that a lot of the existing regulation is is quite applicable uh the first thing that we did was we established a set of uh guiding
            • 08:30 - 09:00 principles for our work so first of all um we require you transparency on the use of AI so if you produce something it's got to say that it came from AI you have to show the source information and using different vector approaches you can actually with a lot of specificity provide a tremendous amount of transparency on what generated that piece of content um which by the way is something that we actually don't always do with humans um we require uh governance so every use case goes
            • 09:00 - 09:30 through a um an oversight um I co-chair with with the head of research um we have second line functions which is no different than we've always had and then finally we um we require a robust evaluation process and for those of you who use this technology um it's incredibly easy to build and it's very challenging to deploy and when I say deploy deploy responsibly and you know we'll probably talk more about this but the traditional approaches that we use with our model review teams are are not
            • 09:30 - 10:00 sufficient to really govern this and actually in many ways they're they're completely not applicable but because the word model is in their in their organization's title they feel that they're accountable to do something but you really have to take u a more input output approach and then the other thing that we've done which has only been more recently now that we kind of had legs underneath this we've moved to a risk based approach some of these use cases are completely with I would say without risk but are very very low particularly things are human in the loop and some of these things are actually you know have
            • 10:00 - 10:30 have meaningful risk to your organization and I think going that approach and really requiring people to identify upfront what their risks are um in the process of saying I want to do this this this work prioritizing appropriately and then saying here are the mitigants that I have i think all of those things are are are things that candidly we should all be doing regardless of the regulatory framework so I just have a followup um are you seeing an increased demand within the organization across um domains for the use of these tools and and experimenting
            • 10:30 - 11:00 with different use cases and how does that translate to for example internal education that you'd have to provide to a different set of users as opposed to just data scientists that are used to working with data and technology i mean the the the the demand is insatiable um people are using these tools every day in their personal lives um particularly if you're under 25 you're you know you you're you're using these tools like you know they're your hands and I think
            • 11:00 - 11:30 um so there's no there's no problem with demand right the problem to your point is making sure that people are appropriately trained they understand what they're doing we actually do prompt engineering training we've probably trained over a thousand people in the techniques of prompt training and I think you we were just actually talking this before you know your data scientists are actually in many ways not the right people to do this work um so we've actually spent a lot of time training what I'm going to describe that 20-year-old set to use these tools in an
            • 11:30 - 12:00 appropriate way in a governed way and um they just want to run right and I think the challenge there is just making sure they run um in a controlled transparent uh way so that what they're producing is actually uh meets the needs of the business thanks Jeff i'm gonna move to Janna now wonderful thank you so I think much like Jeff you know we're we're seeing a lot of similar trends across the industry i think um you know a number of them are establishing dedicated governance
            • 12:00 - 12:30 functions for onboarding AI technology um you know it's a brand new technology obviously you have to think about it a little bit differently um we have teams who are creating this is not just in DTCC but uh you know partners we see across the uh the industry creating AI specific uh risk management frameworks risk assessments um life cycle uh oversight committees and um efficient operating models but you know I think it's really important to make sure that
            • 12:30 - 13:00 you have oversight of the entire life cycle of the AI technology from development to deployment to uh production to you know essentially monitoring post-p production and so you know at DTCC we also started right out the gate with publishing an AI policy um and that was to just clarify um you know guidelines around what is appropriate AI usage what are the risks what are the benefits we launched uh mandatory training programs across TTCC as well um
            • 13:00 - 13:30 we formed an AI council and I think the um really important thing about this was not only that it's a governance body focused on AI and onboarding AI technologies but that it had deep strong representation across every single department within DTCC we also uh did the same with our AI enablement team which was responsible for ensuring that the technology we onboarded DTCC is aligned to our AI policy
            • 13:30 - 14:00 um we established various working groups like the responsible AI working group various processes like um rapid experimentation process so what that enabled us to do was experiment in a much more expedited way than we otherwise would have been able to before but in a very safe way um so you know I can't stress enough the importance of having representation across the board as early as possible because this technology you know it's it's so
            • 14:00 - 14:30 ubiquitous now it wasn't quite as ubiquitous last year and the year before but it's you know it's going to continue growing it's going to impact every single facet of the world and if you think about it there are going to be complex problems that need to be addressed uh from a legal perspective from a technology perspective from a risk perspective from a compliance perspective from a business perspective every single department is going to be touched so you need to make sure that um you are engaging the right stakeholders
            • 14:30 - 15:00 at the the right seniority level that um even though at first it might come across as complex and uh you know overhead heavy um it's it's almost like you know what uh what the military special ops forces say right um slow is smooth is fast and ultimately that enables you to move faster if you do all of that work up front um so we you know work also with key control functions to you know establish what the frameworks are for example for technology risk management we have a uh you know
            • 15:00 - 15:30 essentially a controls matrix that defines how we should be thinking about onboarding technology based on various risk levels um and obviously these practices can be calibrated depending on the risk of the specific use case risk of the data whether it's um externally facing whether it's internally facing and so on but um that's that's essentially how we've been approaching it and I think it's consistent with um how we've seen most players in the industry approach it
            • 15:30 - 16:00 so just to one followup um you mentioned that there needed to be um individuals from various domains within the organization as part of the team so that they're understanding all the issues that come into play when it comes to some of these more complicated systems um can you talk a little bit about lessons learned or challenges you faced in making an enterprise level governance risk and risk management uh framework and you know the kind of buyin you have to get you know and tone from the top to
            • 16:00 - 16:30 to to accomplish that yeah um well I think uh inevitably there are going to be a few you know gaps as you're creating something a little bit new you're trying to also fit it within some of the existing processes some of the existing uh you know control functions um and you're also trying to be as efficient as possible so um you know there were a couple of times when uh a particular stakeholder probably needed to be engaged earlier than they were engaged right and so we we learn you
            • 16:30 - 17:00 know there are certain problems that uh maybe the team didn't quite see a certain nuance of until maybe a couple of months later so that um you know that obviously can be a challenge i think um the other challenge was that you know there there were already uh certain processes and governance functions in place um and we really needed to kind of break those apart a little bit and think through how we needed to modify and augment um you know but also streamline
            • 17:00 - 17:30 so that we were as efficient as possible in in getting um at least experimental uh initiatives through Ryan thanks yeah thank thank you for the question i think it's interesting when you when you in the in the beginning of your question you talked about um you know the competitive pressure and the fastmoving pace i think the other thing we're seeing is client expectations are changing right so it's not just the competition but it's hey just like you said the the our crew our our employees
            • 17:30 - 18:00 but also our clients are using AI in their everyday life and so when it comes to how that shows up in the financial industry I think expectations um are starting to evolve uh and we and we see that we see that loud and clear vanguard um what we did about two about two and a half years ago we started to experiment rapidly but responsibly in in a few key areas primarily around knowledge management finding information uh and synthesizing information making it easy
            • 18:00 - 18:30 for crew to find that information uh content creation uh as well as code generation helping our software developers uh be more productive and so what we did was very similar very similar to my colleagues was we created a genai steer code that is cross divisional uh in nature so in that I chair it um but there's there's risk partners there's business partners there's legal there's security there's technology architecture and what they're
            • 18:30 - 19:00 doing is they're they're looking at those use cases and they're doing really two things one they're doing those security assessments to say hey um where is our data going is it a third party is a fourth party involved hey is is it within our our our ecosystem and how are we monitoring and testing whether that the capability is actually working the way we intended it to that's the first thing they're doing to help the the organizations that want to that want to take a genai use case take a a traditional AI use case and kind of move forward but they're also looking for
            • 19:00 - 19:30 patterns um and the patterns are important because what we find is depending on the type of data that you want to use and depending on the type of platform is in I'm able to provide kind of a a a move fast but responsibly right and so very similar use cases that are using similar type of data so we've classified all of our data you know the PII of the world all the way down to public information depending on kind of the risk assessment of the data the
            • 19:30 - 20:00 ecosystem that you're using it in we're actually able to give crew uh uh uh different ways to go about their pilot their their use case um if it's very novel and we need to look at it then we obviously we we we take it through a full assessment but it's that that allows us to really understand and focus on the areas where the most risk the most risks are and so that that gener also elevated the need for AI literacy kind of across the firm so two
            • 20:00 - 20:30 three years ago about three years ago we rolled out kind of a data literacy uh uh enterprisewide program to help senior leaders and uh across the organization build data literacy we since have morphed that into our AI academy that helps build uh capab uh capabilities and understanding of things like prompt engineering uh that that Jeff talked about that helps our business leaders and our business partners really understand AI and how it works and the risk associated with it so they can be
            • 20:30 - 21:00 partners with us in the process right so that we we kind of shift the kind of the risk assessment and and sometimes uh that security assessment left in the process meaning closer to at the point of creation versus a afterthought after we've uh built something and want to deploy something we kind of bring them along the entire way and so that's kind of how our genai stairco uh kind of was was set up we also very similar to my colleagues um established guiding
            • 21:00 - 21:30 principles on what we what we were willing to use within the firm what we were willing to use externally with third party uh with third party providers like AWS and others um and so that has allowed us to to really rightsize uh the risk and controls based off of the like the risk assessment and where we think uh the high risk or critical risk are and so that's kind of how we think about it and one thing you mentioned that brought a question to mind you you mentioned that you know the
            • 21:30 - 22:00 younger younger users or newer users um are much more fluent in digital technology and you've got your clients you've got people within your firm using this stuff have you had to um address an issue of potentially people taking work home and working on uh non-firm systems to and then have having to address those kind of data privacy um and data security issues yeah you know so within
            • 22:00 - 22:30 within our ecosystem our SISO and our global technology office um we have a an environment that protects us from folks taking things out like off of our network or outside of our our our boundaries if you will um and so that really hasn't been a a challenge for us but um the I I have seen in our industry where um if you um don't continue to evolve right and don't continue to mature the technology within your your ecosystem that does become a bigger risk
            • 22:30 - 23:00 why because um you know the the the recent graduates from colleges today they're using these technologies in their school and they want to work at places that allow them to use the greatest technologies you we just have to find a responsible way for them for them to do it so it it's not a problem that we see at Vanguard but it is something that I see as a risk in our industry if if our um larger institutions historic institutions don't you know continue to evolve the technology that we use so thanks um and so I'm going to turn now
            • 23:00 - 23:30 to Scott and see if he can bring some perspective from AWS to this discussion well I would I'd start by echoing what you've heard from Jeff and Janna and Ryan in relation to how the industry itself is is moving to adapt these new technologies and the wonderful thing is is that for many organizations uh they're now in the second decade of cloud adoption and what cloud really represents is just modern technology uh of today and so um as many people talked about uh developing really what we would
            • 23:30 - 24:00 call maybe a center of excellence where you're bringing crossf functional teams from across the firm uh to look at existing business processes to look at existing governance existing policies existing procedures that's been going on within the industry now for uh almost 12 years we're in the second decade of cloud adoption and so many many organizations like like the three here on the stage with us already have this type of policy and procedure and governance in place what's that enabled organizations to do is actually to apply that to this new and evolving technology
            • 24:00 - 24:30 for data and analytics and so we're seeing that as a theme across the industry where you have good technology governance and policies and procedures in place uh and it's a riskbased approach to the adoption of that technology those organizations are able to actually move a lot faster than those who've not yet made those first steps in actually modernizing their underlying infrastructure and many many of the points that have been made uh by my fellow panelists um ring true across the industry today taking a very riskbased
            • 24:30 - 25:00 approach to looking at use cases the use cases haven't changed in the industry the use cases are still the same but what's evolving is how you address those use cases and I think what's um important for all of us to remember um is that there's always a human in the loop from the standpoint of the decisions that we make in relation to what use cases we apply this technology to i'll give you an example um in the earliest uh phases of adoption of generative AI uh we were talking to many
            • 25:00 - 25:30 of our customers about what use cases are you exploring how can we help you with making sure that you're adopting this in a very responsible safe and secure way and by and large many of our largest institutions said to me "Scott we have a 100 use cases 95 of them are about our own internal productivity how do I equip my teams to do more um with their time and spend less time on tasks?" And so what that translated to is um maybe what you've heard about um coding assistance being able to actually
            • 25:30 - 26:00 um give more time to developers to actually write and develop code rather than to having to debug code or having to actually uh go in and do test scripts against code and so some or organizations using tools like uh our arc tool called Q for Q developer um have saved as much as 40% in productivity from the standpoint of their developers being able to actually take undifferiated heavy lifting away the tasks of going through line by line and debugging code or running those test
            • 26:00 - 26:30 scripts equally important and this is something we do at at Amazon uh today ourselves is being able to actually give people access to information across an organization based on their own internal permissions for information um I don't know about you but I I work at an organization that has many different silos of information and I've worked at financial institutions that also have many different silos and sometimes it's hard to get access to information that you might need to actually perform your job function even if you have entitlements to it and so what we've
            • 26:30 - 27:00 seen is that more and more organizations are looking for assistance and agents that can help with getting access to information that you need based on the existing permissions that you have using existing policies and procedures and governance and at Amazon we use our own tool called Amazon Q for business and my team uses it not only to access knowledge stores but to help them actually improve on the tasks that they're doing we're a big writing culture we don't use um slides to present ideas we we present narratives and so even for myself when I'm writing
            • 27:00 - 27:30 a narrative that I'm going to present uh to someone I'm throwing that into Q for business and I'm asking it to help me improve that much like a developer would for their code and so I think what we're seeing first and foremost is first the adoption of this technology to make improvements in our own productivity and now organizations are looking at those business use cases where do I feel comfortable from a risk mitigation perspective and allowing this technology to perform more and more tasks that are businessoriented just to follow up on that um so one of
            • 27:30 - 28:00 the big considerations for a firm whether it's large or small is how to deploy uh technology and and how how to either develop inhouse or use a vendor or a platform um to use you know internally generated uh technology or to use open source models what what are some of the considerations that your clients are are facing and going through when they're determining you know what to do in-house what to do through a platform
            • 28:00 - 28:30 there there there are really three main decisions to make when making a buy versus build decision um your risk appetite um the speed at which you want to do something and the cost i'll give you a real life example from a couple of months ago i was having a a conversation with a CIO of a of a of a large bank and he was telling me about their adoption of generative AI and their work to try and build a platform that they could deploy to the entire organization um that platform would have built into it
            • 28:30 - 29:00 their own controls all their policies all their procedures and the governance they wanted across the organization you would enable a developer or a business user to create an environment and in that environment you could uh bring an open source model and you could actually use that model in a safe and secure way where your data would not be shared outside of that environment you'd have your own guard rails right there for you and he was expressing to me the frustration that he had that this idea that they had internally uh was not coming to life they'd spent two years
            • 29:00 - 29:30 building this uh particular platform it was not yet in production and they had spent tens of millions of dollars of development time actually trying to bring this to life and I said to him I have I have an offering and a service that can meet those needs right now and where I think organizations have to make decisions about is where where is my energy best spent where are my people's time best spent where's my capital best spent and am I spending
            • 29:30 - 30:00 that on things that are durable and unique to the firm and the value that we present to customers ryan you mentioned the demands of consumers and how those are changing rapidly am I investing in areas that can actually deliver return on investment or am I inventing something that I could go and get from a vendor that's already proven it already meets my requirements from security and a soundest perspective and so that I think is the big decision point for people is where can I invest to get the best return at the speed I need to
            • 30:00 - 30:30 actually meet the changing demands of my customer base thanks um Conan to you i I I know that your organization recently did um a report um that you've had broad input from a number of um constituents on that report so I'm wondering if you could comment on this area any findings of your report or any trends that you're seeing in the industry well I think reflecting on the answers the panel has given so far um certainly indicate what
            • 30:30 - 31:00 we've seen as we've looked across the industry and that has been a careful cautious and very responsible approach to these tools um you know risk assessment uh coupled with looking at the AI tools being considered and the activity being applied to uh and really trying to understand the data involved what's being used as some of the other panels have uh explored today there's been you know rapid evolution that just keeps accelerating in AI where we saw you know again a a playbook for predictive AI inhouse that was really
            • 31:00 - 31:30 well established and then um starting three years ago again just rapid and continual change with uh large language models open source um changing assumptions uh about the economics involved so it's it's been a time of trying to stay you know on top of it and making sure that that responsible approach is applied to how these tools are controlled and used and I think that's one of the questions going forward is how does the industry um stay at the forefront as it has been for a very long time in making sure that the
            • 31:30 - 32:00 best tools are are brought to um the benefit of of their customers and the economy at large um so again we see that that cautious approach we also see an elevation of governance and and oversight and one of the big changes in the last three years has been um we we now see 74% of the firms um that were involved in our report have a seuite officer who is responsible for AI governance and oversight and that was something significant and gives you a sense of again the strategic um considerations that are are being um
            • 32:00 - 32:30 weighed as uh as these tools are are being evaluated and deployed do you have any insights into um I mean it seems like the resources required to put some you know effective governance risk management pro procedures and policies in place could be large for any particular firm do you see huge differences with sizes like larger firms smaller firms uh anything we should be looking out for here here at the agency
            • 32:30 - 33:00 i don't think that there's been a particular signal you know as you indicated um staying at the forefront requires a lot of resources people have talked about you know upskilling um and that's certainly something that I think the public sector is probably facing as well as you uh try to make sure that people understand what's going on in code um what we're talking about in in data data treatment and and flows of data so I think um it may not be a a simple answer um about resources or this
            • 33:00 - 33:30 cohort uh really seeing success and others not um so I think there's uh you know diverse set of answers um and it really you know just depends on making sure that you're you know focused in the mission trying to understand what you're you're doing with these tools and uh have the right skills um to execute the mission ahead so I'm going to throw out to the the entire panel and whoever wants to answer can answer um I think we started off by
            • 33:30 - 34:00 in the in the first panel talking about how one might define AI as you're setting out a riskmanagement framework do you find that you have to define what type of AI you're talking about um and put um policies and procedures around it on a technology um specific basis or is this simply all AI goes into these risk frameworks so anyone who wants to speak can can go ahead on that
            • 34:00 - 34:30 well maybe I'll just offer an observation that one of the things that I've certainly heard from industry is maybe a good starting point is defining what's not AI and you know has been alluded to there are lots of things in modeling um that are classic predictive and again the the rule book is uh very well understood so um perhaps a first step is ruling out those things um that don't have novel issues uh that need to be considered and and that's usually a
            • 34:30 - 35:00 pretty good first step um I agree with that i uh you know I think what um what we started with when we were thinking about you know model risk management and and how we need to actually be you know thinking about uh these new gen AI models was just the the stark difference uh between deterministic models and you know the new generative AI models that have opacity that are you know like a black box and are not necessarily predictive
            • 35:00 - 35:30 in nature and um you know so how do you think about augmenting your governance frameworks to um adjust for that um so that was actually you know quite a challenge for us that um that we wrestled with for for quite a while but I think you know you need to think about um you kind of need to just adjust your frame frame framework and um your mindset and uh and and overall perspective and you know think about um model interpretability you need to think about um performance and accuracy monitoring you need to think about um
            • 35:30 - 36:00 you know different things that uh you consider when you're building a ragbased model for example um what is a ragbased model it's essentially you know leveraging a third party LLM for example and uh incorporating your internal uh data from your firm um right and essentially being able to leverage a queryable database that's something that's inherently very different from the deterministic models uh that we've been working with before so you know how can you be you know thinking about
            • 36:00 - 36:30 evaluating a model like that but in a meaningful way um not only for model risk management but you know also for the developers and thinking about how effective is this model um you know so one framework I really like uh for that is the ragas model that um you know essentially helps uh assess metrics like uh like answer relevance like uh you know context relevance um accuracy um retrieval relevance and so on and that that has helped us a lot it's that that
            • 36:30 - 37:00 goes a bit further I think than just you know defining uh AI but you know you kind of have to think about what what are the implications in that yeah well just to add on to my colleague I couldn't agree with you more i think the one thing I would add is some of the lessons that we learned is that we have to continue to renew and think about our risk our riskbased framework because the the technology is constantly changing and it's not a it's not a one-sizefitit all the same type of controls that
            • 37:00 - 37:30 you'll use for deterministic models that you'll use for generative AI models but there are um automated ways that allow you to um reduce the risk but also extract synergy ies extract value and so the the importance of the cross divisional kind of the crossexpertise kind of stereo to help identify what the risks are but then use a risk framework to say okay based on how we uh you know assess the risk what
            • 37:30 - 38:00 are appropriate controls or measures we can do to reduce that risk to a acceptable risk appetite and then and then make a decision on whether or not um we want to we want to go forward on it based off of the value that that capability is actually bringing to us and so it because the space is moving so so quickly I think it's very important to have a framework but also to revisit it as as technology and new technology comes into the enterprise um as you as you move forward So
            • 38:00 - 38:30 so one thing that we've heard a little bit about is uh deterministic non-deterministic models um I think when you're talking about large language models um in foundation models you're dealing with complexity um and unpredictability in certain circumstances and so I wonder how are firms approaching things like model validation testing and monitoring um are
            • 38:30 - 39:00 there measures that are in place that are are used to to to back test models what are the best way to test models does it depend on the model in the use case are you looking for things within a range or unacceptable error rates what what kind of um metrics are kind of developing around that area yeah I'll I'll start um so again for people who aren't aren't familiar the deterministic models are able to be
            • 39:00 - 39:30 looked at right you're able to see inside them you're able to see the weights and you're able to see what you got and why you got it the large language models you can't there's billions of nodes and you don't know why necessarily this thing this word came after that word so there's a couple of techniques that we apply so first of all it's all about inputs and outputs um you we generally start um we have a software package that we work with that does this
            • 39:30 - 40:00 that will take um 600 inputs and 600 outputs that have been 100% validated and then you essentially put that into your model and say did I get the same answer and generally speaking you will not on the first step so you have to go back and using prompting um over and we don't fine-tune we don't fine-tune any models uh at Morgan Stanley but you go through prompting and then you get to a point where you're consistently producing um accurate results the second
            • 40:00 - 40:30 thing you have to do is you have to know how good your existing processes are which by the way is a challenge for a lot of people because they don't know how good their contracts are or how you know how accurate are they settling securities so you have to kind of get a baseline and then once you've done that then you give it to the wild right you have to get 20 30 40 people and this is an important point you cannot outsource this to your most junior people because the AI is only as smart as your smartest people and culturally most managing directors at Morgan Stanley are not
            • 40:30 - 41:00 involved in testing but we've had instances like when we did our bot we had over 20,000 pieces of feedback over a 9-month period and in many cases there might be only one individual in the entire firm that was able to tell us the answer to that question and whether it was good or not and being able to access those people in scale is is a significant challenge but just to summarize you got to know what good looks like you have to get your good people involved you have to take both a quantitative and then a qualitative
            • 41:00 - 41:30 approach and then you have to share that output and people have to demonstrate that they've done that work in an appropriate way based on the risk level that um that's been defined i mean it just sounds from your description that that entire process takes a multid-disiplinary team so you've got to have folks who know the models know the data and know the domain for which the application is being applied is that that's right and and and it's not to get too much in the weeds but you have to
            • 41:30 - 42:00 have the person who owns the problem you have to have their first line risk team you have to have the second list risk team and then you might need to have 20 or 30 other people that need to be involved in you know in in evaluating um and you know again the technology give me a problem i I I can write you a solution in an in in an afternoon but getting that to the point where people can say yes I'm comfortable with the quality and by the way I want to be clear like we don't hallucinate like this we're not seeing random questions
            • 42:00 - 42:30 but but that's because we've taken a very precise approach using vector databases to upload the relevant material and then creating the transparency around what you get and why you get it yeah I'll just echo that i mean we we have a very similar process to what uh Jeff just described and um it is it is incredibly surprising to see just how much your endto-end testing process has to change versus you know what it uh what it used to be for just classical
            • 42:30 - 43:00 problems um so I mean the the the implications for what has to happen any ETO testing is uh is is just massive so what we're doing now is you know bringing all of the experts together and kind of figuring out what the testing needs to look like now for a a Gen AI application incredibly different you have to establish um you know what's called a ground truth which means you have to come up with uh you know a list of 20 30 40 50 questions um that
            • 43:00 - 43:30 everyone agrees on and that at least someone knows the exact answer to and can validate it then you run it through the model and you run it through the model multiple times to see how often it answers the question correctly versus how often it answers the question incorrectly it gets a numerical score and then you have a sense for how well your model uh performed right but that is is very different from uh from how classical testing and worked and those things can drift right those things can
            • 43:30 - 44:00 change over time depending on like the use case for that individual model so you could you could you could do that with with one model on one use case but then testing it against a different use case with a different set of questions it could it could perform differently and so having that as part of your process and having an agreement of like these are the inputs these are the outputs that I I expect based off of a use case you know that doesn't mean that that model should be used for all different types of use cases right
            • 44:00 - 44:30 depending on the criticality you may need to test and validate the model in different ways and so um we We use a very similar approach we do it at the enterprise divisional level um first line uh our risk partners are involved um we even do peer reviews for lower risk models um that are not in critical business processes so there is some uh you know some some kind of uh uh classification if you will that helps us uh scale um but it's it's a ongoing it's it it it's not a oneanddone type of
            • 44:30 - 45:00 thing yeah and I would just say that to do all of that uh clear data lineage and clear model lineage are really important across the full model development process and then you've got to have the right model monitoring and guard rule capabilities to make sure that the model is performing as intended yeah actually I was going to follow on to that question um you know some of these models that have been trained on historic data will work really great
            • 45:00 - 45:30 under normal conditions but then when there's an unpredicted event or some black swan event um they don't perform so well so how does a risk management framework take that into consideration is that through stress testing or red teaming or combination of all kinds of testing through unexpected circumstances or synthetic data i wonder if folks could comment on that
            • 45:30 - 46:00 i mean I I think there's a number of things you said you said a few of them right obviously stress testing scenario stress testing sensitivity analysis um you know bias assessments um there's also you know once you have everything in prod you can uh you can implement continuous monitoring frameworks um that can detect model drift um degradation in in performance or unintended bias over time um so you know you can integrate them um into your uh into your
            • 46:00 - 46:30 production environment i was just going to add the one thing that we found is that you can use the AI to monitor the AI um and create more scale now there's some some techniques you have to be really careful about to do that but you know one of the things um that I believe as an industry um AI is ultimately going to be a risk reduction tool right everyone's talking about all the risks and concerns but what we see with this technology is any process has a wide standard deviation of output you have a
            • 46:30 - 47:00 group of people that perform at a very very high level you have a group of people that are kind of okay and then you've actually got a long tale of folks that are maybe new to the job right that candidly don't perform as nearly as well and what AI is able to do it's able to reduce the standard deviation on quality which allows you to make sure that you have more consistent outputs and the AI can help you measure that that process if done in in an appropriate way just want to address a a question at Conan um what we've heard here takes an
            • 47:00 - 47:30 incredible amount of talent and knowledge and upskilling within an organization do you have any insights from the work that you've done about how firms are dealing with talent scarcity and training their folks appropriately to do these very complicated complex tasks well there's certainly been um an allocation of resources uh 100% of the firms uh surveyed this year had an increase of resources um focused on the
            • 47:30 - 48:00 problem uh more than 50% of them had had a 25% uptick this year in the resources allocated um so again I think the combination of that top of house um strategic oversight and view uh plus allocating those resources gives you a signal um of what everybody thinks is possible not just in some of those efficiencies that we heard about um this morning uh but again just in in better outputs and being able to keep pace with um serving your customers with the you
            • 48:00 - 48:30 know insight and anticipating what um they need the other part of the uh solution I think is sitting to my left and certainly uh third-party partnerships is something that financial services have been very expert in managing for a very long time um as you mentioned as these tools and and models and the whole point of them evolves where maybe not understanding where all the training data came from and not understanding exactly how the algorithm works is exactly the point um you know
            • 48:30 - 49:00 is is a step change that's been required over um the past uh year or two but I think that the industry and its technology partners are working um together pretty well to understand that uh to adapt some of the techniques that again had become very well established in the industry over 20 years for uh model governance and think about you know what is applicable um what's already you know well understood and defined in the activity you know rulebook for for um the industry and to figure out how to evolve these
            • 49:00 - 49:30 techniques and these approaches for the new tools so I think that work is well underway um and the resources are being allocated where they're needed thanks and and you brought up another point I wanted to follow up on so you know given the complexity of some of these models um and and the need for good data a lot of data good models and you know sound infrastructure I think a
            • 49:30 - 50:00 lot of firms are turning to third parties and outsourcing these things so just wondering if you could comment on um any any trends you've seen in terms of outsourcing and any concerns you have with kind of third party dependencies or concentrations that are building up with certain vendors providing you know well that that's certainly something that I think the regulators around the world have their eye on uh they are you know as you heard this morning concerned about u model
            • 50:00 - 50:30 convergence if there's too much reliance um would that you know stress the the market function in long term um but I think that sometimes as as Jeff had mentioned there there might be too much of a focus on novel risk and frequently what's driving some of those risks also holds the the answer and so when it you know comes to AI understanding and cleaning data um tracking some of the you know input and and outputs um these are all powerful capabilities of these new tools and so I think you know a lot
            • 50:30 - 51:00 of the answer has has kind of emerged uh I remember you know a few years back uh at a conference like this a senior official had mused you know well you know humans are are kind of a black box and we allocate a lot of decision-m uh authority to a lot of humans who can't really explain why they made that bet on that day or or sort of what was going through their mind um but we've managed them over time we created some guard rails uh we measured their performance
            • 51:00 - 51:30 and their output and we increase uh the responsibility that we allocate to them over time and so I think that you know sort of very careful cautious um timed and responsible approach is is what we see um being employed today in financial services um but the question of how do you keep on the forefront then I think is uh the flip side of that coin so you know a careful responsible approach means as you know things rapidly evolve with AI agents um being released and
            • 51:30 - 52:00 many of our customers might be represented by their agents in the future uh and how do we keep pace with that landscape and I think the next panel is going to bring some of those issues in it's really interesting i wonder if the panelists could comment on how risk management takes into consideration things like concentration outsourcing and thirdparty dependency risk as well as things like AI supply chain risk and related problems but not to put anyone on the spot you don't have to comment about what your firm is actually doing but just things that
            • 52:00 - 52:30 firms may be thinking about u as they think about these these considerations for their own risk management yeah it's it's it's it's a balancing act i think one of the things that comes to mind for for us at Vanguard is interoperability right so uh from a technology stack perspective we have a micros service architecture which means um we use a lot of APIs and so as we build in capabilities we think that like we know that today we're using the worst generative AI we will ever use right so
            • 52:30 - 53:00 if you know that to be true then you know that you have to build from an interoperability perspective so that you're able to pull things out pull things out put other things in as the technology continues to evolve you build more capability and there is this balance between like what do you build versus what what do you buy um and that and that continues to that continues to evolve too but if you if you're building in a way uh uh whether it's with thirdparty solutions or internally built
            • 53:00 - 53:30 solutions you're building in a way that allows you to be interoperable then I think that that kind of derisk is your derisk the sunk cost if you will kind of of your future innovation um and that's that's how we think about it yeah I think another I I agree with everything Ryan just said but I think um you want to be careful as well about vendor lock in and you know overdependence on one particular vendor so I mean I think there there are a lot of um you know initiatives in place and practices in place uh already that I
            • 53:30 - 54:00 think a lot of firms across the industry have to manage uh third party risk like you know robust vendor management of you know comprehensive due diligence um uh appropriate um you know protections in contracts particularly as they relate to AI um you know ensuring that uh we have ongoing uh third party oversight ensuring that um you know going forward we have the right to potentially um audit the the models um you know uh that
            • 54:00 - 54:30 we um ideally have data usage governance as well um you know that's that's something that is um extremely important I think to a lot of firms you want to make sure that your data is going to be segregated you want to make sure that your model instance is going to be segregated um so there are a lot of things I think a lot of promises that um firms can have third party uh vendors uh essentially promise contractually um to uphold
            • 54:30 - 55:00 yeah I would say from our perspective at Amazon um everything that we provide to our customers is about choice and making sure that they have choice and not just um the components that they take from us so for instance uh when you look at the offerings that we have from a generative AI perspective um we have three different tiers of those offerings the first is at the silicon level uh where we actually offer our own uh chipsets to customers and so you can choose to take a chipset from Nvidia and run Nvidia
            • 55:00 - 55:30 GPUs in AWS or you can use our our chips for training uh or for inference as well so giving uh flexibility in in the way that customers can choose what chipset they want to use at the next level we offer a platform called SageMaker and SageMaker is for folks that want to build and train their own models if you want to get into the actual components and actually building uh those models and training them yourselves you have that flexibility to do that and to build and train your own models and deploy them in a safe and secure environment and at the next layer for those folks who aren't data scientists or don't have
            • 55:30 - 56:00 an army of data scientists on staff and I don't think that many people do although some do um we have something called Amazon Bedrock and Bedrock gives you model choice flexibility and so you have the ability to bring models to bearon Ryan like you said being able to switch out models depending upon how your use case changes or how model performance changes over time and so being able to say you know I want to use um Claude from Anthropic or I want to use uh Llama from Meta or I want to use a model from Hugging Face or I want to use the Amazon Nova model that gives
            • 56:00 - 56:30 organizations the choice to be able to bring models to bear take them out when they don't want to use those models anymore they want to change the use case and so our focus is on number one flexibility and choice for our customers um on top of providing a very safe secure sound walled garden for them to do this experimentation in so I'm going to throw a tricky question out there um you know we hear a lot about risks that are more macro or systemic so if a firm if a number of
            • 56:30 - 57:00 firms are using the same models in the same data sets in the same way that you could have um potential you know cascading effects or a flash crash or um selfreinforcing feedback loops and and something you know more systemic happening in in the market so I wonder if firms at the individual level level are considering stuff like that in their risk management considerations or is that something really for regulators and others to be paying attention to and if it is the latter what are some of the
            • 57:00 - 57:30 metrics or indicators that regulators should be paying attention to when it comes to some of these broader risks that might develop with hurting or collusion or things more systemic in the market let me just start by saying if if there's no alpha there right so I think there's an embedded there's an embedded disincentive for firms to do that because if we're doing the same thing that everybody else is doing we're going to have a very difficult time
            • 57:30 - 58:00 creating return for our clients um I'm not suggesting that may not be an issue maybe for particularly for some of the smaller organizations but from our perspective we would want to do the absolute opposite of that uh we don't want to use shared models we don't want to use shared data and the reality is that I'm less concerned about the models and I'm more concerned about the inputs and the question is is are you bringing inputs that are truly distinctive are you leveraging your own intellectual
            • 58:00 - 58:30 capital or are you leveraging the intellectual capital of some generic um you know market solution um so if I'm a regulator I'd be more focused on the inputs to those models and again I think that there's a real disincentive for organizations to get on board with the same tools because to the point I made you're just going to get the same returns as everybody else sure and I think diversity is probably what folks want to be the end result but um you know there is opacity between firms no
            • 58:30 - 59:00 no one's sharing what people are doing so to the extent they're using the same vendors or the same platforms we could see a buildup of risk and that's kind of what I'm talking about and it's probably something that the firm may not have any insight into but what should we as regulators be worried about but I take your point you're not looking to do the same thing that all your competitors are doing um Janna yeah that's I mean it's a it's a definitely a fair question and I also agree with uh with what Jeff has
            • 59:00 - 59:30 said you know obviously we we don't want to be doing what every firm is uh is doing but we you know I think are taking a very riskbased approach we're using segregated models we're using you know ensuring that our data is segregated um we have uh you know we're developing a pretty robust uh data governance program and uh you know whenever we are building out any kind of solution we start in our AI sandbox which contains about 20 plus different LLMs um and what we do is we actually test the performance we test
            • 59:30 - 60:00 the um response time we test the accuracy across all of them for all of our different use cases so it's not likely that we're going to be using the same exact LLM for every uh every use case now it might uh it it might be that there is one LLM that just emerges in five years as the beall endall LLM that every single person uses at which point you do have that you know potential risk but um you know I think we are taking a a quite diversified uh you know independent approach um and very you
            • 60:00 - 60:30 know objective uh analysisbased approach to you know ensure that that we do have kind of widely uh variable uh models that we're implementing in different use cases yeah let me I would add definitely agree with my colleagues i think um at the end of the day it is about the inputs your data advantage is your AI advantage like majority of financial institutions u we we won't create foundation models right
            • 60:30 - 61:00 like you know yes there will be a set of companies the you know uh that will create models um but when you think about the inputs to those models I think we're all and this is very true for Vanguard we we we don't allow our data to be used in training of the models Right so our data stays within kind of uh you know our environment our ecosystem and so then when we want to use that data to uh get a capability depending on how we're able to create a
            • 61:00 - 61:30 technology solution whether it's a buy or build right within an environment then we use our data within our environment so our our data kind of never leaves kind of our four walls so to speak and each of us are each of the institutions are using our data um and what we know about our clients and what we believe our market outlook to be and our economic research that uh each of our firms are doing to formulate what it is we're optimizing for to create value
            • 61:30 - 62:00 for our clients right and I think that that is a that is a healthy market right and so I think you know to Jeff's point I don't think any of us have a like a value in in kind of doing anything otherwise because otherwise it would you know we wouldn't create any value and and then that's that's not what that's not why we're here you know we're trying to the Vanguard we're trying to take a stand for all investors and treat them fairly and give them the best chance for investment success um and so that's one
            • 62:00 - 62:30 of the ways that we do it and I think if you know when you take the inputs that you've heard from from from these three folks um what you're hearing um is the evolution of existing risk frameworks um and all of these organizations um have existing risk mitigation frameworks that work really well and I think it's important to evolve before we invent and I think um what you've heard them articulate is exactly what we would want an organization to do is to take their existing approach to mitigating risk um
            • 62:30 - 63:00 and to making sure that we have safe and sound uh markets but also investor protection and evolving that to account for the advancement in tools and to keep the clean sweep going down the road um I would say you probably also saw a peak of that regulatory concern when there were you know everyone assumed uh what it would take to deliver and develop LLMs and certainly with DeepSeek and other developments in the marketplace some of those things have been questioned and so I think as we watch the rapid evolution
            • 63:00 - 63:30 see where Agentic AI takes us um again many of those concerns coupled with you know industry seeking alpha um you know amilarate those those concerns significantly thanks i know um Jeff you mentioned before that existing risk management frameworks for models are insufficient for some of the newer models LLMs and Genai Foundation are there um riskmanagement
            • 63:30 - 64:00 uh frameworks that are applicable that um regulators should be looking at whether it's NIST ISO or any others that we can draw from yeah I mean it's it's really all you're doing instead of funneling to the model risk management function you're really sending it to the non-financial risk area i mean we're not new to building technology and I my only point is just I think from a regulatory perspective just acknowledging that
            • 64:00 - 64:30 there's two paths here um one is not better they're just different paths and that you know firms need to kind of reallocate because it has the word model in it everybody just said okay MRM needs to be involved well there's nothing for them to look at so I think it's just an acknowledgement that that that path is not the right path and that they really need to go down to what I'm going to say the traditional riskbased approach evaluate the risks make sure you've got the right mitigants in place and that
            • 64:30 - 65:00 there's a second line function to validate you've done that yeah I would add so I mean we see across the industry a lot of different frameworks being used we've looked at a lot of them um you know obviously the NIST framework and various other frameworks um one in particular the open sourced uh Fenos AI risk management framework uh that's um fintech open source forum so NIST you know it's um it's generally you know a bit broad uh it provides a really structured approach
            • 65:00 - 65:30 i think uh you know when we think about how we're looking at um applying our principles it's it's quite well aligned um the Fenos framework is actually kind of you know built for um financial institutions by financial institutions um who are looking for guidance around um AI AI governance principles so you know would recommend taking a look uh into that framework um it also provides specific recommendations around how to mitigate uh those risks um you know of
            • 65:30 - 66:00 course there's ISO that covers various other aspects and then there's you know on the other end of the spectrum the uh EI EU AI act which is you know legally mandated and uh you know we're we're keeping a pulse on that one as well yeah iso 42,01 is the first standard uh for AI management systems uh um you know as we think about at AWS you know it's important for us I think to lean on global standards what we don't want as
            • 66:00 - 66:30 organizations navigating a patchwork of of different regulations around the world and uh if we can lean into standards that exist today standards bodies like ISO uh as Jonah mentioned I think that that's important for us um as an overall industry uh to be able to apply that across the industry one of the concerns I think would be to make sure that um folks who have moved first so for instance the EU AI act I think has already shown uh that the attempt to go in a prescriptive and um
            • 66:30 - 67:00 preemptive nature it's very difficult to pull off you've already seen the EU um remove and withdraw the AI liability act and so I think that as you think about standards trying to make sure that just first and loudest uh isn't necessarily what sets that standard um would be important for the the global discussion so I think that feeds into the next question really well is um where are appropriate entry points for market supervision market supervisors like the
            • 67:00 - 67:30 SEC how um should we be thinking about the supervision of risk risk management at firms um we want to incentivize good practices but not stifle innovation especially with some of these um applications we talked about where LLM could be a judge for the uh performance of another LLM or you could use um potentially some of these newer technologies in risk management and cyber cyber security so I just want to
            • 67:30 - 68:00 how did regulators walk that line keep that balance if you if you guys have any ideas for us well maybe I'll give some general thoughts um before the colleagues step in but certainly forums like today you know the message from all of the commissioners this morning of that engagement I think requesting information you know we saw US Treasuries RFI earlier that's certainly a great approach for a time like this where things are evolving really quickly so we think that you know an ordered approach with a lot of engagement
            • 68:00 - 68:30 grounding it in science grounding it in uh a risk based and activity based um you know framework uh would be very helpful at this time i would just add one other thing I and this is true for us as well we just need to do more education right it's it's if you're not if you don't understand this technology works you're not going to be an effective regulator you're not going to be effective practitioner you're not going to be proctor oversight i I can't stress enough like getting your people to really understand what the technology
            • 68:30 - 69:00 does and what it doesn't do what the real risks are I think would go a long way in terms of your effectiveness yeah the the one thing I'll add there is um I would caution against just establishing kind of new legal frameworks just because because it's you know you want to you know to to to to the previous point you want to build on existing legal frameworks that are already in play that already apply and
            • 69:00 - 69:30 then find the areas where um you know where there may be gaps right and and and and if you could do that in a way that allows else that doesn't stifle innovation but allows us to reduce the risk at the same time then I feel like that's a that's that's a that's a great starting point the other the other thing I would add is you also want to be technology agnostic right meaning like it's it's you know you you're not going to call out one technology versus another you're you're trying to give us um because the space is moving so fast
            • 69:30 - 70:00 so the moment that you do that in six months it'll be it'll be outdated right so thinking through uh uh kind of an approach that allows you to build a regulatory framework that takes into account of what's already in place so that we don't kind of get the ambiguity of well I got to I got to suffer this one or that one and and then they compete with each other um which makes it extremely difficult so that's what I would add yeah um I think I would echo uh the points made uh by my colleagues
            • 70:00 - 70:30 thus far and I think a balanced principles-based approach is important obviously you want to maintain uh market integrity you want to maintain data protection you want to maintain resiliency and stability and so on but you also don't want to styy innovation um so you know I think looking at the existing frameworks um that are already out there um the relative adoption of frameworks relative feedback around them uh what firms are are using what frameworks and how well is it working uh
            • 70:30 - 71:00 for them you know obviously engagement is hugely important and kind of understanding how it's working within firms I mean I think a lot of firms are in fact self-regulating and and trying to do their absolute best in driving responsible AI inov innovation um I think you know increasing attention on the truly high-risk use cases versus the ones that are just purely about operational efficiency uh is important i think focusing on outcomes is uh is important and to Ryan's point not
            • 71:00 - 71:30 necessarily prescrib prescribing you know a specific technical approach um I think it's important to um encourage uh firms to be transparent around how and why they're using AI um you know stakeholders have a right to know and regulators have a right to know um and I think just ultimately um ensuring that responsibility rests with humans and non AI systems yeah I would echo all my colleagues and what they what they've said here you know in the US we operate
            • 71:30 - 72:00 in a principlesbased rule system and in a principlesbased rule system innovation can precede regulation and you heard Conan give the example of a prescriptive based rule system where regulation has to precede innovation and the challenges that exist there and so I believe that leaning into existing frameworks and existing um tools uh and and governance policies for risk management is appropriate and evolving those over time uh is even more important and um I think on top of that is the importance of dialogue and you heard everybody talk
            • 72:00 - 72:30 about the important of education and open dialogue because the pace of change with this technology is unprecedented in our lifetime and it's only going to continue to get faster and the only way that we can keep pace with it is continue to have these types of forms and this type of dialogue so again thank you to the commission for having us here today so we have about two minutes i'm going to do a lightning round the perfall question so I think um as folks were thinking about governance and risk
            • 72:30 - 73:00 management within firms um I'm sure you know I I refer to it as the Kool-Aid man moment when JAG hit the scene and I'm sure that's when everybody in risk management radar went up and you you thought okay we really got to look at this there are some potential issues here that we haven't faced before what do you think the next thing coming down the pike is that will give you that same kind of um need for attention is it agentic AI um is it something else is it
            • 73:00 - 73:30 quantum so agents y and and just just to highlight and I'm sure this the group will talk about it but the everything that we're talking about here is our own stuff that's combined to the four walls that is secure it's in the AWS infrastructure we have people with money in five years every one of you is going to have an agent and they're going to talk to each other and nobody's and and who's going to define the protocol for those agents there's some emerging concepts um who's going to govern the
            • 73:30 - 74:00 proper use i mean the the ability for fraud prompt injections you can have a 17-year-old kid prompt injecting stuff so we're talking I can reroute your thing from the DTC to my to my credit card i mean the the the the complexity of that environment is I think something that as a regulator you should be thinking really hard about and and I don't know the answer to that question but I think it's going to be a really important one yep agree i think um even scarier is the convergence of agents
            • 74:00 - 74:30 quantum and crypto right so imagine you have an agent that is able to use u you know biometric markers to um you know essentially mimic a human and they're able to then get into multiple systems um you know they somehow know passwords potentially they have access to quantum technology two three five 10 years down the line um that's truly scary and how do you um how do you incentivize an AI agent they're not going to be able to necessarily open a bank account they're going to be paid in crypto so I mean
            • 74:30 - 75:00 we're talking about a future that is quite um daunting yeah I couldn't I couldn't agree more the one thing I'll add I was going to say agentic uh a AI agentic um capabilities but also deep fakes um I think um the AI AI has the ability to sample somebody's voice use that voice sample video create a video uh from you know with your likeness and so I think that that uh that
            • 75:00 - 75:30 capability partnered with an agentic kind of architecture that that kind of has access to certain things uh you know your bank account your personal information that type of thing um I think that is uh something that that that's going to be a real risk and and we we're seeing new threat vectors that are using AI creatively to um you know to to to do bad things and so um so we have to we have to stay on the front foot of it so yeah
            • 75:30 - 76:00 um I'm going to go a different direction i'm not going to say agentic i'm going to say trust the evolution of trust what it means to trust uh early early in the adoption of generative AI I had a customer in the insurance space uh tell me Scott we're really concerned about our brand when it comes to uh generative AI and this was early in the war in Ukraine um and there were videos being produced of of things that look like they were um engagements they were fake
            • 76:00 - 76:30 they were completely fake they were just manufactured uh by AI and this gentleman said "Listen we have a 150 year old brand and we've built that brand over you know several centuries now and it only takes a minute for us to lose trust and we'll never gain that back." And so I think what's going to be very important for all of us going forward is um what does trust look like how can we ensure that the things that we're seeing the things that we're being told um are truth ground truth as mentioned earlier
            • 76:30 - 77:00 and I think that's going to be a very interesting thing for us all all to tackle uh in the years to come yep that u you know point about maintaining trust and I'll I'll tie them together maybe in a world where you know agents are reimagining you know processes and there's a lot of process engineering and maybe digital identity and um some of the you know tracking mechanisms haven't necessarily been well thought out and developed uh there's a possibility to
            • 77:00 - 77:30 lose that trust and and that would be um a big problem and then quantum preparedness a lot of folks are are focused on sort of the downside of quantum developments i think it's actually you know like many technologies a balanced story um but if industry isn't ready uh for that moment that will come before long um and I think it will be uh certainly NIST and others are developing the tools so that industry can be um but I'd put that on the on the list as well quantum preparedness thank
            • 77:30 - 78:00 you well whether it's agentic systems deep fakes or the uh tech technological uh trinity that we talked about um we will be excited to have you here next time to talk about how risk management have factored those uh developments in and with that let me um ask you all to join me in thanking the panel [Applause]