AI Language Models Under Cyber Lens in Hong Kong
Hong Kong Regulators Tighten the Screws on AI Security Risks in Finance
Last updated:

Edited By
Mackenzie Ferguson
AI Tools Researcher & Implementation Consultant
The Hong Kong Securities and Futures Commission (SFC) has rolled out new guidelines to mitigate cybersecurity risks associated with generative AI language models. These measures, targeting licensed corporations, emphasize threat awareness, robust policies, and risk management practices.
Introduction to the SFC's New Guidance
In November 2024, the Hong Kong Securities and Futures Commission (SFC) introduced new guidance focusing on the management of cybersecurity risks associated with generative artificial intelligence language models (AI LMs) within licensed corporations (LCs). This guidance mandates that LCs enhance their cybersecurity protocols to address emerging threats by implementing robust policies and processes. The measures highlighted by the SFC include conducting adversarial testing on AI LMs and data sources, ensuring the encryption of non-public data, and addressing vulnerabilities that may arise from browser extensions and the handling of sensitive input.
The Circular also emphasizes the importance of managing cybersecurity risks stemming from third-party AI LM providers. Licensed corporations are required to perform extensive due diligence, continuous monitoring, and evaluate indemnities to ensure a clear allocation of responsibilities related to cyber risks. Furthermore, LCs should assess supply chain vulnerabilities and risks related to business continuity, supporting a comprehensive approach to cybersecurity that includes building AI governance frameworks and conducting gap assessments to identify any deficiencies.
Learn to use AI like a Pro
Get the latest AI workflows to boost your productivity and business performance, delivered weekly by expert consultants. Enjoy step-by-step guides, weekly Q&A sessions, and full access to our AI workflow archive.














The SFC Circular clarifies its applicability to SFC-licensed corporations employing AI LMs, particularly in high-risk functions such as investment advice or research. These applications necessitate enhanced mitigation efforts to secure sensitive information. The Circular outlines that the necessary cybersecurity measures include staying abreast of current threats, establishing effective controls, conducting adversarial testing, encrypting sensitive information, and maintaining client data security throughout the entire AI lifecycle.
Detailed directives within the Circular address how licensed corporations should manage third-party AI LM providers. These entail performing thorough due diligence and maintaining ongoing monitoring processes. Assessments of indemnities and proper allocation of risks are crucial, alongside evaluations of vulnerabilities in the supply chain and potential continuity risks with third-party vendors.
To adhere to the Circular’s guidelines, licensed corporations must carry out practical steps such as conducting assessments to identify compliance gaps, ensuring adequate budgets and resources are allocated for cybersecurity measures, forming AI governance committees enriched with cybersecurity expertise, and developing standardized diligence questions for vendors. Additionally, monitoring the use of AI to pinpoint new risks is key in maintaining a secure operational environment.
Scope and Target of the SFC Circular
The Hong Kong Securities and Futures Commission (SFC) has introduced a comprehensive circular aimed at addressing cybersecurity risks associated with the use of generative artificial intelligence language models (AI LMs) by licensed corporations (LCs) in Hong Kong. This circular specifically targets AI applications used in high-risk functions such as investment advice or research, which necessitate stringent cybersecurity measures.
Learn to use AI like a Pro
Get the latest AI workflows to boost your productivity and business performance, delivered weekly by expert consultants. Enjoy step-by-step guides, weekly Q&A sessions, and full access to our AI workflow archive.














The scope of the SFC Circular encompasses all SFC-licensed corporations utilizing AI LMs, especially those integrated into functions that are considered high-risk due to their potential impact on financial stability and client security. These corporations are required to enhance their cybersecurity protocols to mitigate potential threats posed by the application of AI technologies.
The Circular mandates that LCs remain vigilant and proactive in the face of evolving cybersecurity threats. This involves maintaining up-to-date knowledge of emerging threats, implementing robust controls, and conducting adversarial testing on AI LMs and their data sources. Moreover, the Circular emphasizes the importance of encrypting non-public data and managing the risks associated with browser extensions and sensitive input handling.
Essential Cybersecurity Measures for Licensed Corporations
The digital transformation wave sweeping across various sectors presents a multitude of opportunities, yet it also brings to the forefront significant risks, particularly for licensed corporations in the financial sector relying on artificial intelligence. The recent Hong Kong Securities and Futures Commission (SFC) Circular is a crucial wake-up call for these entities, urging them to enhance their cybersecurity strategies substantially. With generative AI language models becoming integral components in functions such as investment advisory services, the potential cyber threats loom larger than ever.
To respond effectively, the SFC mandates that licensed corporations must not only stay abreast of the evolving threat landscape but also implement actionable cybersecurity controls. This entails conducting adversarial testing to anticipate potential vulnerabilities in AI systems, enforcing robust data encryption practices, and establishing stringent measures to handle sensitive information securely. Moreover, the guidance emphasizes the need for careful management of technological risks arising from browser extensions and similar exploitable touchpoints.
A significant aspect of the SFC guidance is its directive on managing third-party AI LM providers, which is especially pertinent given the complexities involved in AI supply chains. Licensed corporations must conduct thorough due diligence and maintain continuous oversight when engaging with these external vendors. This includes evaluating their cybersecurity readiness, demanding appropriate indemnities, and ensuring that there is a clearly defined distribution of cyber risk duties among all stakeholders involved. Additionally, identifying and mitigating supply chain vulnerabilities stands crucial to avoiding disruptions and maintaining business continuity.
Practically, the SFC's guidelines recommend that firms undertake gap assessments to pinpoint areas needing improvement in their current cybersecurity frameworks. Allocating appropriate budgeting for these initiatives is equally vital to support ongoing compliance efforts. Furthermore, establishing AI governance structures with designated committees or roles specializing in cybersecurity can provide oversight and strategic direction. The development of standard due diligence protocols for vendors and constant assessment of AI-driven service vulnerabilities can fortify a company's defensive posture against emerging threats.
Learn to use AI like a Pro
Get the latest AI workflows to boost your productivity and business performance, delivered weekly by expert consultants. Enjoy step-by-step guides, weekly Q&A sessions, and full access to our AI workflow archive.














Managing Risks from Third-Party AI Providers
In today's rapidly evolving technological landscape, managing risks associated with third-party AI providers has become crucial for organizations, especially within the financial sector. With increasing reliance on artificial intelligence for various operational functions, organizations are more exposed to cybersecurity threats and operational vulnerabilities stemming from their AI partnerships.
A recent Circular by the Hong Kong Securities and Futures Commission (SFC) sheds light on these concerns by providing detailed guidance on managing cybersecurity risks associated with generative AI language models. Licensed corporations (LCs) in Hong Kong are mandated to stay abreast of emerging cybersecurity threats, establish robust policies, and conduct regular adversarial testing on AI systems to mitigate these risks. This includes encrypting non-public data and addressing potential vulnerabilities inherent in third-party services such as browser extensions and data handling protocols.
The guidance emphasizes the importance of due diligence, ongoing monitoring, and the assessment of indemnities related to third-party AI providers. This means LCs must ensure a clear delineation of cyber risk responsibilities while evaluating vulnerabilities within their supply chains and assessing business continuity risks. As part of these risk management efforts, conducting gap assessments to identify deficiencies and allocating appropriate resources for compliance are key steps.
Moreover, the SFC's guidelines point to the necessity for all LCs to develop AI governance frameworks aimed at overseeing the cybersecurity risks associated with AI deployments. Such frameworks not only facilitate compliance but also enhance the capability of organizations to detect and respond to threats quickly, improving overall resilience against AI-related risks.
Industry observers have noted that this regulatory guidance could have significant implications. Economically, it might increase operational costs for firms required to enhance cybersecurity measures. However, it could also stimulate innovation as firms strive to integrate cutting-edge cybersecurity solutions into their operations to comply with the new requirements. Socially, these measures aim to boost consumer trust in financial AI applications, ensuring that user data is secure and responsibly managed.
Politically, the proactive stance by Hong Kong's regulatory bodies in managing AI risks might set a precedent, encouraging other jurisdictions to adopt similar measures. This could promote global standardization in AI governance, enhancing international cooperation and positioning Hong Kong as a leader in AI regulation. The focus on accountability and human oversight further underscores the need for a balanced approach that weighs technological advancement against potential risks.
Learn to use AI like a Pro
Get the latest AI workflows to boost your productivity and business performance, delivered weekly by expert consultants. Enjoy step-by-step guides, weekly Q&A sessions, and full access to our AI workflow archive.














Practical Steps for Adhering to Guidelines
The Hong Kong Securities and Futures Commission (SFC) has introduced comprehensive measures to fortify cybersecurity protocols amongst licensed corporations (LCs) dealing with generative AI language models (AI LMs). Practical adherence to these guidelines not only demands staying abreast of evolving cyber threats but also necessitates the establishment of resilient policies and controls. Conducting adversarial testing on AI models, encrypting sensitive data, and addressing vulnerabilities, particularly from browser extensions and delicate input handling, are pivotal actions for LCs. This section delineates methodical steps for licensed corporations to conform to the SFC's meticulous standards and effectively bolster their cybersecurity framework.
To begin with, LCs are advised to undertake thorough gap assessments to identify and rectify deficiencies in their current cybersecurity measures. Once these gaps are identified, setting aside adequate budget and resources for compliance becomes crucial. Ensuring that the financial and human capital is allocated wisely can significantly enhance an organization's ability to meet the SFC's requirements. Moreover, LCs should consider forming dedicated AI governance committees comprising members with cybersecurity expertise to oversee and fortify the risk management process continually.
Another critical step is advancing due diligence and perpetual monitoring of third-party AI LM providers. Given the intricate nature of AI applications and their integration with third-party services, LCs must assess indemnities and scrutinize the allocation of cybersecurity risks meticulously. The evaluation of supply chain vulnerabilities and business continuity risks ensures that potential weaknesses do not compromise the overall system. By doing so, firms can preemptively address any shortcomings and mitigate the risk of potential disruptions.
A fundamental pillar in adhering to the Circular’s guidelines involves establishing standard diligence protocols for vendor evaluation. Crafting detailed questions aimed at understanding and assessing vendor cybersecurity practices should be prioritized. This initiative can prove instrumental in uncovering and mitigating any latent risks posed by external service providers. In parallel, keeping abreast of AI use cases and continuously monitoring them for emerging risks forms an essential practice to safeguard consumer data and uphold regulatory compliance.
Lastly, fostering transparency and human oversight in AI operations is emphasized to ensure accountability remains with LCs rather than the AI technologies themselves. This entails robust verification processes, particularly for high-risk applications such as those involving investment advice. The SFC's guidance accentuates that despite the technological prowess of AI LMs, the ultimate accountability for cybersecurity lies with human operators. Consequently, investing in the necessary workforce training and cultivating skills reflective of evolving AI governance and cybersecurity demands are pivotal to adapting effectively to these guidelines.
Comparative Global Perspectives
The landscape of global cybersecurity regulations has been marked by increasing attention to the risks posed by AI, with new guidance from various jurisdictions seeking to mitigate these challenges. In November 2024, the Hong Kong Securities and Futures Commission (SFC) issued a pivotal circular that underscores the need for robust management of cybersecurity risks linked to the use of generative artificial intelligence language models (AI LMs) by licensed corporations (LCs). This move aligns with global trends, as financial regulators like the New York State Department of Financial Services also emphasize the importance of updated risk assessments and incident response strategies tailored to AI-induced threats.
Learn to use AI like a Pro
Get the latest AI workflows to boost your productivity and business performance, delivered weekly by expert consultants. Enjoy step-by-step guides, weekly Q&A sessions, and full access to our AI workflow archive.














Hong Kong's recent initiative sets a precedent for managing AI-related cybersecurity risks, offering a detailed framework that mandates LCs to fortify their defenses against evolving threats. The SFC's circular highlights a comprehensive approach, calling for adversarial testing, stringent encryption practices, and close monitoring of third-party AI LM providers. This holistic strategy aims to secure sensitive data and ensure operational resilience within Hong Kong's financial sector. The circular also stresses the balance of leveraging AI's advantages in enhancing threat detection and response while safeguarding against potential abuses.
Parallel regulatory advancements in New York further illustrate a shared global mission to enhance cybersecurity measures amid AI's integration into finance. The New York DFS's guidelines recommend multifactor authentication and comprehensive incident response plans, underscoring the new challenges posed by technologies such as deepfakes. Together, these efforts reflect a growing international consensus on the necessity of proactive and adaptive cybersecurity policies that address the multifaceted risks of AI.
Reports such as the November 2024 Capgemini study reveal that nearly all organizations employing generative AI have encountered cybersecurity issues, signifying a critical need for improved security frameworks. The Hong Kong SFC's emphasis on identifying vulnerabilities and ensuring business continuity resonates with findings that security lapses can lead to significant financial setbacks. As AI continues to pervade various sectors, the implementation of these guidelines could serve as a benchmark for others to replicate.
The international dialogue around AI and cybersecurity not only highlights shared risks but also fosters collaboration among nations in standardizing AI governance. Hong Kong's proactive measures may serve as a model, promoting dialogue on global AI ethics and data protection protocols. This cross-border approach is imperative for developing cohesive strategies that transcend individual markets, ultimately fostering a more secure digital global environment.
Expert Opinions on the SFC's Guidance
Experts and industry leaders have expressed diverse viewpoints on the Hong Kong Securities and Futures Commission's (SFC) recent guidance regarding the management of cybersecurity risks associated with generative artificial intelligence language models (AI LMs). The guidance has been praised for its thoroughness and practicality, with an emphasis on a risk-based approach. High-risk AI applications, such as those used for investment advice, are seen as requiring stringent mitigation measures.
Debevoise & Plimpton LLP commended the SFC's circular for its clarity and detailed directives aimed at managing AI-related risks. They emphasize that the implementation of security measures should be proportionate to the AI application's risk level and importance. The law firm stressed the need for robust cybersecurity measures, including thorough network protections and rigorous evaluations of third-party providers. They also recommend proactive steps such as conducting gap assessments and designing standard diligence questions for evaluating vendors.
Learn to use AI like a Pro
Get the latest AI workflows to boost your productivity and business performance, delivered weekly by expert consultants. Enjoy step-by-step guides, weekly Q&A sessions, and full access to our AI workflow archive.














MinterEllison focused on the SFC's prioritization of human oversight and accountability within licensed corporations. They underline that despite the technological advancements of AI tools, accountability should rest with the corporations using them, not the AI itself. The firm highlighted the necessity for human verification, especially in scenarios where AI is deployed for investment advice. The SFC's guidance supports rigorous validation and client disclosure concerning the AI applications in use.
BABL AI addressed potential risks like hallucinations and biases that could arise from using AI language models, highlighting the four key principles set by the SFC. These principles include strict management oversight and a structured risk management approach for AI models. The discussion around these aspects underscores the importance of maintaining ethical standards and transparency in AI deployment in financial settings.
Economic, Social, and Political Implications
The recent circular issued by Hong Kong's Securities and Futures Commission (SFC) addresses the growing cybersecurity concerns associated with generative artificial intelligence language models (AI LMs) used by licensed corporations (LCs). It mandates LCs to stay abreast of emerging cybersecurity threats and enforce robust policies to mitigate them. Key protocols include adversarial testing on AI models, ensuring encryption of non-public data, and vigilance against vulnerabilities inherent in browser extensions and sensitive data inputs.
Moreover, the SFC guidelines emphasize a thorough risk management approach towards third-party AI LM providers. Licensed corporations are urged to undertake comprehensive due diligence, continuous scrutiny of these providers, and ensure clear allocation of cyber risk responsibilities. Evaluating supply chain vulnerabilities and continuity risks also constitute essential components of this directive. This helps LCs maintain a fortified cyber environment against potential external threats.
Financial institutions must also consider the practical aspects of adhering to these guidelines. Key actions include conducting gap assessments to pinpoint security deficiencies, allocating adequate budgets for compliance, and setting up AI governance structures. These structures should involve experts focusing on cybersecurity to monitor AI use cases, ensuring robust protection of sensitive financial data.
The economic implications of the SFC's guidance could include increased operational costs for LCs as they strive to comply with these comprehensive cybersecurity measures. However, by fostering a robust cybersecurity culture, these institutions could enhance their competitiveness and ability to innovate by adopting cutting-edge technologies. Implementing these directives may ultimately drive trust among consumers, assuring them of the safety of their personal and financial information when dealt with through AI systems.
Learn to use AI like a Pro
Get the latest AI workflows to boost your productivity and business performance, delivered weekly by expert consultants. Enjoy step-by-step guides, weekly Q&A sessions, and full access to our AI workflow archive.














Socially, this comprehensive guidance could bolster consumer confidence in the use of AI within financial services by protecting sensitive data through rigorous oversight and management. The requirement for human oversight alongside AI applications may lead to a shift in workforce requirements, demanding greater emphasis on AI governance, policy formulation, and cybersecurity expertise to ensure seamless integration.
Politically, Hong Kong's decisive actions could act as a blueprint for global regulatory frameworks, influencing other nations to enact similar measures for AI oversight and governance. This could not only elevate Hong Kong's standing as a thought leader in financial regulation but also stimulate more robust international collaboration on AI standards, thus potentially attracting more investments and bolstering its status as an eminent financial center in the Asia-Pacific.
Thus, the SFC's guidance on managing AI-related risks extends beyond local implications, potentially sparking a broader dialogue about AI ethics and the fine balance between leveraging technological advances and managing associated risks. This conversation might shape future policy discussions both within Hong Kong and on the international front, underlining the need for a coordinated, holistic approach to AI governance.
Conclusion: Future Directions for AI Cybersecurity
The conclusion section underscores the ongoing importance of robust cybersecurity measures as artificial intelligence continues to integrate into various facets of society and business. As generative AI becomes more prevalent, the potential for cyber threats increases, necessitating measures that are both preventative and responsive to new vulnerabilities.
The Hong Kong SFC's guidance on cybersecurity highlights a growing recognition of the need for regulatory oversight specific to AI technologies. Future directions in AI cybersecurity will likely involve a combination of regulation, industry best practices, and technological innovation to keep pace with emerging threats.
One potential direction for AI cybersecurity is the development of real-time threat detection systems powered by AI itself. These systems could identify and neutralize threats as they occur, leveraging machine learning to anticipate attacks before they can inflict damage.
Learn to use AI like a Pro
Get the latest AI workflows to boost your productivity and business performance, delivered weekly by expert consultants. Enjoy step-by-step guides, weekly Q&A sessions, and full access to our AI workflow archive.














Another avenue is international cooperation on AI cybersecurity standards, which would facilitate a more unified global approach to threat management. This could include shared research initiatives and compliance frameworks that transcend national borders, reflecting the inherently international nature of cyber threats.
Furthermore, there is a growing need for specialized workforce development to address AI-specific cybersecurity challenges. As AI governance frameworks become more sophisticated, the demand for professionals skilled in both AI technologies and cybersecurity will rise.
Overall, the future of AI in cybersecurity presents both challenges and opportunities. While there are significant risks associated with AI technologies, there is also the potential for AI to dramatically improve the effectiveness of cybersecurity strategies, leading to safer digital environments globally.