Guiding the Future of AI
EU Takes Lead with Draft Code of Practice for General-Purpose AI: Here’s What You Need to Know
Edited By
Mackenzie Ferguson
AI Tools Researcher & Implementation Consultant
The European Union has unveiled a draft General-Purpose AI Code of Practice aimed at aligning AI models with the AI Act. Released on November 14, 2024, the Code isn't legally binding, but adhering to it offers a 'presumption of conformity' with the AI Act. With its focus on transparency, copyright, and systemic risk mitigation, it's a big step toward regulating AI. The final version is expected by August 2, 2025, and may set a global precedent.
Introduction to the EU AI Act and General-Purpose AI Code of Practice
The European Union's approach to artificial intelligence regulation is exemplified in the EU AI Act and the associated General-Purpose AI Code of Practice. Announced as a pioneering initiative, these measures are designed to manage the fast-evolving domain of AI within Europe, offering a structured framework for AI system developers, particularly those creating versatile, general-purpose AI models. As the EU AI Office released the first draft of the Code on November 14, 2024, stakeholders across the technology sphere are analyzing its potential impacts.
The General-Purpose AI Code of Practice is aimed at guiding providers of general-purpose AI models. Although not intended as a legally binding document, adherence to the Code provides a "presumption of conformity" with the overarching AI Act. This essentially acts as a safeguard, aiding companies in demonstrating compliance with regulatory expectations centered around transparency, copyright adherence, and mitigating systemic risks inherent in AI technologies.
Learn to use AI like a Pro
Get the latest AI workflows to boost your productivity and business performance, delivered weekly by expert consultants. Enjoy step-by-step guides, weekly Q&A sessions, and full access to our AI workflow archive.
The draft of the Code sheds light on several key aspects necessary for aligning with the AI Act. Among them, transparency is perhaps the most emphasized, demanding that AI systems operate with clear and understandable mechanisms for users and overseers. Additionally, copyright compliance requires adherence to intellectual property rights, reflecting a significant concern for content creators and distributors in the digital age.
Systemic risk mitigation remains a critical component outlined within the Code, addressing pressing concerns such as cybersecurity threats and the potential for AI misuse. Examples of these systemic risks include the possibilities of cybercrimes and the consequential loss of control over AI systems, as well as the misapplication of AI leading to societal issues like discrimination and large-scale manipulative practices.
Although the EU's Code of Practice is still in draft form, its anticipated implementation date is August 2, 2025. Leading up to that date, the EU AI Office will seek input and feedback from stakeholders, ensuring that the finalized Code effectively meets the challenges posed by rapidly advancing AI technologies. Observers note that the Code is likely to influence global AI standards, setting a precedent for how AI innovation can coexist with regulatory compliance.
The Importance of the Draft Code Released on November 14, 2024
The release of the draft General-Purpose AI Code of Practice on November 14, 2024, signifies a pivotal moment in the ongoing evolution of artificial intelligence governance in the European Union. While not legally binding, this draft establishes crucial guidelines aimed at fostering transparency, ensuring copyright compliance, and addressing systemic risks inherent in AI models. By promoting a 'presumption of conformity' with the AI Act, the draft code provides a framework for AI providers to align their practices with regulatory expectations, potentially influencing AI governance globally.
The General-Purpose AI Code of Practice represents a significant stride in the EU's comprehensive approach to AI regulation. With its introduction, the draft seeks to balance the need for clear compliance measures with the flexibility necessary to accommodate future technological advancements. Experts from Akin Gump and other organizations emphasize the ambition behind the Code to create a 'future-proof' framework that aligns with international standards while addressing the unique challenges posed by general-purpose AI systems.
Among the key areas addressed by the draft code are transparency, copyright compliance, and the mitigation of systemic risks, such as cybercrimes and discrimination. These elements underscore the EU's commitment to establishing a safe and ethically sound AI landscape. Moreover, the draft incorporates a 'comply or explain' mechanism, allowing providers some latitude in meeting its requirements while maintaining the overarching goal of safe, trustworthy AI deployment.
Given the current trajectory of AI legislation, the EU's draft code, despite being non-binding, is expected to significantly shape the development and implementation of AI systems across the region. The final version, anticipated to come into effect on August 2, 2025, may set a precedent for global AI governance, influencing regulations beyond Europe's borders and potentially leading to increased international cooperation in formulating AI standards and guidelines.
Legal Implications of the Code: Binding Nature and Conformity
The EU's AI Act represents a pivotal regulation that seeks to streamline the development, deployment, and operationalization of AI technologies within its jurisdiction. At its core, the Act assigns AI systems to various risk categories, each accompanied by stipulated compliance and operational guidelines. Within this evolving regulatory landscape, the drafting of a General-Purpose AI Code of Practice emerges as a noteworthy development. Released on November 14, 2024, this draft serves as a pivotal reference for developers and providers of general-purpose AI models, steering them towards best practices in transparency, copyright adherence, and systemic risk management.
Although the Code itself does not possess a legally binding status, adherence to its guidelines cultivates a 'presumption of conformity' with the overarching EU AI Act. This, in turn, offers providers a degree of protection from regulatory scrutiny, since compliance with the draft signals a commitment to ethical and safe AI practices. An ultimate goal of the draft Code is to bridge the gaps between the AI Act's mandates and the practical implementation challenges faced by AI practitioners.
The projected timeline for the impact of the General-Purpose AI Code of Practice is significant. While the final version is on track for a planned roll-out in August 2025, its ramifications are already causing ripples across international regulatory landscapes. Of particular note is the code's potential to shape global standards. Its focus extends beyond regional compliance, bearing implications for global AI governance and operational frameworks.
A series of noteworthy global events underscores the urgency and importance of such a code. These include President Biden's issuance of an executive order on AI emphasizing safety and privacy, the UK's AI Safety Summit culminating in the Bletchley Declaration, China's implementation of generative AI regulations, and the G7 Hiroshima AI Process advocating for universally accepted standards. Each event reflects a stepping stone towards harmonizing AI regulatory efforts globally.
Expert opinions shed light on the dynamic discourse surrounding the code. Legal practitioners from Akin Gump Strauss Hauer & Feld LLP celebrate the draft's potential to create a 'future-proof' framework with global implications. However, some industry voices like those from TechNet raise concerns over its vagueness and possible regulatory overreach, suggesting a need for clearer compliance guidelines.
Understanding General-Purpose AI Models
General-purpose AI models have emerged as versatile tools capable of performing a range of tasks without being tailored for a specific purpose. These models, such as OpenAI's GPT and Google's BERT, can tackle an array of problems from language translation to customer service automation, making them invaluable in many industries. However, their adaptability also introduces complexities in regulation and ethical considerations.
The European Union's recent initiatives, such as the Artificial Intelligence Act, signify a proactive approach to regulating these models. The Act categorizes AI systems based on risk, imposing varying obligations accordingly. In parallel, the General-Purpose AI Code of Practice, although not enforceable by law, offers a blueprint for aligning AI model practices with EU standards, especially in terms of transparency, copyright adherence, and mitigating systemic risks.
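The Act's risk-based structure can be pictured as a simple lookup from risk tier to obligations. The four tier names below reflect the Act's actual categories, but the obligation lists are illustrative summaries for the sketch, not legal text:

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers (labels paraphrased)."""
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Illustrative (not exhaustive) obligations per tier.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["may not be placed on the EU market"],
    RiskTier.HIGH: ["conformity assessment", "risk management system",
                    "human oversight", "logging and traceability"],
    RiskTier.LIMITED: ["transparency disclosures (e.g. labelling chatbots)"],
    RiskTier.MINIMAL: ["no mandatory obligations; voluntary codes encouraged"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

General-purpose models sit somewhat awkwardly in this tiered scheme, which is precisely why the separate Code of Practice exists: a versatile model can end up deployed in contexts spanning several tiers at once.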
Transparency in the deployment and functionality of general-purpose AI models forms a cornerstone of the proposed Code. By mandating clear disclosure of AI capabilities and limitations, the Code seeks to foster trust and accountability. Additionally, it emphasizes compliance with copyright laws, ensuring that AI systems do not infringe on intellectual property rights, an area fraught with challenges as AI learns and generates content.
Systemic risks posed by general-purpose AI models, such as potential cybercrimes and the loss of control over AI-generated outputs, are critical focus areas under the EU's framework. The Code advocates for robust risk assessment methodologies and the development of preventive strategies to mitigate these risks, safeguarding both data security and public interest.
While the Code aims for widespread adoption, it faces skepticism from stakeholders concerned about innovation constraints and regulatory burdens, particularly for U.S.-based AI companies. Critics argue that the draft, despite its good intentions, may stifle creativity and impose unnecessary costs, urging for more clarity and global alignment in compliance requirements to ease cross-border AI deployments.
Key Areas Covered by the Code: Transparency, Copyright, and Risk Mitigation
The General-Purpose AI (GPAI) Code of Practice, drawn up under the guidance of the European AI Office pursuant to the EU AI Act, sets out a streamlined framework focused on transparency, copyright adherence, and risk management. Released in draft form on November 14, 2024, it is designed to guide providers of general-purpose AI models toward compliance with the Act's overarching regulatory standards. Though not legally binding itself, adherence to the Code's principles provides a 'presumption of conformity' with the AI Act and acts as a safeguard against potential violations.
These models, adaptable across various tasks, represent a shift from specialized AI functions to broader utility applications. The Code strives to ensure these models adhere to strict transparency measures, outlining obligations for clear communication about AI capabilities and limitations to foster trust and accountability. It also stresses the importance of copyright compliance, navigating complexities around intellectual property in AI-generated content. Additionally, systemic risk mitigation is a core feature, aiming to prevent scenarios such as cybercrimes, loss of AI control, or discriminatory practices.
The Code's final implementation is anticipated for August 2, 2025, becoming a strategic milestone for businesses and stakeholders within the EU's artificial intelligence ecosystem. This timeline provides a window for feedback and adaptations to ensure practical applicability of these requirements. The Code is also a product of broad international discourse, parallel to significant global initiatives such as the U.S. Executive Order on AI Standards and China's AI regulations.
Relevant to the deployment of General-Purpose AI models, the Code of Practice is a part of the EU's broader strategy to lead in AI governance, potentially setting a standard that influences international policy directions. The draft underscores a forward-looking, flexible, yet comprehensive regulatory strategy that addresses current AI capabilities while anticipating future technological evolutions. Ultimately, this initiative aligns with ongoing global conversations aiming to balance innovation with safety and ethical considerations.
Examples of Systemic Risks in AI
Systemic risks in AI refer to potential widespread disruptions or negative impacts that arise from the deployment and integration of artificial intelligence technologies in various aspects of society. These risks can stem from AI systems that are interconnected, influential, or capable of large-scale operations, often leading to profound consequences if not properly managed.
One significant systemic risk associated with AI is cybersecurity threats. AI systems, especially those that are widely used or relied upon, become attractive targets for cybercriminals. If compromised, these systems can lead to massive data breaches, misinformation spread, or crippling of critical infrastructure.
Another risk is the loss of human control over AI systems. As AI becomes more autonomous, the ability of humans to intervene decreases, potentially leading to unpredictable outcomes. This risk is particularly concerning in high-stakes environments such as healthcare, transportation, and national security.
AI also poses risks of large-scale manipulation and influence, as machine learning algorithms can be used to tailor misinformation or propaganda to sway public opinion or electoral outcomes. The capability of AI to analyze and predict human behavior makes it a powerful tool that can be used both ethically and unethically.
Finally, systemic risks in AI can manifest as discrimination and bias. If AI systems are trained on biased data, they can perpetuate or even exacerbate existing social inequalities. This can lead to unfair treatment of individuals based on race, gender, or socioeconomic status, affecting decisions in areas such as hiring, law enforcement, and credit scoring.
Consequences of Non-Compliance with the Code
Non-compliance with the General-Purpose AI Code of Practice can have significant consequences for AI providers and developers. While the Code itself is not legally binding, it plays a crucial role in guiding organizations towards meeting the obligations of the EU AI Act. Organizations that fail to align with the Code may face increased scrutiny from regulatory authorities. This scrutiny can lead to rigorous inspections and audits to ensure compliance with the broader AI Act provisions.
In cases where non-compliance with the Code indicates a violation of the AI Act, enforcement actions may be initiated. These actions can include fines, restrictions, or even bans on the deployment of AI technologies that do not adhere to the established standards. Thus, organizations not conforming to the Code and the AI Act may encounter legal and financial repercussions.
Additionally, non-compliance can damage an organization's reputation, as adherence to the Code fosters trust and credibility among stakeholders. General-purpose AI models not aligned with the Code may be perceived as risky or unreliable, potentially leading to a loss of business opportunities and partnerships. In an environment where compliance increasingly influences public perception and market viability, neglecting the guidelines set forth by the Code is a strategic risk.
Finally, non-compliance may hinder an organization's ability to innovate and remain competitive, particularly if internal resources are diverted to address regulatory shortcomings instead of forward-looking AI developments. This scenario underscores the importance of embracing the Code's provisions not just as a regulatory checkbox but as a blueprint for sustainable and ethical AI growth.
Key Related Global Events Influencing AI Regulation
Artificial Intelligence (AI) regulation is influenced by various related global events. These events provide context and influence the development and implementation of AI regulations across regions. Key global happenings, such as policy introductions and international agreements, are shaping the future of AI governance.
One significant event impacting AI regulation is the U.S. Executive Order on AI, announced in October 2023. This directive focuses on establishing safety and security standards for AI systems to protect privacy rights. It reflects the importance of governmental oversight in managing AI technologies in the United States, setting a precedent for other nations to follow.
Additionally, the UK AI Safety Summit held in November 2023 marked an important milestone in international AI collaboration. The 'Bletchley Declaration', agreed upon by 28 countries and the EU, emphasizes the necessity for cooperative efforts to address AI risks. This event demonstrates the growing recognition of AI's global impact and the need for unified strategies to manage its development and deployment.
China's adoption of new AI regulations in July 2023 signifies another crucial step towards comprehensive AI governance. By mandating security assessments for AI products before they reach the public, China aims to create a controlled and secure environment for AI innovations. This move showcases a different approach to regulation compared to Western countries but is indicative of a universal trend towards stricter AI oversight.
The G7 Hiroshima AI Process, launched in May 2023, underlines the commitment of leading industrial nations to promote responsible AI development. Through discussions on establishing international standards and guidelines, G7 leaders expressed their intent to foster a globally consistent approach to AI governance. This initiative reflects the shared vision among major economies to ensure AI's positive impact globally.
Lastly, the OECD AI Principles, although established in 2019, continue to be a relevant influence on AI regulation worldwide. These principles provide a trustworthy framework that complements efforts like the EU AI Act. As AI technologies evolve, these foundational guidelines help maintain regulatory coherence among diverse jurisdictions globally. Each of these related events not only supports the development of AI policies like the EU AI Act but also strengthens international networks for achieving safe and innovative AI progress.
Expert Opinions on the Draft Code
The draft of the General-Purpose AI Code of Practice, released by the European AI Office, represents a pioneering effort to set global standards for AI systems. Not legally binding, this code nonetheless offers AI providers a 'presumption of conformity' with the stringent requirements of the EU AI Act. With its focus on transparency, copyright, and systemic risks, the draft aims to create a flexible yet future-proof framework adaptable to global needs.
Leading experts hold diverse views on the Code's implications. Akin Gump Strauss Hauer & Feld LLP sees it as a supportive guide aiding compliance with the AI Act, praising its global reach. Conversely, TechNet criticizes the draft for being vague and excessively stringent, warning it could deter innovation and impose undue burdens. Freshfields Bruckhaus Deringer appreciates its ambitious transparency demands but highlights its 'comply or explain' approach, while CSIS emphasizes its role in aligning with European standards through international cooperation.
Each stakeholder's stance underscores the complex dynamics of balancing flexible global applicability with specific regional requirements, pointing towards an intricate landscape for AI governance. As debates continue, the Code's final version, expected in 2025, will likely integrate these expert insights, aiming to harmonize innovation with accountability in AI practices.
Public Reactions and Opinions on the EU AI Act
The European Union's Artificial Intelligence Act and its accompanying General-Purpose AI Code of Practice have garnered a range of public reactions and opinions. Not surprisingly, these regulations have sparked considerable interest and debate across various sectors and among the general public. At the heart of the discussions are concerns regarding the balance between fostering innovation and ensuring strict regulatory compliance, which has become a focal point for both supporters and critics.
Proponents of the AI Act commend the EU's proactive stance in setting a regulatory framework ahead of potentially disruptive AI innovations. They argue that such regulations are essential for protecting human rights, civil liberties, and setting a global benchmark for AI governance. This cautious optimism is rooted in the belief that robust regulations can lead to safer and more ethical AI applications, which could eventually enhance public trust in technology.
On the other hand, there is apprehension among some stakeholders about the implications of these regulations. Critics point to potential loopholes in the rules, particularly around law enforcement uses such as biometric categorization and emotion recognition. In addition, there are fears that the stringent compliance requirements could stifle innovation and disproportionately burden smaller companies and non-EU businesses.
Moreover, the complexity of the Act has led to uncertainty among businesses and public entities about its implementation and enforcement, drawing parallels to the challenges encountered with the GDPR. This has given rise to concerns over whether the EU can maintain technological competitiveness while upholding its rigorous standards.
Despite these concerns, there is a significant portion of the public and experts who believe that the AI Act serves a crucial role in navigating the ethical and practical landscapes of AI technology. The Act's emphasis on transparency and accountability in AI applications resonates with broader societal expectations, fostering a more cautious yet optimistic outlook on AI developments within Europe and globally.
Future Implications for Global AI Governance
The future of global AI governance is poised to be greatly influenced by the EU's AI Act and its accompanying General-Purpose AI Code of Practice. As the European Union continues to spearhead robust AI regulations, the global AI landscape may adapt to mirror these new standards. The comprehensive frameworks set forth by the EU could become a global benchmark, compelling other nations to align their AI governance structures in accordance with these practices. With this move, the EU strengthens its position as a leader in regulating technology, potentially setting a worldwide precedent. However, this shift may also necessitate increased dialogue and collaboration among international stakeholders to ensure harmonized and fair regulatory environments globally.
Economic, Social, and Political Impacts of the AI Code
The implementation of the EU's Artificial Intelligence Act and the associated General-Purpose AI Code of Practice has profound implications on economic, social, and political fronts. Economically, the AI Code is expected to influence companies significantly, with increased compliance costs that could favor larger firms due to their greater resources to adhere to new regulations. This situation might lead to market consolidation, with smaller AI companies struggling to keep up. Furthermore, the fostering of a compliance-focused industry could generate new business opportunities, yet also potentially slow AI innovation within the EU, risking a competitive disadvantage on the global stage.
Socially, the AI Code aims to protect individual rights and privacy within AI applications, which could result in enhanced public trust and wider AI adoption. By emphasizing transparency and systemic risk mitigation, the Code seeks to curb AI-driven discrimination and bias. Improved social safeguards could increase public confidence in AI technologies, positioning them as more reliable tools in everyday life.
Politically, the adoption of the AI Code enhances the EU's standing as a global leader in the regulation of technology. This strategic movement seeks to establish a governance standard that could become a model for global AI practices. Nevertheless, it may also seed tensions with international parties who might perceive these regulations as stringent or protectionist. Such dynamics put pressure on other regions to formulate comparable AI policies, potentially accelerating global cooperation over AI governance.
Technological Advancements Promoted by the Regulation
The European Union's Artificial Intelligence Act and the accompanying General-Purpose AI Code of Practice represent significant regulatory developments aiming to influence AI technology's path globally. These regulatory measures are designed to ensure AI deployment aligns with safety, transparency, and human rights standards. The EU AI Act categorizes AI systems based on risk levels, which determines the obligations for developers and users.
The Code of Practice, drafted as a voluntary instrument, provides guidance for consistency with the Act and encourages market players to adhere as a matter of best practice. Providers who follow it benefit from a 'presumption of conformity,' integrating smoothly with the expected regulatory landscape even though the Code itself is not legally binding.
Transparency, copyright issues, and systemic risk considerations such as cybercrime and biases are among the core focal points of the regulation. By addressing these, the regulation aims to mitigate potential risks associated with AI while fostering an environment of innovation within a clearly defined legal framework.
Key informative links between the EU's efforts and international activities can be seen in the recent proliferation of similar AI guidelines, such as those from the US and China. The interplay of these international efforts reflects a growing consensus that AI policy needs a coordinated approach. The Act and the Code are thus seen as potentially pivotal in establishing worldwide AI governance norms, potentially making the EU's model a global template.
While the initiative is welcomed by many as a stride towards safer technology integration, it also faces critique related to the breadth and enforceability of its provisions, especially concerns over stifling AI innovation and disproportionately impacting non-EU companies.
International Collaboration and Market Dynamics
The "International Collaboration and Market Dynamics" section delves into the multifaceted implications of the EU's AI Act and the emerging General-Purpose AI Code of Practice. It highlights the pivotal role of international cooperation in shaping global AI regulations and the nuanced market shifts precipitated by these evolving rules.
The EU AI Act is a landmark regulatory framework aimed at overseeing the development, deployment, and use of artificial intelligence within the European Union. One of its most groundbreaking components is the General-Purpose AI Code of Practice, which, although not legally binding, provides a "presumption of conformity" for companies that adhere to its guidelines. This code emphasizes crucial areas such as transparency, copyright adherence, and the mitigation of systemic risks, offering a structured pathway for AI providers to align with EU standards.
This regulatory evolution reflects the EU's aspiration to set a global benchmark for AI governance, potentially influencing international norms and regulations. As noted in expert analyses, the final version of the Code is expected to take effect by August 2, 2025, giving entities a timeline for aligning their practices with the new standards. Meanwhile, the impact on market dynamics could be profound: the EU's stringent positioning may catalyze shifts in the global AI market landscape, compelling companies to weigh compliance costs against the broader economic implications.
The international response to the EU's AI regulatory initiatives reflects a broader movement toward global AI governance. Significant events such as the U.S. Executive Order on AI, the UK AI Safety Summit, China's AI regulations, the G7 Hiroshima AI Process, and the OECD AI Principles highlight a collective acknowledgment of the need for robust AI oversight. These actions exhibit a concerted global effort to balance innovation with safety and ethical considerations.
Moreover, the dialogues surrounding the EU AI Act point to a growing realization of the need for cross-border regulatory synergy. While the EU's initiative positions it as a potential frontrunner in tech regulation, it also raises questions about the harmonization of these rules with pre-existing international standards. The ambiguities, as pointed out by entities like TechNet, suggest a cautionary navigation through regulatory landscapes, especially for non-EU companies with significant operational footprints in Europe.
Market dynamics are poised to shift as smaller firms might find the compliance landscape daunting, leading to potential market consolidation. However, the introduction of a distinct "EU-compliant" segment could reshape global AI product offerings, establishing a benchmark that other regions might adopt or adapt to suit their regulatory environments. This scenario nurtures the possibility of an evolving marketplace where compliance is not just mandatory but also a strategic differentiator.
In summary, the EU's AI Act and the General-Purpose AI Code of Practice are harbingers of a new regulatory era in artificial intelligence. Their influence is set to extend beyond EU borders, sparking a re-evaluation of international standards and prompting discussions on the future trajectory of AI governance worldwide. As such, stakeholders must remain engaged and adaptive to the unfolding regulatory frameworks shaping this fast-evolving domain.