AI Safety or Tech Innovation?
Trump Administration Under Fire for Dismantling AI Protections

Edited By
Mackenzie Ferguson
AI Tools Researcher & Implementation Consultant
The Trump administration's push to roll back AI safeguards set by Biden has sparked a heated debate between innovation enthusiasts and civil rights advocates. By repealing key executive orders and dismantling federal guidelines, the administration aims to accelerate AI development at the risk of discriminatory practices and reduced transparency. The ACLU has responded with FOIA requests and calls for Congress to intervene.
Introduction
Artificial Intelligence (AI) stands at the forefront of technological innovation, fundamentally transforming industries and daily life. However, the rapid evolution of AI technology necessitates robust governance to ensure its safe and ethical deployment. In the United States, the landscape of AI regulation has undergone significant shifts, particularly with recent efforts by the Trump administration to dismantle protections established under President Biden. As highlighted in a detailed analysis by the ACLU, these changes focus primarily on accelerating AI development by removing key safety measures and guidelines. This approach prioritizes swift technological advancement, raising critical questions about the balance between innovation and responsible oversight [source].
The repeal of executive orders and the elimination of federal guidelines are central to this deregulatory strategy, a move that has sparked varied reactions within the socio-political sphere. On one hand, proponents argue that these steps will bolster the United States' competitive edge in the global AI race, particularly against technological powerhouses like China, which recently introduced stringent AI testing requirements [source]. On the other hand, civil rights advocates express grave concerns about potential discriminatory outcomes and the erosion of transparency in AI systems. The juxtaposition of these perspectives underscores the complexity of navigating AI policy in a rapidly changing digital landscape [source].
Globally, nations are adopting varying approaches to AI governance: the European Union is reinforcing its AI regulatory framework by expanding enforcement mechanisms, contrasting sharply with the U.S. approach [source]. Amidst these developments, discussions around AI safeguards increasingly focus on striking a balance that fosters innovation while mitigating risks such as biased decision-making in critical fields like employment, lending, and criminal justice. The ACLU's proactive stance, including filing FOIA requests and calling for Congressional oversight, aims to address these concerns by advocating for essential "common sense guardrails" in AI development [source].
Background and Context
The Trump administration's decision to roll back AI protections established under the previous Biden administration marks a significant policy shift with broad implications for technology development and civil rights. By prioritizing rapid advancement over safety, the administration has repealed several key measures aimed at ensuring ethical AI deployment. For instance, Biden's Executive Order on Safe AI Development, which played a central role in outlining safety guidelines for federal AI use, has been dismantled, raising substantial concerns about transparency and accountability in AI systems. This move aligns with a broader deregulatory approach, focusing on accelerating technological innovation without the constraints of federal oversight.
One of the pivotal motivations behind this deregulatory thrust is the competitive pressure faced by the United States in the global arena, particularly in relation to China's stringent AI testing requirements and the European Union's increased enforcement of its AI Act. These global regulatory frameworks contrast sharply with the U.S.'s current trajectory, underscoring a divide in international AI governance strategies. While some industry leaders argue that reduced oversight could bolster American competitiveness by allowing more flexibility in AI application, significant risks loom, especially in sectors such as hiring, lending, and criminal justice, where biased outcomes could proliferate without proper checks and balances.
Civil rights groups, including the ACLU, have voiced strong objections to these policy changes, emphasizing the potential erosion of protective measures that guard against discriminatory AI practices. The abolition of transparency and oversight mechanisms is seen as particularly troubling, as these tools are essential for ensuring AI systems operate fairly and without unjust bias. As a response, advocacy groups are actively pushing for congressional oversight and common-sense regulatory frameworks that would balance innovation with necessary safety standards, reflecting their profound worry about the implications of unchecked AI advancement on society.
The Trump administration's actions have also sparked intense debate among tech and business professionals. While some in the industry welcome the reduced regulatory burden, which they argue could streamline AI development, others warn that without established safety protocols, public trust in AI could erode, potentially stifling consumer confidence and long-term growth in the technology sector. This tension highlights a critical challenge: navigating the balance between fostering innovation and maintaining public safety and trust, an issue that continues to fuel discussions across various sectors.
Key Protections Removed
The Trump administration's decision to remove key AI protections has sparked intense debate and concern among various stakeholders. By repealing the Biden administration's Executive Order on Safe AI Development, the Trump government has eliminated guidelines that were crucial for ensuring the responsible deployment of artificial intelligence technologies. This move includes the removal of federal directives that mandated transparency, testing protocols, and oversight mechanisms, as outlined by the ACLU. Without these safety measures, civil rights groups worry about the heightened risk of biased and discriminatory outcomes in sectors like hiring, lending, and criminal justice.
The dismantling of AI safeguards has profound implications for both the technology industry and the general public. On one hand, proponents argue that deregulation encourages rapid AI advancements and fosters innovation by reducing bureaucratic hurdles that slow down technology deployment. On the other hand, this approach significantly reduces oversight, raising concerns about algorithmic discrimination and reduced accountability for AI-driven decisions. The ACLU has been vocal about the potential risks, urging for the reinstatement of protections that ensure AI technologies are deployed ethically and fairly.
Key protections such as transparency requirements, comprehensive testing protocols, and public safety measures are among the eliminated safeguards. With these measures discarded, numerous questions arise over the accountability and fairness of AI systems, especially in their applications within public and private sectors. Civil rights organizations, including the ACLU, have emphasized the dangers of rolling back such regulations, which could lead to unchecked AI implementations that harm vulnerable populations.
Critics of the rollback caution that removing these protections can have long-lasting negative effects, particularly around privacy concerns and civil liberties. While major tech companies may benefit from reduced regulatory constraints, the public faces challenges related to discrimination and bias in AI systems without proper surveillance and accountability frameworks in place. The ACLU continues to advocate for the introduction of "common sense guardrails" in AI development to balance innovation with public safety and ethical considerations.
Risks Associated with Deregulation
Deregulation of artificial intelligence (AI) has sparked a significant debate over its potential risks and benefits. One of the major concerns associated with the deregulation of AI is the possibility of discriminatory outcomes. Without stringent regulations and oversight, AI systems may inadvertently perpetuate biases, leading to unfair treatment in critical areas such as hiring, lending, and criminal justice decisions. For example, without proper safeguards, AI-driven hiring processes could favor certain demographics over others, perpetuating existing inequalities and generating further socio-economic disparities.
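As a hedged illustration of the kind of disparity such safeguards are meant to catch (this sketch is not from the article and the data is hypothetical), a basic audit can compare selection rates across demographic groups. One common screening heuristic in U.S. employment analysis is the "four-fifths rule": a group whose selection rate falls below 80% of the highest group's rate is flagged for possible disparate impact.

```python
from collections import defaultdict

def selection_rates(decisions):
    # decisions: list of (group, hired_bool) pairs — hypothetical audit data
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        if hired:
            hires[group] += 1
    return {g: hires[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    # Flag groups whose selection rate is below 80% of the highest rate,
    # a common (non-statutory) screening heuristic for disparate impact.
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

# Hypothetical outcomes: group A hired at 60%, group B at 35%.
audit = ([("A", True)] * 60 + [("A", False)] * 40
         + [("B", True)] * 35 + [("B", False)] * 65)
rates = selection_rates(audit)
print(four_fifths_check(rates))  # group B: 0.35 / 0.60 ≈ 0.58 → flagged
```

A check like this is only a first-pass screen; real audits also consider statistical significance and job-relatedness, which is precisely the kind of analysis that mandated testing protocols would formalize.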
Moreover, the dismantling of AI protections has raised alarm about reduced transparency and accountability in AI systems. Transparency in AI decision-making processes is crucial for ensuring public trust and mitigating risks associated with AI technologies. With the removal of federal guidelines and testing protocols, there may be a significant increase in opaque AI operations, leading to decreased public confidence and potential backlash against AI innovations.
The deregulation efforts championed by certain political administrations prioritize rapid AI development but pose significant civil rights concerns. The lack of oversight mechanisms can allow for unchecked biases to proliferate within AI systems, impacting millions of users who interact with these technologies on a daily basis. Additionally, the drive for swift deployment over safety can lead to AI systems being rolled out without comprehensive testing, exacerbating the risk of unforeseen negative outcomes.
Furthermore, the absence of clear safety measures and oversight could lead to a fragmented regulatory landscape across different states and regions. As some states move to fill the regulatory void left by federal deregulation, companies operating across these jurisdictions may encounter complex compliance challenges and increased operational costs. The international response, contrasting sharply with U.S. deregulation, highlights a global divergence in AI governance approaches, potentially placing U.S. companies at a competitive disadvantage in markets emphasizing stringent safety standards.
Beneficiaries of Deregulation
The primary beneficiaries of deregulation in the field of Artificial Intelligence (AI) are notably the big tech companies. By dismantling existing protections and safety standards, these companies find themselves with increased room for rapid development and deployment of AI technologies. This newfound freedom allows them to innovate without the lengthy processes associated with compliance and safety checks [1](https://www.aclu.org/news/privacy-technology/trumps-efforts-to-dismantle-ai-protections-explained). This can accelerate their competitive advantage on a global scale, particularly against regions with stringent regulatory frameworks, such as the European Union with its enforced AI Act [1](https://www.euractiv.com/section/artificial-intelligence/news/eu-strengthens-ai-act-enforcement-2025/).
Reduced oversight and minimized safety measures mean that these tech giants can implement AI solutions more swiftly, reaching markets faster and potentially gaining a dominant market position. This landscape is attractive for businesses focused on innovation-first strategies, seeking to leverage technological developments to drive growth and profitability. In an increasingly competitive global market, U.S. companies are under pressure to keep pace with international counterparts like those in China, where the government enforces stringent, albeit different, AI guidelines [2](https://www.scmp.com/tech/policy/article/china-ai-testing-requirements-2024).
However, the potential benefits for these corporations come with significant caveats. Without adequate oversight, there is a real risk of discriminatory practices emerging within AI systems, affecting everything from hiring processes to criminal justice decisions. This deregulation could inadvertently foster environments where proprietary algorithms operate with reduced transparency, potentially resulting in biased outcomes that go unchecked [1](https://www.aclu.org/news/privacy-technology/trumps-efforts-to-dismantle-ai-protections-explained). As state-level regulations emerge to fill these gaps, tech companies might find themselves grappling with a fragmented regulatory landscape that complicates interstate commerce [4](https://www.statepolicy.org/ai-regulations-2025).
While some argue that previous regulations stifled innovation with unnecessary bureaucratic barriers, the lack of standardized measures may lead to increased public skepticism and reduced consumer trust in AI technologies. Companies might ultimately bear higher compliance costs as they attempt to self-regulate, implementing proprietary safety protocols in the absence of federal oversight [1](https://www.aclu.org/news/privacy-technology/trumps-efforts-to-dismantle-ai-protections-explained). This dynamic can have profound implications for the market, possibly affecting both adoption rates and long-term viability of AI technologies if consumer trust wanes [10](https://natlawreview.com/article/ai-under-second-trump-administration-ai-washington-report).
Looking ahead, the disparity between federal and state regulations poses another layer of complexity. While deregulation may offer immediate advantages, the potential for inconsistent policies across states could lead to operational challenges and increased costs for companies navigating a maze of regional laws. The international community's reaction to these deregulatory moves, especially allies concerned with AI safety standards, could further influence both political and market dynamics [5](https://www.theguardian.com/technology/2025/feb/global-ai-summit-us-absence).
ACLU's Response and Actions
In response to the Trump administration's active dismantling of AI protections, the American Civil Liberties Union (ACLU) has undertaken several proactive measures to address these changes and uphold civil rights in the realm of AI technology. The ACLU has filed multiple Freedom of Information Act (FOIA) requests to scrutinize the extent of data access and usage under the new deregulatory policies. By advocating for Congressional oversight, the organization aims to ensure that the legislative branch remains vigilant in safeguarding against potential civil rights infringements brought on by unregulated AI systems. The ACLU has also been vocal about the necessity of maintaining safety standards, while simultaneously supporting technological innovation. This advocacy reflects a balanced approach, emphasizing the importance of "common sense guardrails" in AI development to protect against discriminatory outcomes in various sectors such as employment, lending, and justice.
The potential risks associated with the removal of AI protections have driven the ACLU to increase its efforts in public awareness and policy advocacy. Given the administration's focus on rapid AI deployment without robust oversight mechanisms, the ACLU's push for transparency and accountability in AI technologies has become more urgent. Their actions include highlighting the risks of biased AI systems to the public and policymakers alike, stressing how unchecked AI can adversely affect marginalized communities. Furthermore, the ACLU's advocacy extends to supporting state-level initiatives where local governments are implementing stricter AI regulations to counterbalance the federal government's deregulatory stance. By engaging in these activities, the ACLU aims to forge a path for ethical AI usage grounded in civil liberties and human rights.
The ACLU's response is situated within a broader context of international concerns regarding AI governance. As the Trump administration opts for a deregulated approach, the disparity between U.S. policies and those of other regions, such as the European Union and China, grows more pronounced. The organization has pointed out the potential long-term implications of this divergence, noting that it might pose challenges for American companies operating globally. Alongside urging national reforms, the ACLU emphasizes the importance of aligning U.S. AI policies with global safety standards to prevent isolation from international efforts in AI governance. Through these multilayered efforts, the ACLU seeks to influence both national discourse and international cooperation, advocating for an AI policy framework that respects rights and promotes ethical development practices.
Eliminated Safeguards
The dismantling of AI safeguards by the Trump administration marks a significant shift in the approach to artificial intelligence regulation in the United States. These changes include the repeal of executive orders on AI safety, originating from the previous administration, that emphasized careful and secure AI development. By removing essential federal guidelines and memos related to artificial intelligence, the focus has shifted towards expediting AI technology's advancement without the previously established checks and balances. Such actions have sparked considerable debate among stakeholders concerned about potential risks, such as discriminatory outcomes and reduced oversight, which might accompany this deregulatory push.
With the elimination of critical safeguards, the AI landscape now presents both opportunities and challenges. On one hand, proponents of deregulation argue that reduced governmental oversight could expedite AI advancements, allowing the United States to maintain a competitive edge in the global AI race. On the other hand, civil rights advocates express concern over the societal impacts of unchecked AI systems, especially in sensitive sectors like hiring, lending, and criminal justice. The removal of safety protocols and transparency measures raises the possibility of AI systems perpetuating or even exacerbating biases, thus endangering vulnerable populations and undermining public trust.
The impacts of eliminating AI safeguards have a ripple effect that extends beyond national boundaries. As the U.S. adopts a more laissez-faire stance on AI governance, other countries like the European Union are doubling down on rigorous AI regulations, potentially leading to a fragmented global approach to AI ethics and safety. This divergence not only poses challenges for multinational corporations operating across different jurisdictions but also risks creating an international environment where safety standards and innovation timelines vary significantly. Critics fear this could lead to an imbalance where rapid technological progress is not matched by adequate ethical considerations.
Global Reactions to Deregulation
The global response to the Trump administration's AI policy shift towards deregulation has been varied and multifaceted. While the U.S. has loosened AI development constraints, allowing big tech companies to accelerate their efforts unhindered by previous safety protocols, international counterparts have taken a decidedly different path. The European Union, for example, has ramped up its commitment to AI safety by reinforcing enforcement budgets and broadening the reach of its AI Act. This move underscores a stark divergence in policy approaches, emphasizing safety and regulation over unchecked technological growth. Meanwhile, China has implemented stringent new AI testing protocols, mandating a government review of algorithms prior to their deployment. This global divergence highlights varying national priorities concerning innovation, security, and public trust in AI systems. As countries navigate these complexities, the dialogue around AI governance continues to evolve, reflecting broader geopolitical tensions and economic strategies. With these differing approaches, the international community finds itself at a crossroads, debating the balance between innovation and regulation in the global tech landscape.
The shift in U.S. policy, prioritizing rapid AI development, has sparked diverse global reactions, ranging from support in industry circles to criticism from civil rights and privacy advocates. In the U.S., tech giants like Microsoft, Google, and Meta have responded to deregulation by voluntarily forming the 'Responsible AI Alliance,' pledging adherence to safety standards despite the lack of federal oversight. This coalition underscores a commitment to ethical AI development, reflecting industry recognition of the importance of maintaining public trust and addressing potential risks inherent in AI technology. Meanwhile, at the state level, regions like California and New York have introduced comprehensive AI regulations to counterbalance federal rollbacks. These actions suggest a fragmented domestic landscape where state policies play a crucial role in shaping AI governance. Internationally, the response has been critical, with the Global AI Safety Summit participants expressing concern over America's shift, highlighting fears of increased risks without stringent safeguards. This ongoing debate illustrates the complexity of aligning domestic and international regulatory frameworks within the fast-evolving AI sector.
On the social front, the regulatory changes have ignited intense discussions worldwide. Public interest groups and civil rights organizations within the U.S. have taken to social media to voice concerns about the potential for increased bias and discrimination in AI systems, particularly in crucial areas such as employment, lending, and criminal justice. These debates mirror broader societal concerns about the transparency and accountability of AI technologies. Additionally, the international community is closely monitoring these developments, as the deregulatory stance may foster a competitive edge for U.S. companies but at the cost of amplifying ethical and social challenges associated with AI. The divergence between U.S. and international policies may lead to increased regulatory fragmentation, complicating the operations of global companies and potentially setting the stage for transnational conflicts over AI standards. This complex and rapidly evolving scenario underlines the critical need for ongoing dialogues between nations to harmonize AI policies, ensuring both technological advancement and protection of fundamental rights.
Expert Perspectives
The Trump administration’s recent efforts to deregulate AI protections highlight a significant shift in the U.S. policy landscape, sparking a broad spectrum of expert opinions. On one hand, proponents of the innovation-first perspective argue that such deregulation will enhance competitiveness and spur technological growth in the U.S. These experts believe that removing bureaucratic barriers allows for more rapid development and deployment of AI, positioning the U.S. as a leading innovator on the global stage. They highlight the need for the private sector to take the helm, leveraging existing discrimination laws to guard against potential biases.
Conversely, advocates of the civil rights protection perspective express profound concern over the potential fallout from reduced oversight. They caution that the removal of federal AI guidelines could lead to significant gaps in checks and balances, making way for biases in systems affecting essential sectors like hiring, lending, and criminal justice. Legal experts, such as those at the ACLU, underscore the risks of discrimination and decreased transparency, urging the need for strong regulatory frameworks to safeguard public interests.
This policy shift also intersects with international trends, where contrasting approaches are visible. For instance, while the EU is strengthening its AI regulatory frameworks, the U.S.'s move towards deregulation could create conflicts, especially for multinational corporations that must navigate these discrepancies. Analysts worry that this could result in regulatory fragmentation, particularly at the state level, where some states may establish their own guidelines to fill the federal void.
Public Reactions
The public response to the Trump administration's AI deregulation efforts is deeply polarized, reflecting a broad spectrum of societal values and priorities. On one side, civil rights advocates and privacy organizations have vocally opposed the removal of key safety measures, expressing concerns about increased risks of discrimination in areas such as employment, lending, and the criminal justice system. These groups emphasize the importance of maintaining transparency and accountability in AI systems to prevent the erosion of civil liberties and public trust. Social media has become a vibrant battleground where these issues are hotly debated, with many users advocating for the reintroduction of oversight mechanisms to safeguard against potential abuses caused by unregulated AI technologies [source].
In contrast, segments of the tech industry and some business leaders have embraced the deregulation, seeing it as an opportunity to accelerate innovation and maintain a competitive edge on the global stage. They argue that less regulatory interference can drive faster development and deployment of AI technologies, which could boost economic growth and lead to technological advancements. However, this perspective is not universally held within the industry, as others worry that a lack of safety protocols could undermine consumer trust and trigger a backlash against AI adoption, potentially harming long-term prospects for the sector [source].
The general public's reaction varies widely, with concerns centering around the implications of unchecked AI capabilities. Online forums and communities are filled with discussions about the potential for AI systems to perpetuate existing biases and the challenges associated with holding developers accountable for AI-related issues. Many fear that without adequate oversight, AI technologies may infringe on personal freedoms and privacy, leading to a landscape where consumer protections are weakened. Public interest groups have responded by launching awareness campaigns, seeking to mobilize citizens in demanding robust regulatory frameworks to govern AI deployment responsibly [source].
Implications for the Future
The Trump administration's decision to dismantle AI protections initially established under Biden carries significant implications for the future of technology and its regulation in the United States. With the removal of key safeguards and oversight mechanisms, the landscape of AI development and deployment is poised for dramatic change. The absence of federal guidelines creates an environment where major tech companies could rapidly advance AI technologies without mandatory checks on potential biases or errors. This poses a dual-edged sword: on one hand, it might position the U.S. as a leader in swift AI innovation; on the other, it risks exacerbating systemic issues such as discrimination in employment and lending, biased judicial decisions, and reduced algorithmic transparency [news source](https://www.aclu.org/news/privacy-technology/trumps-efforts-to-dismantle-ai-protections-explained).
Internationally, the U.S.'s shift towards deregulation could alter global dynamics in AI governance. As the EU strengthens its AI regulations and China imposes rigorous testing protocols, a disparity in standards might lead to competitive tensions across borders. This divergence could challenge international companies operating in multiple jurisdictions, forcing them to navigate a complicated web of varying local laws and standards. Meanwhile, domestic tech giants may need to bolster their own safety protocols in the absence of federal oversight, potentially increasing operational costs and impacting their global market strategies.
Politically, the removal of AI safeguards under the Trump administration is likely to heighten polarization within the U.S., with civil rights groups and some state governments pushing back against the rollback of regulations. As states like California and New York draft their own robust AI laws to counteract federal deregulation, there is potential for a fragmented approach to AI policy across the country, complicating the operational landscape for AI developers. Additionally, public advocacy and pressure from consumer rights organizations may push for a renaissance in AI regulatory discussions, possibly spearheading new legislation aimed at reinstating comprehensive AI protections [news source](https://www.aclu.org/news/privacy-technology/trumps-efforts-to-dismantle-ai-protections-explained).
Socially, the implications of deregulated AI development could result in growing public distrust and hesitation towards AI technologies, especially if high-profile instances of AI-related discrimination or privacy violations occur. The acceleration of AI systems, untempered by regulatory oversight, raises concerns about transparency and accountability, potentially leading to a crisis of confidence not just in AI systems but in the institutions that deploy them. As public awareness increases, demand for ethical AI practices and open regulatory frameworks may become more pronounced, influencing both regional and national policy directions.
In summary, while the Trump administration’s efforts to dismantle AI protections may stimulate short-term advancements and efficiencies in AI technology, they also open doors to significant risks and challenges. These changes not only impact technological innovation but also have profound economic, social, and political consequences, requiring careful navigation to ensure that the benefits of AI are realized without compromising on safety and equity. Ongoing discussions and actions from advocacy groups, state legislatures, and international entities will play crucial roles in shaping the future landscape of AI development and regulation.
Conclusion
In summary, the Trump administration's rollback of AI safety measures has sparked a vigorous debate on the balance between innovation and regulation. The swift dismantling of protections established during Biden's tenure raises significant questions about the future of AI governance and its implications for both the technology sector and society at large. On one hand, the reduction in federal oversight may facilitate rapid deployment and potentially propel the U.S. to the forefront of AI development. On the other, it poses considerable risks, including potential biases and a lack of accountability that could affect everything from hiring decisions to criminal justice outcomes, thereby exacerbating societal inequalities and reducing public trust in AI systems.
While the administration's approach is championed by tech industry leaders eager to bypass bureaucratic barriers, it has been met with substantial criticism from civil rights groups and policymakers concerned about the potential for discriminatory outcomes and reduced transparency. Advocates like the ACLU are actively pushing for legislative oversight and common-sense safety standards to bridge these gaps and maintain consumer protection while still fostering innovation.
At the international level, the U.S.'s strategy contrasts starkly with more cautious approaches taken by other regions like the EU and China, which are implementing stringent safety and testing measures. This divergence could contribute to increased geopolitical tensions and competitive pressures, as nations race to define the future landscape of AI.
The future of AI under such deregulatory policies remains uncertain, with potential for both technological advancement and societal challenges. As states like California and New York begin implementing their own regulations, a complex mosaic of state and federal guidelines could emerge, potentially leading to regulatory fragmentation and industry uncertainty. The coming years will be critical in determining how these policies will shape the evolution of AI and its impact on society.