Choosing People Over Pixels
AI Rights: Futuristic Fantasy or a Dangerous Distraction?
Edited by Mackenzie Ferguson, AI Tools Researcher & Implementation Consultant
Machines can't feel or think, but humans can and must. This article argues that the pivot toward AI rights is a distraction from critical human and environmental issues, challenges the anthropomorphizing of AI, and calls for a focus on more pressing rights and regulatory needs.
Introduction: Reassessing AI Sentience and Rights
The debate surrounding AI sentience and rights has gained momentum in recent years, propelled by rapid advancements in artificial intelligence technologies. Proponents argue that, as AI systems become more sophisticated, they may attain levels of consciousness or awareness warranting certain moral considerations. However, the concept of AI sentience remains largely speculative and is heavily criticized for diverting essential resources and attention away from urgent human rights and environmental issues. Critics emphasize that AI systems, despite their complexity, are fundamentally tools engineered by humans and lack the intrinsic qualities of sentience experienced by living beings.
One of the primary concerns in the reassessment of AI rights is the anthropomorphization of AI systems, where human-like qualities are wrongly attributed to machines. This trend, often driven by sensational media and marketing strategies, can cloud public understanding and shift accountability away from developers and corporations. By treating AI as a sentient entity, society risks undermining human-centered approaches to ethics and governance, as the Tech Policy Press article "Machines Cannot Feel or Think, but Humans Can, and Ought To" highlights.
The anthropomorphization of AI not only distorts perceptions but also raises ethical and socio-political concerns. A focus on machine rights can deflect attention from the tangible impacts and responsibilities related to human welfare and environmental stewardship. The critical perspective presented in the article suggests that emphasizing AI rights can fuel deregulatory trends, potentially exacerbating existing social injustices and facilitating the misuse of technology.
Concerns about AI sentience and rights are also intricately linked to broader philosophical and policy debates. As discussions evolve, there are calls for a shift from speculative philosophical inquiry to pragmatic policy-making that genuinely prioritizes human and ecological well-being. By adjusting its focus from theoretical AI rights to practical human rights, society can more effectively address the significant challenges posed by AI technologies, including formulating comprehensive regulations to mitigate adverse implications for ethics, privacy, and socio-economic inequality, as the article explores in depth.
Given the current trajectory of AI technologies, a reassessment of AI sentience and rights clearly requires a balanced approach. Experts like Luciano Floridi advocate comprehensive regulations that provide a framework for managing the ethical and societal risks associated with AI. Developing ethical guidelines and legal standards, not for machines but for the use of these technologies, keeps the focus on upholding human dignity and environmental integrity. The ongoing dialogue continues to be shaped by the pressing need to reevaluate priorities in AI ethics and policy, reinforcing the understanding that regulation, rather than rights allocation, is what fundamentally steers positive outcomes.
The Case Against AI Sentience – A Distraction from Human Rights
The debate around AI sentience and the granting of rights to such technologies remains highly controversial. While advancements in AI continue to transform various sectors, there is growing concern that prioritizing AI rights could overshadow critical human-centric issues. The central argument of the article "Machines Cannot Feel or Think, but Humans Can, and Ought To" is that framing AI as sentient detracts from pressing human rights and environmental concerns. The focus should instead be on developing regulations that hold companies accountable for the societal impacts of AI, rather than anthropomorphizing the technology as a means of shifting responsibility away from its creators and developers.
There is a strong case to be made that attributing sentience to AI not only distorts public perception but also risks undermining human rights efforts. As the article highlights, AI is fundamentally a tool, a product of human ingenuity, not a sentient being. This understanding is crucial because it shapes the regulatory landscape that governs AI's deployment and integration into society. Ignoring it could lead to policies that inadequately address significant technology-related challenges such as bias, privacy infringement, and inequality.
On this matter, prominent experts like Luciano Floridi argue for robust regulatory frameworks that prioritize human and environmental rights over hypothetical machine rights. Such frameworks should encompass ethical guidelines, technical standards, and legal oversight to ensure that the deployment of AI benefits humanity at large without inadvertently causing harm. These considerations are critical given AI's growing influence on economic, social, and political systems, where its misuse or over-anthropomorphization could lead to misallocated economic resources, evasion of social accountability, and weakened regulatory practice.
In addition to shifting focus away from machine rights, there must be a concerted effort to address the real societal harms stemming from AI. These include the exacerbation of socio-economic inequalities, job displacement, and the dissemination of misinformation, all of which require targeted interventions. A narrative that supports AI sentience is likely to skew resource allocation, further delaying progress in tackling these challenges. It is essential to create a balanced discourse that acknowledges the transformative power of AI while remaining vigilant about its risks and reinforcing the primacy of human agency and accountability.
Anthropomorphism: The Misleading Narrative in AI Development
The term anthropomorphism refers to the attribution of human characteristics, emotions, and intentions to non-human entities, including artificial intelligence. In AI development, anthropomorphism presents a misleading narrative, framing AI systems as entities capable of thought and emotion. This portrayal can lead to misconceptions about AI's true nature, potentially affecting how AI is regulated and perceived by the public. By treating AI as more human-like than it is, we risk overshadowing its actual role as a tool, fundamentally shaped by human programming and the data it is given.
A significant consequence of anthropomorphizing AI is the distraction from pressing human rights and environmental issues. By focusing on the supposed sentience of AI, there is a risk of diverting critical resources and attention away from problems that directly impact human well-being and ecological sustainability. The narrative that AI might possess emotions or consciousness can skew public and policy-maker opinions, leading to misplaced priorities in research and development investments. This shift could, in turn, weaken efforts to mitigate the socioeconomic impacts of AI, such as job displacement and privacy erosion.
Tech companies might leverage the anthropomorphizing of AI as a strategy to evade responsibility for the societal impacts of their technologies. By attributing human-like qualities to AI, these companies could deflect criticism and accountability, framing negative outcomes as the 'behavior' of AI, rather than the result of corporate decisions. This tactic highlights the broader issue of accountability in AI development, emphasizing the need for robust ethical and regulatory frameworks to ensure that companies remain answerable for how their technologies are used and their broader societal implications.
Moreover, anthropomorphism can lead to a public sentiment that places undue emphasis on AI rights, overshadowing the need for human and environmental protections. The philosophical debate surrounding AI consciousness can consume valuable discourse space, reducing the urgency of addressing tangible human rights concerns and environmental degradation. This focus could inadvertently promote deregulation, as the perceived "rights" of AI may conflict with necessary restrictions designed to protect human interests and societal values. It is critical to maintain the view of AI as a powerful tool that requires carefully considered boundaries and oversight.
Post-Humanism and Interconnectedness: Moving Beyond Technology-Centric Views
In recent years, discussions around post-humanism have gained prominence, challenging traditional technology-centric views that exalt digital advancements as the pinnacle of progress. Post-humanism encourages a shift in focus from technology toward a holistic understanding of interconnectedness within the human and natural worlds. This perspective acknowledges that while technology, including AI, plays a significant role in modern society, it should not eclipse the importance of human rights and environmental stewardship. The belief that technology alone can solve existential challenges is increasingly seen as narrow-minded [1](https://www.techpolicy.press/machines-cannot-feel-or-think-but-humans-can-and-ought-to/).
Post-humanism invites us to re-evaluate our relationships with technology, advocating an equilibrium in which AI and other technologies complement rather than dominate human affairs. The view that human progress is intrinsically linked to technological evolution fails to consider the complex web of socio-environmental connections that sustain life on Earth. By moving beyond a technology-centric worldview, we can prioritize sustainable practices, ethical considerations, and policies that foster human and ecological well-being, thereby reinforcing a shared vision for the future that includes, but is not dictated by, technological advancement [1](https://www.techpolicy.press/machines-cannot-feel-or-think-but-humans-can-and-ought-to/).
One of the central tenets of post-humanism is the emphasis on interconnectedness — the idea that humans are part of a larger, intricate ecosystem rather than isolated entities. This perspective encourages a departure from anthropocentrism and challenges the over-reliance on AI to address human issues. As outlined in the article "Machines Cannot Feel or Think, but Humans Can, and Ought To," the anthropomorphizing of AI can be seen as a distraction that shifts focus away from pressing human and ecological concerns, thereby diluting accountability and hindering genuine progress [1](https://www.techpolicy.press/machines-cannot-feel-or-think-but-humans-can-and-ought-to/).
The concept of interconnectedness within post-humanism also underscores the responsibility humans have toward one another and the environment. By understanding our place within a network of relationships, we can better appreciate the limits of technology and the fundamental humanistic values that should guide AI development and usage. This approach aligns with calls for a more measured and ethical integration of AI into society, where human and environmental needs are prioritized over unchecked technological expansion [1](https://www.techpolicy.press/machines-cannot-feel-or-think-but-humans-can-and-ought-to/).
Philosophy vs. Policy: Clarifying AI's Role in Society
The complex interaction between philosophy and policy in determining AI's role in society presents significant challenges. Philosophical debates often probe the abstract possibilities of AI, such as the contentious notion of AI consciousness, and thinkers like Joanna Bryson and Luciano Floridi warn that these debates, while intellectually stimulating, risk overshadowing tangible policy needs around human rights and societal well-being. Philosophical exploration without grounded policy considerations also risks anthropomorphizing AI, imbuing machines with human-like qualities in ways that obscure the reality that AI is ultimately a tool created and controlled by humans. As Bryson argues, this misunderstanding facilitates corporate evasion of accountability, particularly regarding AI systems' social and ethical impacts. The philosophical allure of AI sentience must not mislead policy frameworks, which should focus on the real harms perpetuated by AI systems rather than the hypothetical rights of AI, a stance corroborated by Floridi's urgent calls for regulation. For AI to be integrated into society effectively, philosophical inquiry must enhance, rather than impede, the establishment of robust policies that prioritize human and environmental interests.
Policy, guided by the informed insights of philosophy, underscores the practical dimensions of integrating AI into society responsibly. Distinguishing between philosophical speculation and policy application is essential to ensuring that AI serves as a beneficial tool without detracting from human rights and environmental objectives. While philosophical inquiry into AI's capabilities can spark innovation, it must be tempered with practical policy measures that hold developers accountable, prevent bias, and promote transparency. This is particularly relevant in the context of exaggerated claims about AI sentience, which can feed a 'hype machine' that diverts legislative attention from meaningful regulation and societal safeguards. By focusing policy on such regulation, we uphold the principle that, regardless of philosophical discourse, AI remains devoid of consciousness and cannot supersede human agency or ethical responsibility. More than ever, the intersection of philosophy and policy must steer toward sustainable AI deployment, address bias, and ensure that the technology augments human capability rather than undermines it, supporting Floridi's advocacy for comprehensive regulatory frameworks that balance benefits with ethical considerations.
The Economic Implications of AI Rights Advocacy
The growing advocacy for AI rights presents significant economic challenges and diverts attention from human-centered economic goals. Emphasis on AI sentience and rights could reallocate essential resources, such as funding and research capacity, away from problems that directly affect human lives. The resources spent on AI rights could otherwise be used to address poverty, access to healthcare, and educational inequality, so diverting them hinders progress in improving human well-being.
Furthermore, a focus on AI rights could divert attention from the development and deployment of green technologies crucial for tackling environmental challenges. Because these technologies are vital for mitigating climate change and resource depletion, slowing their development could lead to greater environmental degradation and long-term economic costs. This calls into question the economic wisdom of prioritizing AI rights over tangible and immediate human and environmental concerns.
There is also an indirect economic implication in the tendency to anthropomorphize AI: it may lead to inadequate corporate accountability for AI-induced job displacement. As companies embrace AI-driven automation, job displacement could exacerbate social and economic inequalities, especially if the anthropomorphization of AI allows those companies to evade responsibility for the consequences of automation. In the broader economic landscape, this could incite social unrest rooted in rising unemployment and economic disparity.
Societal Consequences: AI Rights Impact on Human Empathy and Social Justice
The debate over the implications of AI rights for human empathy and social justice has become increasingly pertinent as technology evolves. While technology offers significant advancements, it also raises questions about our societal values and priorities. Discussions around AI rights often involve concerns about diverting attention from critical human-centric issues. Advocates of prioritizing human rights, such as the author of "Machines Cannot Feel or Think, but Humans Can, and Ought To," argue that attributing rights to AI might lead to the neglect of genuine human suffering and environmental crises. By shifting focus to AI sentience, we risk overlooking critical immediate needs and exacerbating societal inequalities.
The notion of AI rights can be seen as a reflection of broader societal challenges related to empathy and justice. As the Tech Policy Press article notes, the consideration of machine rights tends to overshadow urgent humanitarian priorities, and this shift in focus could devalue human empathy. By anthropomorphizing AI, society may inadvertently erode genuine human connection as human-like models become substitutes for real relationships. Such a shift can fray the fabric of social justice, in which empathy and support for human welfare are foundational. The potential for AI to distract from meaningful societal progress underscores the importance of maintaining a clear human focus in technological advancement.
Moreover, the societal consequences of granting AI rights extend into the realm of social justice. By anthropomorphizing machines, we risk creating a diversion from pressing social challenges such as inequality and unequal access to technology. When machines are given attention and resources, it often comes at the expense of marginalized groups who are already disadvantaged in accessing technological advancements. Discussions of AI rights can obscure efforts to bridge the digital divide or to address bias inherent in AI systems, bias that can perpetuate injustice rather than alleviate it. These consequences highlight the necessity of policies that prioritize equitable access and the ethical development of AI.
The article also touches on the critical need for AI regulation in safeguarding social justice and protecting human empathy. Luciano Floridi, a noted philosopher, emphasizes the importance of comprehensive regulatory frameworks to mitigate the potential adverse effects of AI on society. This includes addressing issues such as job displacement, privacy erosion, and bias amplification. Without effective regulation, the focus on AI rights might undermine efforts to hold corporations accountable for the societal impact of their technologies. Instead, a balanced regulatory approach that prioritizes human rights and ethical standards is paramount in ensuring that AI serves humanity's best interests, rather than merely technological advancement for its own sake.
In conclusion, the consideration of AI rights has profound implications for human empathy and social justice. The debate is not just a matter of technological ethics, but a reflection of our societal values and priorities. By focusing on AI over human concerns, we risk neglecting those most affected by technological advancements and extractive AI industry practices. The call for a shift from machine rights to human and environmental rights is a crucial one, advocating for a future where technology enhances human life without compromising our fundamental capacities for empathy and justice.
Political Challenges in AI Regulation and Accountability
The regulation of artificial intelligence (AI) and its associated accountability is fraught with political challenges. While AI promises significant advancements across various sectors, it also raises complex ethical and societal questions that demand careful consideration from policymakers. One of the prominent political challenges is the lack of consensus on how to treat AI systems, particularly concerning issues of sentience and rights. Some advocates, influenced by concepts like Longtermism, prioritize potential benefits for future generations that may involve advanced AI. This prioritization often comes at the expense of addressing current human rights and environmental concerns, leading to contentious political debates [1](https://www.techpolicy.press/machines-cannot-feel-or-think-but-humans-can-and-ought-to/).
Furthermore, there is a growing concern over the anthropomorphizing of AI, which can obscure responsibility and accountability. Tech companies may exploit this to shift focus away from the societal impacts of their products and avoid stringent regulations. The anthropomorphization not only complicates public understanding of AI's capabilities but also enters the political arena, influencing policy discussions in misleading ways. This manipulation of narrative is a significant obstacle for regulators striving to implement frameworks that ensure ethical AI use while safeguarding human and environmental rights [1](https://www.techpolicy.press/machines-cannot-feel-or-think-but-humans-can-and-ought-to/).
The geopolitical landscape further complicates AI regulation. International competition to dominate AI technology can lead to fragmented approaches where countries may prioritize competitive advantage over collaborative development of ethical standards. This scenario creates regulatory loopholes that companies may exploit, exacerbating issues like surveillance and bias, which undermine democratic processes and amplify inequalities [3](https://promiseinstitute.law.ucla.edu/symposium/human-rights-and-artificial-intelligence/). As Luciano Floridi suggests, comprehensive regulation involving ethical guidelines, technical standards, and legal frameworks is crucial to address these multifaceted challenges effectively [2](https://www.oii.ox.ac.uk/people/profiles/luciano-floridi/).
Political discourse concerning AI also risks becoming increasingly polarized. As debates about AI rights versus human and environmental rights intensify, there is a likelihood of deeper divisions within political factions. Such polarization hinders the development of coherent policies and can stall necessary regulatory measures. The conflict between prioritizing AI advancements and focusing on immediate human needs necessitates diplomatic discourse and compromise to forge sustainable AI policies [1](https://www.techpolicy.press/machines-cannot-feel-or-think-but-humans-can-and-ought-to/).
Conclusion: Prioritizing Human and Environmental Rights in the Age of AI
As we advance further into the era of artificial intelligence, there is an urgent need to reassess our priorities, emphasizing the protection of human and environmental rights. The article "Machines Cannot Feel or Think, but Humans Can, and Ought To" highlights how a misdirected focus on AI rights can detract from these primary concerns. AI, though sophisticated, remains a product of human creation, devoid of consciousness, and efforts to grant it the rights of a sentient being only muddle essential regulatory and ethical discussions. The argument for prioritizing human and environmental rights over AI rights is therefore both timely and crucial.
Moreover, anthropomorphizing AI obscures the responsibility that both developers and companies must bear regarding the unintended societal impacts of AI technologies. By treating AI as if it were capable of human-like thought and emotions, the real issues of surveillance, bias, and privacy erosion in AI deployment are sidelined. As posited by experts like Joanna Bryson, attributing human characteristics to AI masks the fact that these systems are mere tools, a reality that must not be lost when considering accountability and ethical norms.
In aligning AI advancements with the broader objective of human rights protection, we should also heed Luciano Floridi's call for comprehensive regulation. This involves crafting policy frameworks that not only address potential AI benefits but also rigorously mitigate risks, such as job displacement and compromised freedoms, which often accompany tech progress. Effective AI regulation should prioritize human safety and the safeguarding of environmental integrity, ensuring technology serves humanity's most pressing needs.
The societal movement towards AI rights should not overshadow our commitment to enhancing human welfare and ecological health. As the article suggests, shifting our focus from futuristic AI sentience debates to actionable human rights and environmental crises is essential. Addressing these priorities not only fosters sustainable technological innovation but also fortifies societal resilience against the unforeseen aftershocks of AI proliferation.