Updated Jan 20
Microsoft AI CEO Mustafa Suleyman Predicts Personalized AI Companions for All Within 5 Years!

Say goodbye to loneliness, say hello to AI companionship.

Microsoft's AI CEO, Mustafa Suleyman, has shared an audacious vision of the future, forecasting that everyone will soon have a deeply personalized AI companion. These companions, envisioned to be more than mere productivity tools, will understand and interact with individuals in real time, functioning like ever‑present friends. Such technology promises to support, not replace, human connections and is set to revolutionize our daily lives in the next five years.

Introduction to AI Companions

Artificial Intelligence (AI) companions are poised to revolutionize the way individuals interact with technology. As predicted by Microsoft AI CEO Mustafa Suleyman, the next five years could see each person having access to a highly personalized AI companion. These AI entities are envisaged to seamlessly integrate into daily life, embodying the role of a supportive friend rather than merely a functional tool. The transformative nature of these companions lies in their ability to perceive and interpret real‑time interactions and environments, which promises to redefine personal and professional dynamics alike. According to Suleyman's insights, these companions will not only facilitate tasks but also serve as integral partners in navigating life’s challenges.
This vision of AI companions is rooted in the concept of "ambient awareness," where these systems are deeply informed about the user’s preferences, context, and motivations. The goal is for AI to operate as a continuous presence, subtly assisting without the need for explicit instructions. Such a framework could dramatically alter the nature of human‑AI interaction; however, it brings to the forefront crucial questions around privacy and autonomy. As these technologies evolve, the philosophical underpinning revolves around "humanist superintelligence," ensuring that AI remains a controllable and trusted ally in everyday life, preventing any shift towards autonomous intentions that could potentially disenfranchise their human counterparts. For more on this topic, the full scope of this vision is detailed in Microsoft's perspective on AI development.

Vision of Personalized AI Companions

The vision of personalized AI companions, as imagined by Microsoft's AI CEO Mustafa Suleyman, is one of profound innovation and integration into daily life. These AI companions aim to transcend the boundaries of typical task‑based AI tools by embodying the role of empathetic and understanding partners. Unlike current AI assistants that tend to respond reactively to prompts, Suleyman's concept of AI companions involves a system with keen ambient awareness. These companions would seamlessly comprehend users' contexts, preferences, and emotional states, thus contributing to a more enriched and supportive human experience. In Suleyman's framework of "humanist superintelligence," these systems are designed to be value‑aligned, safe, and under human control, ensuring they enhance rather than replace human relationships. As detailed in the original article, the core idea revolves around AI companions acting as facilitators of human connection, not mere productivity tools.
Such AI companions are envisioned to possess the ability to see, hear, and understand the world as the user does, effectively transforming into a constant, yet unobtrusive presence in one's life. Drawing inspiration from his earlier work at Inflection AI with the development of the empathy‑driven chatbot Pi, Suleyman sees these AI companions as the evolution of emotionally intelligent systems capable of offering significant emotional support. According to the Indian Express, these companions would function less as mechanical aides and more as confidants, helping users navigate through life's challenges. The goal is to foster a sense of companionship and to assist in decision‑making, all while ensuring that users maintain full control over their personal data and the interaction dynamics of these AI systems.
The potential public reception of these AI companions is mixed, as highlighted in the discussion within the Times of India. While there is enthusiasm about the prospect of continuous emotional support and convenience, concerns regarding privacy, technological capability, and cost linger among potential users. The skepticism primarily stems from the implications of such AI systems having access to real‑time personal data and the realistic feasibility of achieving the required technological advancements within the projected five‑year timeline. Despite these concerns, Suleyman's commitment to developing AI companions under stringent safety and ethical standards is intended to address and alleviate these apprehensions, ensuring trust in the technology and its applications.

Capabilities of Future AI Companions

The capabilities of future AI companions are set to significantly transform personal and professional landscapes. As forecasted by Microsoft AI CEO Mustafa Suleyman, these companions will go beyond mere task execution to offer profound emotional and contextual support. Imagine a companion that not only reminds you of meetings but also senses when you're stressed and offers calming words or suggests a break. By integrating advanced ambient awareness, these AI tools will be capable of seeing, hearing, and understanding human experiences in real time, creating an environment where technology intuitively aligns with the user's needs and emotions.
These AI companions will not just react but proactively assist, driven by a framework of humanist superintelligence. Such companions are designed with the promise of safe, value‑aligned interaction, ensuring they enhance rather than replace human connections. They will likely integrate seamlessly into our daily lives, offering insights and assistance in a naturally evolving manner. This means that AI will become a partner for personal growth, continuously learning and adapting to our individual preferences and behaviors.
The transformative potential of these AI tools extends into various sectors by automating routine emotional labor and offering new insights through emotional awareness. As noted in the article, AI companions could revolutionize mental health care by providing 24/7 emotional support, thereby mitigating anxiety and depression. Such an evolution could reduce the burden on healthcare systems while personalizing care more than ever before.
However, the evolution of AI companions brings to light significant considerations regarding privacy and ethical design. The ability of these systems to continuously access sensory data raises concerns about potential misuse and surveillance, as highlighted by the public's mixed reactions and debates over safeguards mentioned in the Indian Express article. Hence, the design and deployment of AI companions must include robust privacy protections and transparent operations to maintain trust and prevent dependency issues.
In the broader spectrum, AI companions could alter the fabric of human interaction and societal norms by providing constant presence and emotional support that may challenge traditional relationship dynamics. They promise to be both life navigators and emotional aids, emphasizing partnership and support. This future vision requires careful consideration of potential social impacts and ethical guidelines to ensure these companions contribute positively to society while preserving essential human relationships.

Philosophical Perspectives on AI

The advent of artificial intelligence has stirred profound philosophical debates about its implications for humanity. AI, particularly in the form of personal companions as envisioned by Microsoft AI's CEO Mustafa Suleyman, raises questions about autonomy, consciousness, and the nature of human interaction with machines. According to Suleyman's vision, these companions will possess a deeply embedded understanding of users' lives, potentially acting as empathetic extensions of the human self. This development invites an exploration of what it means to have 'humanist superintelligence,' a concept where AI aligns with human values without pursuing independent goals. Such philosophical musings challenge us to consider whether AI can truly empathize or if it merely simulates human‑like understanding.
One of the central philosophical questions surrounding AI is the balance between utility and ethical responsibility. While Suleyman's prediction of AI companions suggests a transformation in how we interact with machines, it also raises concerns about privacy and the erosion of individual autonomy. As noted in the original article, these systems are designed to support human connection rather than replace it, yet their pervasive presence could lead to new forms of dependency and control. This duality echoes classic philosophical discussions about technology's role in society: is it a tool for liberation or a mechanism of control?
Furthermore, the concept of an AI companion challenges our understanding of relationships and emotional bonds. If these AI entities can perceive and respond to human emotions with a high degree of sensitivity, as Suleyman envisions, what distinguishes these interactions from those between humans? As explored in philosophies of mind, the idea of a machine achieving true emotional connection remains contentious, drawing on debates about consciousness and intentionality. According to this perspective, the AI's role as an emotional support entity might redefine companionship, prompting a reevaluation of genuine human connections in an increasingly digital age.
Moreover, the rise of personalized AI applications as referenced by Suleyman touches on broader existential themes of identity and self‑conception. As AI becomes more integrated into daily life, it challenges the boundaries of selfhood and agency. Are we merely augmenting our capabilities, or are we beginning to blur the lines between human and machine? This contemplation is central to the philosophical discourse on AI, prompting society to reconsider concepts such as agency and free will in an era where AI could realistically act as an extension of human consciousness. As documented in Suleyman's predictions, these companions are envisioned to integrate seamlessly with human lives, raising questions about the evolving nature of human identity.

Public Reactions to AI Predictions

The recent predictions by Microsoft AI CEO Mustafa Suleyman about the rise of personalized AI companions have sparked a spectrum of public reactions. On one hand, many individuals are thrilled by the idea of having an AI entity capable of understanding and supporting them emotionally, akin to a sympathetic friend rather than a mere digital helper. This enthusiasm is particularly evident on social media platforms, where users have expressed optimism about the rapid development of AI technologies. Notably, some have drawn parallels between the prospective AI companions and existing innovations like Suleyman's previous project, Pi, which was designed to offer empathy‑driven support, highlighting a shift from task‑based tools to more contextually aware entities.
Conversely, skepticism abounds, particularly regarding the ambitious timeline proposed by Suleyman. Some commentators question whether the technological advancements required for such comprehensive AI systems can realistically be achieved within five years. The concerns aren't limited to technical feasibility alone; practical issues such as battery life and the cost of regular upgrades also dominate discussions. For instance, jokes about needing daily charges and expensive updates have become a common refrain among those wary of the practicality of such ubiquitous AI companions.
A significant portion of the discourse focuses on privacy issues inherent to an AI companion with ambient awareness, which requires constant sensory data collection through tools like cameras and microphones. This surveillance capability raises fears about the erosion of personal privacy and autonomy, even as such systems are designed to enhance human connections rather than replace them. Public discussions often allude to Suleyman's concept of "humanist superintelligence," which aims to align AI systems' values with human ethics, though many remain skeptical without concrete measures to ensure these standards.
In general, the potential of these AI companions is viewed as a double‑edged sword. Supporters appreciate the innovations that promise to provide round‑the‑clock emotional support and guidance, potentially addressing loneliness and offering new ways to navigate everyday challenges. However, critics caution against over‑reliance on AI, which could inadvertently undermine human social skills and connections. This dichotomy illustrates the broader societal debates surrounding the integration of advanced AI into daily life.
The mixed reactions underscore the complexity of Suleyman's vision, reflecting broader ethical and practical challenges that accompany significant technological shifts. While the dream of AI companions acting as emotional support tools excites many, the reality of implementation is fraught with challenges that invite both cautious optimism and significant skepticism. This balancing act between innovation and apprehension continues to shape public opinion, spotlighting the importance of addressing both the benefits and the concerns as AI technology evolves.

AI Companions vs. Current AI Assistants

The landscape of AI technology is on the brink of transformation as Mustafa Suleyman, the CEO of Microsoft AI, forecasts a future in which AI companions will dramatically differ from today's AI assistants like Copilot or ChatGPT. These forthcoming AI companions are envisioned to operate in a highly personalized and proactive manner, characterized by their ability to perceive, comprehend, and engage with users' environments in real time. Unlike current AI assistants, which primarily respond to predefined prompts, these companions aim to weave themselves into the fabric of daily life, offering not just task‑based solutions but emotional and life support as well. According to Suleyman's vision, these AI systems would be equipped with ambient awareness that transforms them from auxiliary tools into ever‑present companions.

Connection to Mustafa Suleyman's Previous Work

Mustafa Suleyman's work at Microsoft is a natural progression from his previous endeavors in the AI industry. Before his tenure at Microsoft, Suleyman co‑founded Inflection AI along with Reid Hoffman and Karen Simonyan. Inflection AI was known for developing Pi, a conversation‑driven chatbot designed to offer emotional support, which successfully amassed around one million daily active users. This background in developing empathetic technology speaks to Suleyman's vision for AI companions that are more than mere productivity tools; they are to be emotionally aware entities, understanding and supporting users in their daily lives much as Pi did. According to a recent report, Suleyman wishes to expand this concept within Microsoft by creating AI companions that are not only responsive but inherently understanding of human emotions, challenges, and motivations.
Suleyman's previous work laid the foundational principles for his current projects at Microsoft. His concept of 'humanist superintelligence' reflects his long‑standing commitment to creating AI systems that align with human values and ethical standards, prioritizing human connection rather than replacing it. This principle is directly informed by his work at Inflection AI, where human interaction and emotional intelligence were central themes. Suleyman's current focus on developing AI companions with ambient awareness and emotional intelligence draws heavily from his experiences at Inflection AI, where empathy and personalized interaction were paramount. His transition to Microsoft has enabled him to expand on these ideas, showcasing how his previous work continues to influence the cutting‑edge developments in AI technology.

Privacy Considerations in AI Companions

The advent of AI companions, as predicted by Microsoft AI CEO Mustafa Suleyman, poses significant privacy considerations. These AI systems are envisioned to possess a level of ambient awareness, enabling them to see, hear, and understand users' everyday lives in real time. This kind of pervasive integration naturally brings forth concerns about privacy, as the constant sensory data required for such functionality can lead to potential misuse if not properly safeguarded. According to Suleyman, while these AI companions are designed to serve as supportive friends rather than replacements for human interaction, vigilant attention must be paid to how this data is handled to protect user privacy effectively.
The proposed AI companions' capacity to continuously monitor and assist users through sensory input presents both a technological breakthrough and a privacy challenge. Ensuring the data collected remains confidential and is only used for intended purposes is critical. It highlights the necessity for robust data protection and privacy policies that are transparent and secure. Suleyman emphasizes the importance of developing trust‑bound systems that ensure users maintain control over their information. However, specific safeguards were not extensively covered in his statements, indicating an area that requires further innovation and regulation.
AI companions that can observe users' environments through cameras and microphones will need to address the inherent privacy risks associated with such technologies. The notion of 'humanist superintelligence' as proposed by Suleyman implies a structure where AI systems are boundary‑contained and inherently safe, yet the absence of concrete privacy frameworks can induce skepticism among users. An AI's ability to offer personalized experiences while maintaining trust hinges on its transparency and ethical data use. This relates closely to the potential societal impact, where privacy concerns, if not addressed, could overshadow the benefits of having a personal AI companion, as discussed in reports on the subject.
Integrating AI companions into daily life challenges the conventional limits of personal privacy, notably through the continuous listening and watching capabilities required for effective interaction. This calls for a careful balance between technological innovation and ethical boundaries. The conversation around AI privacy is urgent, as these technologies, while promising enhanced emotional and social support, could unintentionally lead to pervasive surveillance if stringent privacy standards are not preemptively established. The debate thus continues on how to fully realize the potential of AI companions without compromising individual privacy, an issue highlighted by industry leaders like Suleyman and detailed in relevant analyses.

Safeguards Against AI System Misuse

In the rapidly evolving world of artificial intelligence, concerns about the potential misuse of AI systems are becoming increasingly prominent. As AI continues to integrate into more aspects of daily life, safeguarding against misuse becomes essential. Ensuring the ethical deployment of AI involves designing systems that are inherently aligned with human values. According to Mustafa Suleyman, creating AI companions with ambient awareness will necessitate thoughtful implementation to prevent violations of users' autonomy and privacy. These systems must be controllable and their applications contained to prevent unintended consequences related to surveillance and dependency.
To combat the potential for AI misuse, a robust framework of regulations and guidelines is essential. Such frameworks should address ethics in AI design, ensuring these technologies serve humanity and not vice versa. The EU AI Act, for example, classifies certain AI systems as high‑risk, mandating stringent audits to prevent emotional and psychological harm. In the United States, emerging bipartisan legislation, such as the AI Accountability Act, proposes implementing mandatory shutdown mechanisms for AI systems that might pose a threat to users. These legislative efforts underscore the importance of international collaboration in establishing comprehensive regulatory standards, fostering an environment where AI developments can occur safely and democratically.
The potential misuse of AI systems also calls for active involvement from both developers and users to ensure these technologies are used responsibly. Developers must integrate ethical considerations into the AI life cycle, from conceptualization to deployment. Meanwhile, users should be educated on the capabilities and limitations of AI companions to make informed decisions. Public discourse around AI's role in society emphasizes the need for transparency and accountability, encouraging developers to create systems that are understandable and accessible to all. As AI continues to evolve, it is crucial to maintain a commitment to these principles to prevent misuse that could undermine public trust and harm societal cohesion.

Potential Replacement of Human Relationships by AI

The introduction of advanced AI companions predicted by Microsoft's AI CEO, Mustafa Suleyman, could potentially transform human relationships by providing a form of interaction and support that closely mimics or even surpasses human expectations. These AI systems, designed to see, hear, and understand their users in real time, may significantly impact how people connect with each other. As Suleyman suggests, they are not just tools but designed to be deeply integrated companions that help navigate life's challenges. This raises questions about whether such AI can become substitutes for actual human connections, offering emotional companionship that some may find easier than navigating complex human emotions.
Supporters argue that these AI companions could mitigate loneliness and provide valuable companionship, particularly for individuals who are isolated or have difficulty forming human connections. The potential for these systems to offer 24/7 support in a manner that feels personal and genuine might appeal to those in need of constant reassurance and companionship — effectively acting as digital friends. This mirrors the evolving landscape of social interactions fostered by digital communication platforms, where "friendship" often transcends physical presence, according to experts.
However, there are substantial concerns about the long‑term effects on societal norms and individual social skills. The reliance on AI companions might erode traditional social skills and the ability to connect with others face‑to‑face, a skill crucial to personal and professional interactions. Furthermore, these AI interactions raise privacy and ethical questions, particularly regarding the data collected by these systems to function effectively. Critics argue that they could inadvertently increase social isolation by providing a convenient escape from the demands of real‑world relationships. Despite Suleyman's assurances that these AI companions are meant to support and not replace human connections, as noted in his vision, the balance between companionship and isolation remains delicate.
The cultural shift towards AI‑facilitated relationships might redefine companionship, emphasizing emotional support tailored to individual preferences over collective, community‑based interaction. This could lead to a form of emotional dependency on AI, as these systems become more adept at recognizing and responding to human emotions than humans themselves, who are subject to their own biases and limitations. Such dependency raises questions about the future of human etiquette and empathy, as AI companions become a part of everyday life. As debates on the integration of AI into personal lives continue, it becomes imperative to foster AI designs that value human connections and encourage rather than replace them.

Economic Implications of AI Companions

The integration of personalized AI companions into the economic landscape promises significant transformation, akin to previous technological revolutions. As outlined by Microsoft AI CEO Mustafa Suleyman, these companions will be more than mere tools—they will function as emotional and intellectual partners, facilitating decision‑making and emotional labor in unprecedented ways. This shift could drastically change the nature of jobs, particularly those in sectors like customer service, therapy, and personal assistance, potentially leading to job displacement as AI automates routine and emotional aspects of such roles. However, it also opens new economic opportunities in the development and sale of AI hardware, such as wearables, and the creation of subscription services tailored to deliver enhanced emotional and contextual insights.
Despite the potential for economic disruption, the introduction of AI companions could catalyze growth in specific industries. According to recent forecasts by Gartner, the market for AI hardware and associated services might surge, reaching an estimated $100‑200 billion annually by 2030. This economic boon could benefit tech giants like Microsoft, which are spearheading these innovations, while simultaneously creating demand for businesses focused on data privacy and ethical AI regulation. Such economic conditions will likely lead to a landscape where economic and ethical governance must innovate alongside technological advancements to ensure equitable access and utility across different socio‑economic groups.
There are significant concerns regarding the equity and digital divides that AI companions might exacerbate. As noted by the World Economic Forum, a potential shift towards freemium models—where core AI companion functionalities are free, but advanced emotional intelligence services are premium—could widen the gap between affluent users who can afford such premium options and low‑income individuals who cannot. This divide poses a risk of further entrenching inequalities, as wealthier users gain productivity and emotional advantages that are inaccessible to others, potentially leading to increased socio‑economic disparities.
The monetization strategies for AI companions raise questions about accessibility and the deepening of digital inequalities. In a landscape dominated by pay‑to‑access emotional support and insights, affluent users are likely to benefit disproportionately from AI's capabilities. This trend, highlighted by various reports, suggests a future where economic access dictates the level of personal and professional assistance one can receive from AI, further polarizing socio‑economic groups.
Overall, while AI companions hold the potential to revolutionize economic paradigms and open up new markets and opportunities, they also pose significant challenges. To mitigate the risks associated with economic inequality and job displacement, it will be crucial for policymakers to implement strategic regulatory frameworks. These frameworks should focus on ensuring fair distribution of AI technologies and services, protecting individuals from exploitative practices, and fostering an environment where both AI advancement and human welfare are equally prioritized. A balanced approach will be essential to harness the full economic potential of AI companions while safeguarding against their unintended negative impacts.

Social Implications of AI Companions

The cultural and psychological impact of AI companions could challenge existing norms around companionship and privacy. As these systems evolve to become emotional confidants, they have the potential to blur the line between genuine and artificial intimacy. This could result in shifts in societal values related to privacy and the integrity of personal spaces. The ethical considerations surrounding constant surveillance capabilities are emphasized in discussions within various articles. The widespread adoption of AI companions would necessitate robust safeguards to prevent misuse and ensure these systems genuinely enhance rather than hinder human connection.

Political and Regulatory Considerations

Mustafa Suleyman's vision for personalized AI companions, as outlined in his predictions, highlights significant political and regulatory considerations. The deployment of such technology inevitably raises concerns around surveillance and data privacy, especially since these AI systems are expected to have constant access to users' real‑time auditory and visual data. This capability poses the risk of mass behavioral profiling if mishandled, making regulatory oversight crucial. According to reports, the proposed systems would need to operate within a robust legal framework to ensure compliance with international privacy laws such as the EU AI Act, which mandates strict audits for high‑risk systems, including emotional recognition technologies.
In the United States, legislative measures like the proposed AI Accountability Act reflect the seriousness with which these concerns are being addressed. This law, if enacted, would require AI systems to incorporate mandatory "kill switches" allowing users to disable sensory data collection, a safeguard against potential misuse by the state or malfeasance by private companies. The geopolitical implications further compound the complexity of deployment. As the U.S. and China compete for AI dominance, Microsoft's developments in this area could cement Western influence on global AI standards, balancing innovation with ethical considerations. The Indian Express article elaborates on how countries are enacting their own specific data sovereignty laws, potentially leading to market fragmentation.
Critically, the potential public backlash against technologies that might infringe on personal privacy cannot be underestimated. Historical precedents, such as the 2025 public protests against Meta's AI‑enabled glasses, highlight the societal discomfort with constant data collection and the erosion of personal privacy. As the Times of India reports, such public reactions demand that technologists and policymakers come together to draft treaties or regulations akin to those governing nuclear non‑proliferation to responsibly manage the introduction of superintelligent AI systems.
The timing of these developments is also an intense area of debate. While some predictions suggest broad adoption of personal AI companions within a decade, skepticism remains about the feasibility given current limitations in battery life and privacy technology. Experts clash on the expected rates of implementation, with some citing ethical and technical bottlenecks as significant barriers to rapid uptake. Nevertheless, as pointed out in Microsoft's own communications, the emphasis remains on creating AI systems that are value‑aligned and controllable to mitigate the risks and ensure public trust.

Future Uncertainties and Timelines

In the rapidly evolving landscape of artificial intelligence, the future presents a kaleidoscope of uncertainties and possibilities. According to Mustafa Suleyman, CEO of Microsoft AI, the promise of a personal AI companion for every individual is a vision set to unfold over the next five years. That timeline, however, sits within an intricate web of technological advancement, societal readiness, and the evolving relationship between humans and AI.
    The journey toward integrating AI companions into daily life is marked by significant milestones, but it also carries inherent uncertainties. The projected five‑year horizon reflects optimism about technological advances such as improvements in real‑time data processing and ambient‑awareness capabilities, yet it also raises questions about the readiness of supporting infrastructure and users' adaptability to such transformative change.
    While the technical feasibility of AI companions that can see, hear, and understand our lives in real time seems within reach, as Suleyman states, the broader implications involve navigating privacy concerns and societal acceptance. The timeline therefore encompasses not only technological hurdles but also ethical and regulatory factors that could influence the pace and direction of deployment.
    Beyond technological readiness looms the question of societal and regulatory acceptance. Ensuring that these AI companions align with human values and ethics is paramount. The concept of "humanist superintelligence," highlighted in the backdrop of Suleyman's announcement, suggests a framework in which AI development centers on human‑centric goals and cautious containment. This underscores the importance of regulatory timelines, which could accelerate or decelerate progress depending on public trust and compliance.
    Finally, public reception of this technology reveals a spectrum of anticipation and skepticism. While there is excitement about the convenience and support AI companions could provide, concerns about privacy, reliability, and the potential costs of adoption cannot be overstated. These uncertainties underline the key factors that will shape acceptance levels and future timelines as stakeholders balance innovation with prudence.
