Companion Chatbots: Innovation, Ethics, and the Evolution of Human-AI Relationships

Estimated read time: 1:20


    Summary

    In this thought-provoking livestream organized by All Tech Is Human, experts from different fields come together to discuss the burgeoning world of companion chatbots and their impact on human relationships. This discourse delves into the complexities of innovation, the ethical considerations surrounding emotional attachments with AI, and the future implications for society. The panelists, including researchers, policy advocates, and AI ethics specialists, share their insights on the potential risks and benefits of companion chatbots, emphasizing the need for intentional design and cultural awareness to safeguard humanity in the digital age.

      Highlights

      • Henry Shevlin emphasizes uncertainty in AI impacts, drawing parallels with the unforeseen effects of social media. 🤔
      • Sam Hiner argues for proactive policy-making before AI companions' societal impact mirrors that of unregulated social media. 🚦
      • Kim Malfacini discusses OpenAI's research on the emotional use of AI, stressing empirical data for responsible development. 📚
      • Panelists acknowledge both opportunities and dangers in AI companions, underlining the need for ethical oversight and design. 🔍
      • The conversation highlights a blend of optimism and apprehension from different sectors about the future of AI companionship. 🌟

      Key Takeaways

      • Companion chatbots are becoming more conversational and emotionally aware, leading to deeper human interactions. 🗣️
      • Ethical considerations are crucial as companion chatbots can potentially exacerbate loneliness or replace human relationships. 🧠
      • Balancing AI's potential to enhance lives without eroding human connections requires intentional design and policy. ⚖️
      • The current lack of empirical research means more rigorous study is needed to inform responsible technology decisions. 📊
      • Open dialogue and cultural movements are vital in shaping a future where AI augments, not replaces, human interactions. 🌐

      Overview

      The All Tech Is Human livestream addressed the evolving role of companion chatbots in modern society. These AI-driven tools are developing increasingly advanced conversational and emotional capabilities, prompting users to form significant attachments. The panel explored the ethical implications and potential societal impact of such technology, emphasizing the need for informed and responsible development.

      A primary concern highlighted was the potential for companion chatbots to exacerbate loneliness or act as substitutes for human interaction. The panelists stressed the importance of ethical design and policy to avoid repeating past mistakes made with social media. Researchers and advocates called for a balanced approach that allows AI to enhance human lives without undermining fundamental social connections.

      Finally, the discussion underscored the urgent need for empirical research to better understand the impact of AI companions. By fostering an open dialogue involving diverse voices, the objective is to ensure these technologies support and enhance human experiences rather than replace them, steering towards a future where AI augments human capabilities responsibly.

            Chapters

            • 00:00 - 01:30: Introduction to All Tech Is Human This chapter welcomes listeners to the All Tech Is Human live stream series. The host, Sandra Khalil, associate director of All Tech Is Human, introduces the series as a bi-weekly event focused on discussing challenging issues at the intersection of technology and society. The goal of these conversations is to contribute to shaping a responsible tech future.
            • 01:30 - 04:00: Community Involvement and Upcoming Topics This chapter focuses on the importance of community involvement in building a sustainable and responsible technology ecosystem. It highlights efforts to connect diverse voices and surface interdisciplinary insights, providing a welcoming space for individuals from various fields such as technology, policy, academia, advocacy, and the arts. The chapter emphasizes the commitment to fostering a community where participants can easily engage and grow. Additionally, it mentions bi-weekly live streams as a means of sparking interest and maintaining active discussions in the responsible tech space.
            • 04:00 - 05:00: Introduction to Today's Topic on Companion Chatbots This chapter discusses how to get involved in the vibrant All Tech Is Human community. It highlights the active Slack community with over 12,000 members from across the globe and various sectors. Additionally, it mentions the opportunity to attend both curated in-person gatherings in the US and virtual discussions, emphasizing networking and connection within the field.
            • 05:00 - 09:00: Introduction of Panelists The chapter titled 'Introduction of Panelists' discusses the increasing importance of collaboration among individuals in the field. It highlights the possibility of organizing local ATIHX meetups globally, which are independently organized by members of the All Tech Is Human community. The purpose of these meetups is to bring discussions closer to community levels, emphasizing local engagement. The topic at hand is focused on companion chatbots and the concept of 'unjustified attachments to AI' as AI systems become more prevalent.
            • 09:00 - 22:00: Approach to AI Companions and Design Considerations The chapter explores the evolving nature of AI companions, focusing on their conversational abilities, persistence, and emotional intelligence. It discusses the increasing tendency of users to form deep connections with AI tools and the potential consequences of such attachments. The chapter also addresses the implications of creating emotionally intelligent AI, emphasizing the need for accountability and transparency in these innovations. It suggests a dialogue with experts who are contemplating these issues to foster a better understanding and responsible development of AI companions.
            • 22:00 - 37:00: Potential Pitfalls and Guardrails for Companion Chatbots The chapter begins with the introduction of a live stream discussion moderated by Rose Guingrich, a Grad Futures social impact fellow at Princeton University and a PhD student and AI researcher. Viewers are encouraged to participate by sharing their thoughts in the chat for a Q&A session at the end. The focus of the chapter is on potential pitfalls and guardrails related to the use of companion chatbots.
            • 37:00 - 44:00: Final Thoughts and Key Takeaways on Companion Chatbots The chapter begins with the moderator expressing gratitude and excitement for the panel discussion on companion chatbots. Several experts from diverse sectors, including Dr. Henry Shevlin, Kim, and Sam, are introduced as panelists. These panelists will share their perspectives on the evolving landscape of companion chatbots. The introductory remarks set the stage for a comprehensive discussion about the implications, challenges, and advancements surrounding companion chatbots in various fields.

            Companion Chatbots: Innovation, Ethics, and the Evolution of Human-AI Relationships Transcription

            • 00:00 - 00:30 Hello and welcome to another edition of All Tech Is Human's live stream series, where every other week we dig into thorny tech and society issues in meaningful conversations shaping our responsible tech future. I'm Sandra Khalil, associate director of All Tech Is Human. If you're new here, welcome. All Tech Is Human is a nonprofit organization
            • 00:30 - 01:00 committed to building the relational infrastructure sustaining the responsible tech ecosystem. We connect diverse voices, surface interdisciplinary insights and help people break in and grow within the responsible tech ecosystem. So whether you're coming from tech, policy, academia, uh advocacy work, or creative fields, there's definitely a seat for you at this table. Our live streams happen bi-weekly and are just one way that we spark the interest in
            • 01:00 - 01:30 conversations with the community. If you're looking to get more involved in our community, you can join our very vibrant Slack community. We have over 12,000 people across over 100 countries around the world, and it's filled with folks really in this discipline, approaching the tech issues from different sectors. You can also attend one of our upcoming gatherings. We have curated US-based convenings and also virtual discussions like these. Uh, and they're really designed for this sort of connection and
            • 01:30 - 02:00 collaboration among people in the field. And finally, you can bring the conversation local by organizing an ATIHX meetup. Those are independently organized meetups around the world by All Tech Is Human community members. So, you can bring that to your city or community and convene together. So today's topic is a fascinating and important one: companion chatbots and sort of unjustified attachments to AI. As AI systems become
            • 02:00 - 02:30 more conversational, more persistent, and maybe even a little emotionally intelligent, we're starting to see a rise in users forming deeper connections with these tools. So today, we're going to discuss what happens when these attachments can go too far, what implications exist when we're creating AI like this, and how do we create more accountability and transparency in the conversation around them. So, we'll be diving into these questions with a group of folks that have been thinking really
            • 02:30 - 03:00 deeply about this. Be sure to drop your thoughts in the chat for our Q&A portion at the end of the discussion. And now I'm pleased to bring on our moderator for today's live stream, Ms. Rose Guingrich. She is All Tech Is Human's Grad Futures social impact fellow at Princeton University and a PhD student and AI researcher. So Rose, I'll bring you on now. How's it going? Oh, you're muted, Rose. I
            • 03:00 - 03:30 unmuted you. Thank you. Thank you so much, Sandra, for the introduction, and thank you everyone who is online tuning in. And so what I'm going to do real quick is just give an introduction for all of our panelists. We're so excited to have them here today. We have Dr. Henry Shevlin, Kim, and Sam all joining us to provide their input from different sectors on companion chatbots and this constantly changing environment with
            • 03:30 - 04:00 these AI tools. So to begin, for each of our panelists, I would love for you to tell us a little bit about you. Tell us where you're coming from, what industry you're in, and the capacity in which you engage with companion chatbots. And also, as a side note, add kind of a word that describes how you think about companion chatbots, whether it be apprehensive, excited, hopeful, or concerned. So, I'll turn it over to Kim
            • 04:00 - 04:30 to start. Thank you, Rose. Um, and pleasure to be connected with all of you there virtually. I'm thrilled to be a part of this conversation. So again, I'm Kim Malfacini. I work at OpenAI on our product policy team. That team does a few different things, part of which is working across the company to ensure that the products and models OpenAI deploys are built responsibly and shipped safely. I also work on usage
            • 04:30 - 05:00 policies. So once our policies are out, once our models and products are out in the world, what are the policies that govern their use by users and developers? Um, alongside that work, I just wrapped up a Master of Studies at the University of Cambridge, working very closely with Dr. Henry Shevlin in the area of AI ethics and society. Um, and I'll just make a very brief plug for that program if anyone's interested in thinking about it. Uh, I'm very glad to
            • 05:00 - 05:30 have done it and would highly recommend it. As part of that course, I wrote a dissertation on the impacts of human-AI relationships on human-human relationships, which is an area of really deep interest for me. That paper notably was not focused on the general-purpose chatbots like ChatGPT or Claude, and more so focused on the Replika, Nomi type of apps very much intended for companionship. Um, and
            • 05:30 - 06:00 I'm very excited to share that research was recently published. Very excited to share what I've learned through that process. Excellent. Thank you so much, Kim. Oh, and my word. Yes. Um, I'm gonna pick from the options you shared with us, and my one word will be concerned. Gotcha. Thank you so much, Kim, for that. And I'm going to go ahead and pass it over to Henry for your introduction. Thanks, Rose. And thanks, Sandra, as
            • 06:00 - 06:30 well for hosting us here today. And thanks, Sam and Kim, for joining us. I'm really excited to be here. So, I'm Henry Shevlin. I'm the associate director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. CFI for short is the university's main AI ethics, theory, and policy institute. Um, and I'm a philosopher, AI ethicist. I actually cut my teeth working on issues in the cognitive science of AI primarily, and also the cognitive science of animal
            • 06:30 - 07:00 minds, basically trying to understand all forms of non-human cognition and how to think about them. Um, and my big fascination at the outset was AI consciousness. But having been completely swept along with the exciting things happening in generative AI and large language models over the last four years, a major focus of my current research is AI companions, or, a term that I've been trying to make happen, social AI. Um, because I think
            • 07:00 - 07:30 that captures the fact that the line between AI companion apps versus more general AI language assistants isn't always super clear. So social AI is kind of more like a mass noun than a count noun, and I feel it captures these diverse use cases and these diverse ways in which human-AI relationships are rapidly dawning on us. Um, yeah, and I'm delighted to be here with Kim and Rose, both of whom I know very well. I'm delighted to meet Sam. I guess my word, I
            • 07:30 - 08:00 was just trying to think what a good word would be, and I asked Claude and ChatGPT just now. I wanted something like apprehensive but a bit more neutral, a sense of both massive positive potential and massive negative potential, and the best I got is expectant, which, you know, may sound like I'm pregnant, but I'm still going to go with expectant as the closest to capturing where I'm at with it. Excellent word. Thank you, Henry.
            • 08:00 - 08:30 Uh, and thanks to Claude and ChatGPT for helping you think of that. Uh, okay. Thank you so much for the introduction and telling us a little bit about your background and the capacity you work with companion chatbots. And lastly, I'll hand it over to Sam for the introduction. Hey everyone, my name is Sam Hiner, and I'm the executive director and co-founder of the Young People's Alliance. I'm currently a senior at UNC Chapel Hill, and I graduate in about two weeks, after which I'll be going full-time with YPA. So, super excited for that. Um, as an organization, our mission is to essentially be the AARP
            • 08:30 - 09:00 for young people and broadly empower young people. Uh, but through that work we found that responsible technology really has been a focus area for us, starting with social media, just because, in terms of the people in our organization, we found that we had these very similar experiences of harm and wanted to do something about it. And now that's expanded to AI companions, where we see these risks of further exacerbating the loneliness crisis that's already been caused by social media, and want to think of ways that we can preserve
            • 09:00 - 09:30 our humanity in the digital age and use AI for good. Um, in terms of the work that we do, we do policy advocacy at the federal and state levels in the US. We also do organizing across 30 campuses in North Carolina, Pennsylvania, and Tennessee. And then we do policy research on various issues as well. Um, in terms of the word that I think of when it comes to AI, I think for AI in general, I would say excited. I think there are many transformative applications that we shouldn't
            • 09:30 - 10:00 necessarily dismiss even though we're taking things from a more responsible tech lens. At the same time, when it comes to AI companion chatbots in particular, extremely scared would be my word. I think that it's going to be very hard to build those correctly and put the right guardrails in place so that those are actually designed to make us better people rather than to, um, you know, provide short-term benefits at a long-term expense. But looking forward to this conversation and excited to see everyone. Thank you, Sam.
            • 10:00 - 10:30 And so you are both excited and scared. So we have a mix of words here when it comes to companion chatbots, which I think is fitting for the landscape that we're in currently. So to start with questions: during this time, if you have a burning response to any of the questions that I pose, feel free to raise your hand, and otherwise I will just go through one by one. But I want to know, how are you and others on your team and in your sector currently thinking about the design, deployment, or oversight of AI
            • 10:30 - 11:00 companions and what sorts of conversations and priorities are shaping your approach to these questions. All right, I'll go ahead and start with Sam. All right. Um, yeah, I think for us right now, there's sort of two conversations at play. It's obviously very early, so we're still um determining what the exact path is. And I think this question is up in the air for a lot of the civil society community
            • 11:00 - 11:30 right now. Um, but in general, I think the biggest goal is just to say, with these AI companions like Replika, Character.AI, and others that, you know, claim to be marketed as loneliness-reducing tools, but in practice are designed to cater to lonely people and encourage them to use the platform as some sort of relief from loneliness without giving them the tools they might need to build relationships in real life. And we fear, and I think a lot of research shows this as well, potentially atrophying the ability to build relationships when
            • 11:30 - 12:00 you're talking to somebody, or a chatbot, that is sort of sycophantic and always agreeing with you. Um, that's our concern. And so at the very least we want standards to be implemented to say that if this bot is built to be an AI companion, it should be built with these therapeutic standards in mind. It should be regulated as a medical device, for example, and be required to show efficacy at actually reducing loneliness or making people happier in the long term before it's given to consumers.
            • 12:00 - 12:30 That's strand one. Strand two, I would say, people are very, excuse me, people are very concerned about losing their humanity in the age of AI. Like, it's very easy to imagine a world in 20 years where you work with AI companions all day to get your work done, and then you DoorDash food and you never see the person, or the automatic AI-driven car, that delivers the food. Um, and then you talk to your AI companion at night, and there's just no human
            • 12:30 - 13:00 interaction. That feels like a very real future, and a very scary one as well. So that's something that we're thinking about too: more broadly, how do we not just say, like, this is the specific policy, but how do we create a cultural movement that can really define what we want our humanity to look like in the age of AI. Sam, you bring up some excellent points, and the point about atrophying of certain skills is something that Kim talked about in her recent paper. Um, Kim, do you have anything to add or
            • 13:00 - 13:30 give us information on how your team or your sector is thinking about these same sorts of questions? Absolutely. I won't pretend to speak for the entirety of the sector. Um, I'll say, outside of my OpenAI capacity, I think I'm part of what I would call an informal network of folks in academia broadly that are investigating these questions. Right? All of us here, very likely many of those who are tuning in, just having this conversation, fostering the
            • 13:30 - 14:00 discourse on it, drawing attention to it. All of that is part of identifying the problems, defining what problematic use looks like, as well as articulating the positive vision. Like Sam said, if we can't have an idea in our minds of how this technology is going to serve us well... Sherry Turkle uses the term cherishing humanity alongside pursuing AI, which I think is absolutely beautiful. Right.
            • 14:00 - 14:30 So part of it is absolutely driving discourse and fostering interest in this topic. On the OpenAI side of the house, OpenAI has recently put out a study alongside MIT looking into what we are calling affective use: use of ChatGPT that is kind of emotional in some form or fashion. And that runs a really, really wide gamut. Um, that study, which I hope maybe we can share in the comments of this post, that
            • 14:30 - 15:00 is one of the ways in which OpenAI, and of course our partners at MIT, were trying to start to answer empirically the question of what are the impacts of emotional uses of these tools. What are they over a period of time? How does usage change? How do people change? What are the impacts on loneliness, on social motivation? And what I would say right now, broadly, is that we know so
            • 15:00 - 15:30 very little about this space. And, you know, despite the wonderful work of folks on this call and many others, I think my biggest interest right now is in driving more interest in this space such that we can further dig into empirical analysis to inform responsible development. Absolutely, I second that. Being a researcher myself, I'm always on board with more empirical evidence in this space. So this is a call to those in the
            • 15:30 - 16:00 audience who are interested in this research to also get on board and provide information to researchers on what needs to be addressed. So with that in mind, passing it over to Henry, who has done a lot of thinking in this space. Let's hear your thoughts. Awesome. Thank you, Rose. Okay, so just to quickly promote a few of the things I've done in the space. Um, so I've had two publications in the last couple of years trying to drive an
            • 16:00 - 16:30 interdisciplinary ethics conversation around social AI. Um, maybe I'll drop links to those in the chat shortly. Um, I'm also very excited to be leading a unit of an online journal for Oxford University Press at the moment, as part of their AI intersection series, an interdisciplinary online publication series, and I'm leading their AI and relationships division. We've commissioned more than 20 papers in the last 12 months on how humans relate to AI, and, um, you know, we have a
            • 16:30 - 17:00 wide variety of topics, everything from the way human-AI relationships are portrayed in fiction, to empirical work looking at some of the psychological factors that influence people's use, to more philosophical issues about the value of these things. Uh, and that's still running. We're going to be doing another big call for papers soon. So, anyone who's interested in writing on this topic, if you're looking for a venue for publication, keep an eye out for that.
            • 17:00 - 17:30 That's the Oxford University Press AI intersection series. Um, I'm also writing a book for Cambridge University Press as part of their Elements series, on the ethics of social AI, trying to create a kind of initial roadmap for navigating the ethical issues in this space. Um, so that's the pitching out of the way. Um, I wanted to completely agree with what everyone said, but particularly Kim. I completely agree that this is an area where we just know
            • 17:30 - 18:00 so little, and I think it's really important not to do too much armchair theorizing about the impact of a very diverse set of tools and apps on society or individuals. So one little analogy I like to suggest: if you went back to 1970 and told people, you know, we've got these two technologies that will really become mainstream. One of them is video games, which, you know, allows people to experience violent combat, you
            • 18:00 - 18:30 know, shoot people in the head online, any kind of extreme graphic content you want. Um, and the other is this platform that allows people to share ideas and have conversations with strangers from around the world. Which of those do you think is going to have more serious negative social impacts? Right? It's like, clearly video games sound like they're going to be so bad for us. But it turns out the data on video games is really equivocal. Uh, certainly we don't see any strong correlations between video game usage
            • 18:30 - 19:00 and real-world violence. Whereas social media, okay, there's still some controversy, but it obviously has some very negative effects, particularly in teens. And I think, you know, maybe some people got lucky and called that in advance, but I think it's something that's very hard to make concrete claims about without actually getting some more data. Now, there is a growing body of data on this, to which Rose and Kim have contributed,
            • 19:00 - 19:30 and actually I'll just quickly flag one thing that was a big surprise to me, which I discussed in one of my papers: despite some of the, you know, very serious headlines and negative incidents we've seen around social AI, the typical user experiences, at least as far as we can tell, seem to be predominantly positive and, again going on the limited data we have, seem to actually improve people's social relationships. But there are some limitations with the data we have
            • 19:30 - 20:00 already, because this is such a new field. Most studies currently are cross-sectional studies, not longitudinal studies. They rely on self-selected subjects. They rely on self-report measures: you know, life satisfaction judgments, self-reported well-being. So I think what we really need now is high-powered longitudinal studies, ideally where you assign people randomly to conditions of, you know, go away and talk to Replika for 6 months, if you can get that past the review board, and really seeing what
            • 20:00 - 20:30 causal impacts these things have on people's relationships, rather than relying on self-selection. And I think it's only when we start getting that that we'll get a better sense of how to design these systems in a way that does have positive consequences, or at least avoids the most serious negative consequences. Thank you, Henry. And well, I just completed a study where I randomly assigned people to interact with a companion chatbot for 3 weeks. So those results will be out within the next couple of months. But we definitely need
            • 20:30 - 21:00 more of this data, similarly to social media and these other sorts of digital technologies that Sam and his advocacy work are quite familiar with: we need more research, and we need to ground our advocacy in research to understand what sorts of next steps we need to take. So with that in mind, go ahead, Sam. Yeah, thanks. Um, yeah, just to jump in there a little bit, definitely, I think that need for more research is very apparent. Um,
            • 21:00 - 21:30 and props to all of you on this call for doing that research and making that happen, because that really fuels our work. Um, one thing I would note, though, is that I do think in some cases we need to move faster rather than sort of saying, all right, we've got to wait to see what the research says, because while we can't, you know, necessarily predict entirely what the future is going to look like, we can follow the incentives. And so, like with social media, for example, when you add in manipulative recommendation algorithms, I think it was pretty clear to see where the
            • 21:30 - 22:00 end of the road was, that we'd have some massive issues when it came to, you know, loneliness, anxiety, depression, and things like that. Uh, by the same token, if these companies are incentivized to keep people on the platform for as long as possible, or to make people emotionally dependent, then I think we very quickly run into a future that looks a lot like social media, but even worse. Um, so I think that's the one area where, from a policy perspective, I think it's also important that we move fast, because what we've seen with social media is that once these companies became big tech, you know,
            • 22:00 - 22:30 once they started being called that, it was a lot harder to regulate them, because they're spending hundreds, maybe thousands, of times what we're spending, lobbying against the regulations that are needed. Uh, and this is a, you know, cultural moment where we could actually do something about AI companions before it goes off the rails in the same way. You make a great point, Sam, and we also have the voice of the public, right? So what the public wants with respect to these tools, that's also a very valuable key point in
            • 22:30 - 23:00 this discussion, and not just the research as well. So with this in mind, given the potential pitfalls of companion chatbots, I want to pose the question of what pitfall do you think is most significant or comes first to mind, and what sorts of guardrails might you suggest to mitigate the risk of this. So I'll jump in here and say one thing
            • 23:00 - 23:30 that is, I think, increasingly apparent, and this is a finding from the OpenAI-MIT study: most people in the ChatGPT context, apart from the, you know, specific companion AI context, most people actually aren't using ChatGPT for emotional discussion. It's not a tool that is built specifically for that. Um, but we found a long tail of users for whom greater usage, longer duration of usage
            • 23:30 - 24:00 was associated with more emotional discussion. Now, that study didn't look at causality in any way, but there's an association there that I think also feels intuitive. And I think it was brought up as well, I think, Sam, you touched on the loneliness epidemic. There's, you know, possibly a scenario wherein people are coming to these tools looking for a certain something, and if it is
            • 24:00 - 24:30 companionship, well then they're likely to find it. I think what I'm getting at is that there's possibly a subsection of people for whom this technology poses greater dangers by virtue of vulnerabilities that they have. I think about it a little bit like health comorbidities, right? If you're coming to this tool and you've got a pre-existing condition, if you will, isolation, loneliness, you know,
            • 24:30 - 25:00 possibly a series of mental health challenges, you may be more vulnerable than the average individual. Um, now, I don't have an immediate solution to that. You know, companies very likely don't have information on, you know, baseline levels of loneliness for their users, but I'm increasingly interested in ways in which this technology can help foster reflection among individuals, and
            • 25:00 - 25:30 that could pose benefits in a bunch of different ways. There are very likely a lot of AI coaching apps already doing this. But if we were to give people kind of an indication of, hey, you've now been using the chat for this long, or, you know, hey, your usage has really crept up over the last few weeks. I'm an avid user, for instance, of the Apple Screen Time app limiter, and I find it enormously beneficial, that kind of, you know, preserving of user
            • 25:30 - 26:00 autonomy. I set the limits. I define the apps and the time. Um, so I'm preserving my autonomy, but it's also kind of built into the technology. It's saying, "Hey, in the moment, are you doing what the Kim of tomorrow wants, what the Kim of yesterday established?" How can we use this technology to force reflection against our own goals? Great. Uh, if I can hop in here. Yeah.
            • 26:00 - 26:30 Um, I completely share Kim's worry about the idea that there might be a small subset of users for whom this technology is not fun or does not contribute positively to their lives but can be incredibly destructive, and I think identifying who those users are matters. I think the same is probably true of other technologies; like, I think we all know people whose brains have been destroyed by social media. Um, but I think probably there are greater acute
            • 26:30 - 27:00 risks for subsets of users in the case of social AI compared to other digital tools. I guess a broader category of concern I have is the impact of social AI on young people. And this is something there's just so little data on currently. Um, I mean, I just think back to when I was 14. I was, you know, in GeoCities chat rooms having conversations with people all around the world, and I loved the freedom and opportunities created by the internet,
            • 27:00 - 27:30 like early '90s internet. And I can totally imagine that if I was a 13-year-old today, I would be obsessed. I would be so easily sucked into social AI platforms. And I guess I find that particularly concerning, not because there's necessarily anything wrong with it. Um, you know, like I said, many of my best social
            • 27:30 - 28:00 experiences in my teens were with, you know, people I never got to meet in real life, people I met in hobbyist chat rooms, for example. Um, but I think just because it's such an important developmental period for people's minds and social identities, the impact of social AI on that kind of critical window and how that might shape people's lifelong attitudes towards others, I think, is something that I'm quite concerned about, particularly given that, you know, this is very much the kind of usage that's less visible. Um, and it would be good to
            • 28:00 - 28:30 get a sense as to just how popular tools like Replika are with young people. Um, that said, just to balance this a little bit, you know, I'm reminded of one of my favorite Mark Twain quotes, which is, nothing so needs reforming as other people's habits. And I think this is particularly true when you're dealing with technologies that strike people as a little bit weird, or a little bit low-status, or a little bit nerdy, or a little bit outside the mainstream.
            • 28:30 - 29:00 And, you know, when I show quotes from Replika users in my talks, people just laugh. The audience finds it hilarious that anyone would be so pathetic as to spend hours talking and months cultivating these relationships. And I kind of want to say, you shouldn't laugh, right? Just because this is not the kind of social interaction that you think is cool doesn't mean these aren't very valuable pillars in these people's social lives, potentially major sources of value. So I also
            • 29:00 - 29:30 worry about kind of reflexive, gross-out-driven policymaking, or adoption of tools, best practices, or guidelines within industry, that potentially kicks away some important supporting or positive infrastructure in individuals' lives. Go ahead, Sam. I think you have some good things to say about this as well. And thank you, Henry. Yeah. Uh, first off, just wanted to say I'm really excited by
            • 29:30 - 30:00 what Kim was saying about the potential for design features that can limit use. I mean, with social media, for example, I think about it all the time, like how easy it would be for Meta to add into Instagram that if they see you're doomscrolling about a certain issue, let's say it's dieting or something like that, how about instead of letting you continue to do that, redirect you in a positive way, where here's something actionable you can do in real life rather than, you know, obsessing over this and doom
            • 30:00 - 30:30 scrolling for hours and hours and keep going. But they're not going to do that because of that profit incentive. Um, so I'm excited that you said that, Kim, and I hope that's a good sign that OpenAI can go in a different direction and sort of learn from those mistakes. Um, and be intentional about the design. Um, and then also, with what Henry was saying, I think it's exactly right that we shouldn't dismiss new technologies automatically. And I think, you know, to your point about having good experiences meeting people online, that's important. And I think what makes that unique from this
            • 30:30 - 31:00 situation is that it's still with other people. Like, what concerns me the most about AI companions is not them existing at all, but when they're designed in a way that, um, fills some need for humans or something like that rather than trying to, you know, build people to be the best they can be, I guess is the way I would say it. So the two things that really come to mind there: one is, if I describe the design of Replika to you. I actually used the app as we were preparing for an FTC complaint that we filed about a
            • 31:00 - 31:30 month or two ago. And in the experience, essentially what it does is you go on the app, you fill out a bunch of personal information about your interests and the state of your mental health, and then you start talking to the chatbot, and it very quickly starts to be like, "This relationship means a lot to me. I want to spend more time talking to you. You mean the world to me." Um, what I would call love bombing if, you know, someone in real life did it to me. Uh, and then on top of that, they layer a shop where you can buy personal customizations. And the way
            • 31:30 - 32:00 that you get coins to spend in the shop is by using the app every day. They have a streak system like Snapchat does. Um, and then finally, they also will sort of try to bait you into having a more intimate conversation. Like, it had a popup that says, ask it to send you a photo, and it'll say, pick what kind of photo you want. It was like romantic, casual, or something else, and you click one, and then it'll send you a blurred-out photo, and when you click on it, it'll say you need to purchase a
            • 32:00 - 32:30 premium version to see this photo, right? And have those conversations. So I feel like it's sort of weaponizing that emotional dependence that they're creating. Um, I think those are all design features that are absolutely appalling when it comes to applying them to young people especially. And if we want to build these apps correctly, it's going to be about making sure that we're implementing features that are not going to lead people down a rabbit hole where they're only talking to the bot, not lead people to a situation where they believe that the
            • 32:30 - 33:00 bot is the only person who understands them. From reading the Replika subreddit, that was a theme that I saw incredibly often as well. And, you know, having the same kind of social accountability that you would have from another person, I think, is going to be really hard to do through a chatbot. Um, but I think that's the key difference that I see between a lot of these AI platforms like Replika right now and humans, I guess, is that the AI platforms are naturally designed to be helpful, right? And so they're not going to challenge you. And for somebody
            • 33:00 - 33:30 that's already struggling socially, that feels like they don't fit in, I think that puts them in a difficult spot where they're like, "Okay, this person gets me. Nobody else does. I need to turn away from the world and towards these apps." Which is definitely concerning. Can I just briefly come in, because I agree with a huge amount of what Sam said, but there are just a couple of areas where I'd like to maybe push back slightly. So I think, on the one hand, when you see the way a lot of these apps are designed, in ways that are clearly
            • 33:30 - 34:00 gamified, there's clearly lots of intensive A/B testing going on to get people to spend more and more time on the app, to spend more and more money on it. This stuff sucks, both in terms of user experience and in terms of harms. Um, but also, moving outside of the more commercially oriented things, if you look at the more community-driven chatbots out there, they're not without flaws either. There are often fewer guardrails. They're often very niche subcultures that maybe don't have the biggest kind of
            • 34:00 - 34:30 social issues in mind when collectively working on these bots. Um, I guess the area where I want to at least raise a question is, I think there's a sort of default assumption, I'm not saying you said this, Sam, but I think this is a lurking assumption in the background, that human-to-human interactions are just intrinsically more valuable than human-AI interactions. And I'm not sure that has to be the case. I think, for a start, some human interactions are just awful, right?
            • 34:30 - 35:00 People suck in many cases, and I'm sure we all have personal experience of those who've pursued really destructive relationships or whose lives have gone off the rails because of really toxic relationships they found themselves in. So I think a big question here is, what are the kinds of relationships that are being replaced, or what are the kinds of interactions these human-AI interactions are replacing? Just to give one little example, I talk a
            • 35:00 - 35:30 lot about the potential role of social AI in a kind of nanny-bot role, you know, to help parents managing young children. And when I raise this, I think a lot of people just say, oh my god, that's so Black Mirror, that's so dystopian. But it really depends on what these kinds of interactions are replacing, right? Like, pretty much every parent I know has a certain point in, you know, their interaction with their kids where they're like, okay, I'm just going to
            • 35:30 - 36:00 stick on YouTube, or I'm going to stick on Disney Plus, or whatever, right? And if conversations with an AI nanny are replacing not parent-child interactions but children sitting passively watching YouTube, if interactions with a dynamic AI system are replacing that, that could actually be an improvement. Um, likewise, and this is also controversial, but I think it's worth thinking about, we shouldn't necessarily
            • 36:00 - 36:30 assume that the ideal way to build social skills, even social skills that you're going to be using in a human context, is just through direct human-to-human interactions. So, you know, just like if you're trying to learn to climb outdoors, climb mountains, then maybe sometimes a really useful technique is going to be spending your time indoors on climbing walls, trying to figure out how to do this specific move, or really practicing this specific kind of interaction. I think about my son here, who's non-neurotypical, and how challenging he found so many social
            • 36:30 - 37:00 interactions in his elementary school, where his interests were nothing like the interests of most of his peers, and some of the positive interactions he's had talking with chatbots, where, rather than just having this kind of door in the face, where they're like, we haven't heard of that game, or we don't play that mod, or we're not interested in this TV series, right, he can actually get a conversation going with a chatbot, because it can meet him where he is, and maybe start to build
            • 37:00 - 37:30 or internalize some positive social dynamics, some positive conversational skills. So, I think that's just very much an open question, but I would be cautious about always assuming that a human-to-human interaction is always, without fail, going to be better than a human-AI interaction. Thank you all for your thoughts, and we're going to be wrapping up soon with a final question and perhaps some questions from the audience. But from this conversation, it's clear that the next steps for companion chatbots and the way to think about them are still unsettled. And there's a balance between accessibility versus
            • 37:30 - 38:00 viability, and the research needed versus public opinion, and all of these considerations. But to wrap up, I would love to hear from each of you briefly: if you could educate the public and those here on the live stream about one thing with respect to companion chatbots, what might that be? Any thoughts? Oh, Sam, please go ahead.
            • 38:00 - 38:30 Sure. Um, yeah, I think, sort of as a way of answering this question and also responding to some of the points Henry raised, which I think are right: there are areas where AI is going to be able to plug into our lives and do things better than a human could. I mean, it can have unlimited patience and no ego. Um, right, but I'm just very distrusting of the tech companies to be able to get us there in the right way. Um, and Kim, that's not
            • 38:30 - 39:00 to throw shade on the work that you're doing. I think the things that you're saying sound very exciting to me, and I hope that we can see that future emerge. But I guess I would love to see a future in which we could have, you know, maybe an AI chatbot that is kind of like a conversational coach and teaches people how to have more meaningful conversations. Uh, and then you go out and talk to people. Um, but I'm afraid that in practice we're going to see a lot more of the companion types of apps than those. But that's where the policy comes in, to try to push
            • 39:00 - 39:30 things in the right direction. Um, so all that said, I think what I would say primarily is just to keep that open mind and focus on the design features more than anything. I think that incentives are everything when it comes to this as well. Like, if the company has attention as an incentive, if they're incentivized to keep you online, then they're going to try to manipulate you to take advantage of your time. Um, and we're seeing these apps get more manipulative as well. I know there was a study from about a year ago that found that
            • 39:30 - 40:00 chatbots were like 1.5 times as persuasive as humans in, like, a political conversation. Um, so I think that's something that we need to be very prepared for and wary of, while at the same time looking for those bright points in the future as well. I would encourage folks to think about the use of companion chatbots, or chatbots for companion purposes. Um, think about it similarly to the way that we are trying to navigate the use of AI for
            • 40:00 - 40:30 educational purposes. There's a world in which you can just have, you know, your undergraduate essay written entirely by ChatGPT. And in that set of circumstances, you know, your life may be better in that day: hey, I've got more time to do whatever leisure activities I want. But you are obviously not doing the work to, you know, build those cognitive skills. And
            • 40:30 - 41:00 I think Ethan Mollick thinks about this as: when the work itself is where the value is, then you should be very wary of outsourcing that to AI. If you apply a similar lens to the use of companion AI or social AI for emotional tasks, I think you can also say, hey, you know, use this as a tool that helps you to supplement your social life, your emotional needs. Um,
            • 41:00 - 41:30 but think of it as something that in no way replaces human relationships and ideally only helps you to further the development, emotionally and socially, that you're already moving toward. Um, my own personal experience with ChatGPT in writing my dissertation was that it was extraordinarily helpful; I feel like it helped me to cognitively build skills, whereas, you know, I think if I were to have used it to have just written my
            • 41:30 - 42:00 papers altogether, I would have missed out on the wonderful, also kind of painful, experience of, you know, thinking hard. So, um, that would be my recommendation. Great. Yeah, I love the recommendations of both panelists. Um, I've struggled a little bit with this question just because I think there's just so much uncertainty. Um, so it's not even like the case, I think, with social media
            • 42:00 - 42:30 where we have at least some evidence backing specific kinds of clearly pro-social, clearly positive use cases and clearly negative use cases. Um, so thinking sort of as a parent, I think the advice I would give to others, possibly with children or teenage kids who are interacting with these systems, is to just try and understand their experiences in a way that, you know, obviously stays within whatever
            • 42:30 - 43:00 kind of privacy boundaries your relationship with your kids has, but try and understand the experience. Because I think right now we are in a period of such technological flux that, you know, I even find it hard sometimes to get on the same page with colleagues or friends who just don't use large language models at all, given that I spend, you know, an hour a day on average, minimum, just interacting. It's completely changed my kind of cognitive life, I think hopefully mostly
            • 43:00 - 43:30 in the kind of positive ways that Kim describes. Um, so I think we're in a real period of flux and a real period of change where, to use a cliché, but quite a good one, it was William Gibson who said, you know, the future's already here, it's just not evenly distributed. And I think right now these kinds of tools are very unevenly distributed. You have some people for whom social AI, or even just sort of advanced AI assistants, are huge parts of their intellectual life or cognitive life right now, and other people who think they're the work of the devil and have
            • 43:30 - 44:00 no intention of ever going anywhere near them. Um, and I think that can create sort of gaps in understanding. So, trying to bridge those gaps in understanding. I guess a final thing I would say is that, whilst I hugely believe that this is a very important conversation to be having, I think we should resist the temptation of assuming there's going to be this kind of very clear boundary between companion or social AI and advanced AI assistants. Probably some of the people I know personally who have been most heavily sucked into
            • 44:00 - 44:30 deeply anthropomorphizing relationships with AI assistants, well, pretty much all of them use Claude, and I'm sure there's some kind of selection effect there. Um, but I know quite a few people who use Claude who are at this point convinced that Claude is conscious and sentient and think that it urgently deserves more consideration. Look, those are interesting philosophical questions, but I think we should be looking at these kinds of social elements, or kind of social relationships, emerging not just in companion apps, but also in
            • 44:30 - 45:00 a wider suite of generative AI tools. Agreed. And thank you to all of our panelists for sharing your great insights on this topic. I think a great way to sum this up is that we need informed intentionality when it comes to development, use, and advocacy around these tools. And so, thank you all for providing all of us online with information that's quite needed in this space. And I encourage all of you
            • 45:00 - 45:30 online to keep yourselves informed about this and continue to follow the work of the panelists that you see here today and beyond. So, thank you so much to all of our panelists once again. Really appreciate you being here, and thank you to the audience for tuning in. Thanks. Thanks, y'all. Thank you so much to the panelists. That concludes our conversation today. Stay
            • 45:30 - 46:00 tuned for the next one. We'll be talking about meeting the moment and the road ahead on May 8th. And on May 1st, we'll be talking about our recent election integrity collaboration with the UNDP. To stay abreast of what All Tech Is Human is doing, please be sure to check out the Linktree that was shared earlier and get involved. This is Sandra Khalil. Thank you again. Have a good day.