Curbing misinformation on social media ahead of the U.S. election
Estimated read time: 1:20
Summary
As the U.S. presidential election approaches, social media platforms are intensifying efforts to curb misinformation. Twitter, Facebook, and others have implemented new rules to prevent the spread of false or misleading information. This includes labeling tweets that claim premature election victories and adjusting algorithms to limit the visibility of flagged content. These changes, while timely, may still cause confusion and are part of broader attempts to address the challenges of misinformation without infringing upon free speech. Efforts include distinguishing between satire and factual content and addressing cybersecurity concerns around hacked materials.
Highlights
Social media platforms are intensifying efforts against misinformation ahead of elections.
Twitter will label tweets with premature election results to limit their spread.
Satirical content might need specific labels to prevent misconceptions.
Policies on misinformation are evolving, but there's still room for improvement.
Conspiracy theories, if not handled correctly, can spread even faster.
Key Takeaways
Social media platforms are implementing new rules to decrease misinformation ahead of the U.S. election.
Twitter limits sharing of flagged tweets, making it harder to spread misinformation.
There's a growing effort to distinguish satire from real news to protect users from being misled.
Misinformation experts recommend not addressing conspiracy theories until they reach a certain level of public awareness.
Laws governing social media need updates to better address modern challenges of misinformation and free speech.
Overview
As the U.S. presidential election looms, the spotlight is on social media platforms, which are stepping up their efforts to combat misinformation. They've rolled out changes that make sharing flagged tweets harder, especially those proclaiming unverified results. But these measures may not be foolproof, and they are sparking debate over whether they arrive in time to head off the chaos misinformation leaves in its wake.
Amidst all these efforts, platforms like Twitter and Facebook face accusations of doing too little, too late. Policies that aim to shield users from falsehoods sometimes inadvertently fuel conspiracy theories by blocking content, causing people to doubt the platforms' motivations. It's a tricky dance between curbing false information and not stepping into the realm of censorship.
Moreover, there's a push for clearer distinctions online. Should satire be flagged akin to fake news? Implementing discernible labels for comedic content can prevent its misinterpretation. Plus, tightened cybersecurity measures are in place to guard against hacked information dumps aimed at swaying public opinion. All these bring to light the ongoing tug-of-war between up-to-date policies and protecting free speech online.
Chapters
00:00 - 01:00: Social Media and the U.S. Election With the upcoming presidential election, social media platforms like Twitter and Facebook are intensifying their measures to prevent misinformation. They are altering their regulations in response to concerns over potentially harmful information regarding the election results, which may not be confirmed until days or even weeks post-election. Media outlets like Axios are closely monitoring these developments.
01:00 - 02:00: Efforts by Social Media Platforms The chapter, 'Efforts by Social Media Platforms,' revolves around a discussion hosted by Emery and Vlad, featuring a guest named Sarah. The focus is on Twitter's proactive steps to manage misinformation as the election night approaches. Twitter plans to restrict the dissemination of misinformation by imposing limitations on the sharing of tweets identified as false or misleading, particularly those concerning political candidates. This strategy is part of a broader effort to ensure the integrity of information shared during critical times like elections.
02:00 - 03:00: The Challenge of Misleading Information This chapter addresses the issue of misleading information, focusing on actions taken by Twitter to combat it, such as labeling tweets that prematurely announce results of events like races. The chapter highlights Twitter's decision to limit engagement with such misleading tweets by restricting actions like retweeting and liking, and by placing more labels on misinformation. These measures aim to control the spread of unverified or false information on their platform.
03:00 - 04:00: Conspiracy Theories and Social Media This chapter discusses how social media platforms are changing the way content is shared in users' feeds to reduce the spread of viral misinformation. By altering algorithms to prevent users from seeing trending content based on others' likes, social media aims to decrease the dissemination of conspiracy theories. The debate arises whether these measures are adequate or too late to prevent further confusion among users.
04:00 - 05:00: Policies and Enforcement Issues This chapter delves into the complexities of policies and their enforcement, especially in the context of misinformation. It highlights the inconsistency in identifying and flagging misinformation, using examples like differing public statements from high-profile individuals on topics such as mask-wearing. The chapter underscores the challenge for the average person in understanding what qualifies as misinformation amid conflicting information.
05:00 - 06:00: Impact of Blocking Misinformation The chapter discusses the impact of blocking misinformation on social media platforms. It begins with an example of a tweet by a president that was blocked for containing misinformation, highlighting the challenge in distinguishing misinformation from factual corrections. The narrative delves into the timing and effectiveness of such measures, questioning if they are too little, too late, especially when some are designed to address election-related environments. The discussion encapsulates the complexities and nuances involved in moderating content in real-time.
06:00 - 07:00: Social Media Progress Since 2016 The chapter titled 'Social Media Progress Since 2016' discusses actions taken by major platforms like Facebook and Google to prevent the misuse of their advertising systems around election times. It highlights proactive measures such as prohibiting ad purchases after polls close on election day. The discussion also critiques these platforms for allowing harmful conspiracy theories, like QAnon, to spread unchecked for years before implementing significant changes. Despite some advancements, the chapter suggests some efforts might be too late to counteract already entrenched misinformation.
07:00 - 08:00: The Streisand Effect The chapter titled 'The Streisand Effect' discusses the challenges of policy enforcement, emphasizing how current approaches seem reactive and unstructured. It describes a situation akin to 'playing whack-a-mole', where policies are rapidly crafted in response to emerging situations. This reactive policy-making leads to confusion about the consistent enforcement of these policies. The chapter highlights concerns about the unintended consequences of such an approach and reflects on how it might affect different stakeholders, like journalists.
08:00 - 09:00: Twitter's Actions on Hacked Materials The chapter focuses on Twitter's challenges in handling hacked materials and misinformation, particularly in the context of journalism and satirical content. It highlights the ongoing struggle of Twitter to develop effective policies and technologies for addressing misinformation. The chapter also raises concerns about the difficulties in distinguishing between genuine misinformation and satirical content, suggesting that it will take a long time to resolve these issues. There is an underlying worry about the broader implications of these unresolved challenges on public discourse.
09:00 - 10:00: Censorship vs. Free Speech The chapter "Censorship vs. Free Speech" discusses the frustration surrounding tech platforms' decisions, particularly how they handle content moderation. It delves into the perception of bias on social media platforms and how actions like blocking tweets can fuel conspiracy theories. The topic points to ongoing debates about whether these efforts by tech companies are adequate or belated in addressing misinformation and free speech concerns.
10:00 - 11:00: The Complexity of Labeling Content The chapter discusses the challenges and controversies surrounding the labeling of content on social media platforms. It highlights how attempts by these platforms to block certain content often inadvertently fuel conspiracy theories. The conversation reflects on the efforts made over the past four years, comparing the companies' current actions with their 2016 strategies and concluding that the system is better but still imperfect.
11:00 - 12:00: Future of Online Content Differentiation In this chapter, the focus is on the evolution and improvement of online content regulation especially in the wake of significant events like the 2016 elections. The chapter discusses how the initial efforts were heavily centered around foreign interference, leading to major public and governmental backlash, including congressional hearings and boycotts. Since then, online platforms have significantly enhanced their strategies, now also effectively addressing domestic interference and preventing the spread of conspiracies originating within the country. The enhancements reflect a broader understanding and differentiated approach towards various kinds of content threats.
Curbing misinformation on social media ahead of the U.S. election Transcription
00:00 - 00:30 with the highly anticipated presidential election just days away social media platforms are boosting their efforts to curb the spread of misinformation twitter facebook and other popular networks are now changing their rules out of concern about potentially dangerous or misleading information about the november results which may not be official until days or maybe even weeks after the presidential vote so axios media reporter sarah fisher is joining us now with a look at the various plans that they are putting
00:30 - 01:00 into action hey sarah we always love it when you're on this show i always i feel like i'm so much smarter after you uh you talk with us so explain to us what twitter is doing to brace for election night good to see you emery and vlad yes they're doing a lot of things first and foremost they're going to limit the spread of misinformation by making it harder to share tweets that have been flagged as having misinformation or as having things that are false so for example if a candidate
01:00 - 01:30 announces on twitter that they've won a race but it hasn't been officially called yet that they want it twitter is going to put a label on that tweet and they're going to make it a lot harder to share it so you can't necessarily retweet it or like it some other actions that they're taking they're going to be labeling more tweets tweets that previously had misinformation that they might not have put labels on they're going to start putting those labels on now they're also going to limit what shows
01:30 - 02:00 up in your feed so before they made these changes if something was going viral it might show up in your feed as x y and z liked it they're no longer going to put it in your feed so that way if something's going viral that has incorrect misinformation you're not going to be inclined to see it and then share it as well i guess the question sarah with regards to how other social media sites are handling it is is this too little too late and does it sow more confusion
02:00 - 02:30 when it's not really clear what the criteria is in other words there are some things that are floating out there that are clearly misinformation but that they slide through um and then other things get flagged and it's sort of not it doesn't make sense sort of like the president talking about masks and dr fauci talking about masks i mean it's too sort of totally different things that for the common person makes it really really hard to know what the bar is okay and i'm just gonna jump into vlad's question as kind of an example you know
02:30 - 03:00 one of the only tweets that where i ever had blocked on my own page was a tweet that the president had sent out that was blocked that i was actually clarifying the information identifying the misinformation but it just got thrown sort of in the proverbial trash uh like all the rest of the stuff these are great questions so let's take the first one is it too little too late some of these changes address a specific environment related to the election and voting and so i actually think that they were
03:00 - 03:30 timely enough to address them so for example facebook says you can no longer purchase ads after the polls close on election day google just made that announcement as well that's not too little too late we're not at election day yet they made this policy beforehand so i think that's pretty smart to your question though about is some of these other changes too little too late i mean yeah there's massive conspiracy theories like q anon that are now in the public consciousness because these platforms let them spread for years and years before taking action
03:30 - 04:00 you know this month to your point about policy enforcement and whether or not it's sort of ad hoc i think that they're right now playing whack-a-mole when they see an instance come up they scramble and they create a policy that tries to address it but in doing that they're inadvertently creating confusion around how these policies are ubiquitously informed and i think emery what you were saying or i'm sorry enforced what you were saying is a great example of that you could have journalists that are just trying to
04:00 - 04:30 clarify a piece of misinformation that gets blocked because twitter hasn't quite yet come up with the technology and the policies to address how journalists talk about these types of things another issue we're seeing is comedy what happens if you have a very obviously satirical account that's trying to make fun of a situation but twitter accidentally flags it as misinformation all these problems are going to take years and years to hammer out and i worry sometimes that people are going to
04:30 - 05:00 get so frustrated while tech platforms hammer these out that they're just going to say to hell with this these social platforms are biased and that's going to create some problems yeah that you know that's kind of like when you asked about whether it's sort of too little too late vlad i i thought about the fact that you know when you see that a tweet has been blocked this also feeds into these conspiracy theories because part of what they say is the me
05:00 - 05:30 the mainstream media doesn't want you to know this and so instantly like without sort of trying by blocking some of the stuff they then add more fuel to the conspiracy theorist fires if you will but so we've been like man thumbs down for what these social media platforms are doing but in general are they better than they were at least in 2016. i mean they've had four years to try and get a bet to try and do it right they're much better i mean they were not
05:30 - 06:00 on top of this at all in 2016 which is what's led to a flood of congressional hearings it's what's led to boycotts and investor pressure they're much better and one thing i think they've gotten particularly better at is in the math uh in the aftermath of the 2016 election they were so focused on foreign interference that i don't think they had their eye on domestic interference on you know conspiracies that were spreading from accounts and from movements within the united states that's something that they've gotten so much better at combating and taking
06:00 - 06:30 action on to address your point about inadvertently making something worse than it is this is something that's called the streisand effect where if you address something by blocking it you might inadvertently make it even worse and i've talked to misinformation experts the general sense here is you don't want to touch a conspiracy theory you don't want to address it you don't want to block it until it's hit 10 percent of the mainstream population once 10 percent of people know about it that's a good time to take action on it
06:30 - 07:00 but if you try to address it and that might even mean stop the spread of it before it hits that 10 percent threshold you might inadvertently make it worse by drawing more attention to it uh so let me ask you about this sarah twitter made headlines this week when it blocked the new york post article about vice president joe biden's son hunter what was the company's reasoning behind the move and do you think this went beyond the platform's agenda to curb misinformation yes great question so a lot of people thought twitter did that because it was
07:00 - 07:30 trying to moderate content actually this is not a content moderation issue for twitter this is a cyber security issue for twitter twitter has what's called a hacked materials policy facebook and google and other platforms have similar policies after the dnc emails leaked in 2016 platforms realize that there is going to be efforts by bad actors to illegally obtain information slightly tweak it and dump it right before the election to sway the election
07:30 - 08:00 results as a result when twitter or facebook or google see something a story like this that could potentially smell like a hacking leak immediately they put up their misinformation systems and they just shut it down now twitter last night conceded hey i think we went a little too far instead of blocking it maybe we should have just done what facebook did and slow the spread of it while we evaluate whether or not this really is a hacking leak vlad but unfortunately conservatives people on the right
08:00 - 08:30 were very quick to say twitter is censoring our views they didn't want to believe that this story could potentially be a cyber security problem a hacking well speaking of censorship at what point does curbing misinformation become a violation of free speech oh that's the biggest question out there i mean look we have dealt with some of these issues in the past with traditional mediums so with broadcast and radio and newspapers you know there's a reason why when you turn on your mainstream cable
08:30 - 09:00 or your mainstream broadcast there's certain levels of nudity that you're not seeing there's certain uh requirements to the type of commercials that can be exposed we have laws that govern this we actually have a body the fcc that governs those mediums we don't exactly have that for social media platforms uh we have a very old law that governs them it kind of shields them from taking liability for misinformation on their platforms right now you have in congress a lot of people that are talking about whether or not we need to re-evaluate
09:00 - 09:30 the laws that govern these platforms so that we can have a clear distinction between censorship and empowering first amendment we're not there yet but i'm optimistic we've made such great strides between 2016 and 2020 it's a lot more awareness around the problem i think eventually maybe not in the next you know few years but eventually we're going to get to a point where we have laws that can better uh make these distinctions for us online you know it's just so difficult though i think sarah and anne-marie because
09:30 - 10:00 for example emery you talk about you often talk about people sharing weird posts on your facebook feed right and you sometimes try to correct uh those posts um the president of the united states sarah and emery you may not have seen this because it's just happened the last couple of hours but the president of the united states retweeted a tweet with a headline from a satirical newspaper the babylon bee it's not a real newspaper but the article and the headline the president retweeted it and added more information around it as if it was real and so i don't know i
10:00 - 10:30 mean if you don't know we work in the media world so you will probably know that the babylon bee is not a real newspaper it's a satirical newspaper but if you don't know that and you see it coming from the president of the united states and you're following his twitter feed and you've got to deal with the kids and your jobs then you might actually think that this is real this is i guess and i don't know what the role of social media is in a situation like that comedy is one of the toughest things to handle in terms of the first amendment
10:30 - 11:00 and by the way always has been this is why when you talk about first amendment rights you're always gonna see people like the founders and creators of south park um you know up talking about it because comedy is really hard to determine whether or not you put certain restrictions on it for the sake of public safety and public knowledge here's what i imagine is one day going to happen vlad one day you might have to register as being a satirical site and you'll have a specific label on your tweet or on your account so that if someone like the president
11:00 - 11:30 were to retweet it with additional context it would become more obvious to the user that this is satire the problem right now with a lot of social media sites is that all the branding blends together there's no way to easily distinguish if something is comedy if something is news everybody's adhering to the same formats and the same fonts the news lobbyists are going crazy about this they're trying really hard to make it so that you can better distinguish brands online but until we get to that point brad as you said uh
11:30 - 12:00 vlad as you said this is going to be really really hard for the average everyday american to filter through and it's not just comedy it's also opinion opinion journalism uh analysis and blogs versus true breaking news stories we're still right now trying to figure out how do we label all the stuff so that it's clearer on the internet uh sarah fisher ann marie we miss sarah being in the studio with us every tuesday like we had pre rona uh the thing about sarah
12:00 - 12:30 is you can take the conversation like a little to the left and a little to the right and she'll unpack everything like the knowledge because she has so much knowledge that you can go anywhere with her yeah you don't mean you don't mean politically left and right you mean like our wine no no the roads of our minds exit pretty much i guess i just know people will be adding us all day long on twitter if you say that so uh yes we do have long and winding conversations uh sarah fisher always great to see you thank you very much