135. Aumann's Agreement Theorem & Arguing to Learn | THUNK
Summary
In this video, THUNK delves into Aumann's Agreement Theorem and how discussions between rational individuals should ideally lead to consensus. The theorem proposes that perfectly rational, evidence-updating individuals would ultimately converge in beliefs if they discuss openly. However, humans often fall short of this due to biases and egotistical attitudes towards one's own beliefs. The video also introduces a study where re-framing arguments as opportunities to learn, rather than to win, results in more respectful discussions. This approach brings conversations closer to the rational ideal posited by Aumann.
Highlights
Aumann's 1976 theorem models rational debate leading to consensus.
Humans struggle to meet this ideal due to cognitive biases.
Experiments show debating to learn encourages open-mindedness.
Framing arguments as opportunities to learn nurtures rational dialogue.
Discussions that embrace subjectivity over definitive answers end productively.
Key Takeaways
Aumann's Agreement Theorem suggests perfectly rational beings should eventually agree.
Human bias and self-regard often prevent reaching consensus in reality.
Re-framing arguments as chances to learn fosters healthier discussions.
Experimental evidence shows tone influences the outcome of debates.
Seeking to understand rather than to win can lead to a more rational exchange.
Overview
In Aumann's Agreement Theorem, ideal rational beings are posited to always reach consensus after sharing all evidence and updating beliefs accordingly. This idea, although mathematically sound, is rarely observed in human debates where bias clouds judgement.
Human cognitive bias and the propensity to cling to personal beliefs often obstruct the path to agreement, deviating from the idealized theorem. By assuming superiority in knowledge, parties in a debate fail to update their beliefs even when faced with contrary evidence.
Interestingly, studies have shown that adjusting the framing of discussions from a competitive to a learning-focused approach can mitigate these issues. Conversations centered around learning bring attitudes closer to Aumann's rational ideal, fostering mutual understanding and respect.
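The theorem's premise can be made concrete with a small, purely illustrative Python sketch (not from the video): two honest Bayesian reasoners who start from the same prior disagree only because they hold different private evidence, and once all of that evidence is shared, their posteriors coincide exactly. The agents, flip sequences, and Beta prior below are assumptions chosen for illustration.

```python
# Two agents share a uniform Beta(1, 1) prior on a coin's bias,
# but each privately observes a different run of flips.

def beta_posterior(prior_a, prior_b, flips):
    """Return Beta posterior parameters after observing flips (1 = heads)."""
    heads = sum(flips)
    tails = len(flips) - heads
    return prior_a + heads, prior_b + tails

def posterior_mean(a, b):
    """Mean of a Beta(a, b) distribution: the agent's best estimate of the bias."""
    return a / (a + b)

alice_flips = [1, 1, 0, 1]   # Alice happened to see mostly heads
bob_flips   = [0, 0, 1, 0]   # Bob happened to see mostly tails

# Before talking, their posteriors disagree:
alice = beta_posterior(1, 1, alice_flips)
bob = beta_posterior(1, 1, bob_flips)
print(round(posterior_mean(*alice), 3), round(posterior_mean(*bob), 3))  # 0.667 0.333

# After honestly sharing *all* their evidence, both condition on the
# same data and land on the identical posterior: consensus.
shared = beta_posterior(1, 1, alice_flips + bob_flips)
print(posterior_mean(*shared))  # 0.5
```

This only shows the easy part of Aumann's result (agreement after full evidence exchange); the surprising part of the theorem is that exchanging posteriors alone, without the underlying evidence, is already enough for perfectly rational agents.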
Chapters
00:00 - 00:30: Introduction to Disagreements The chapter 'Introduction to Disagreements' begins with a reflection on personal experience, mentioning a disagreement about the cost of a new kitchen island. It emphasizes the nature of disagreements, acknowledging that while they can start with differing ideas, respectful discussion can lead to improved understanding between parties. The chapter points out that disagreements can end positively, with parties either better understanding each other or the situation at hand. It acknowledges that this positive outcome does not always occur.
00:30 - 01:00: Rational Disagreements and Aumann's Theorem The chapter discusses the concept of 'Rational Disagreements' through the lens of Aumann's theorem. It explores how rational people should ideally interact when they disagree, as opposed to how actual human biases often interfere with objective decision-making. A key reference is to Robert Aumann's 1976 paper 'Agreeing to Disagree', which models arguments between perfectly rational agents.
01:00 - 01:30: How Rational Agents Reach Consensus The chapter explores perfectly Bayesian thinkers, those who update their beliefs based on evidence, and shows that they reach eventual consensus regardless of initial beliefs. It highlights that when rational agents communicate truthfully and update on each other's information, they will ultimately arrive at identical beliefs. The process of reaching consensus is discussed, though it is noted as not particularly surprising when considering rational updating of beliefs.
01:30 - 02:00: The Model of Disagreement and Switching Beliefs In a discussion between two androids, Android A recognizes Android B as a competent Bayesian who updates her beliefs rationally. When Android B expresses a different belief than Android A, despite her reliability, it suggests to Android A that he should reassess his confidence in his own beliefs. The disagreement itself becomes a basis for both parties to reduce certainty in their respective positions, signifying the essence of rational discourse among informed individuals. The chapter explores how conflict in beliefs between two rational entities can prompt reassessment and reduction of certainty in their original beliefs.
02:00 - 02:30: Relaxing the Conditions of Aumann's Theorem The chapter discusses the dynamics of disagreement resolution through the lens of Aumann's Theorem, which predicts that as two individuals share and incorporate each other's information, they should overshoot one another's positions repeatedly. For instance, if Person A leans left and Person B leans right on a political issue, then after incorporating A's reasons for his position, Person B should overshoot him to the left, so that she now finds him to her right. However, the text suggests that such a phenomenon is rarely observed in real-life disagreements.
02:30 - 03:00: Reasons Humans Fall Short of Aumann's Standards The chapter discusses the stringent requirements of Aumann's agreement theorem, which include perfect rationality, truthfulness, and the ability to update beliefs perfectly with new evidence from both parties. It addresses how these conditions are challenging to meet. Despite these requirements, game theorists have attempted to relax the theorem's conditions, and it appears to hold even in less than ideal situations, suggesting its robustness even among those who are not perfectly rational or highly intelligent.
03:00 - 04:00: The Problem with Our Starting Assumptions The chapter titled 'The Problem with Our Starting Assumptions' discusses the human tendency to not always reach consensus or have perfectly logical reasoning, even in debates. Referencing the 2004 paper 'Are Disagreements Honest?', it suggests that humans often fail to meet ideal standards of reasoning because not everyone's beliefs are self-consistent or aligned with evidence. It implies that while some people consider themselves reasonable, others might not recognize inconsistencies in their beliefs or evidence.
04:00 - 05:00: Arguing to Win vs. Arguing to Learn This chapter explores the difference between arguing to win and arguing to learn. It discusses how wishful thinking can often cloud judgment and how assuming one's superior intelligence or knowledge over others stunts the ability to update one's beliefs when faced with disagreement. The text criticizes the tendency to hold one's opinions as the absolute truth while dismissing dissenting voices, emphasizing the importance of acknowledging and re-evaluating our beliefs in light of differing perspectives to foster rational thinking.
05:00 - 05:30: Experiments on Argument Framing The chapter titled 'Experiments on Argument Framing' explores how arguments are often framed and perceived in discussions. It suggests that people tend to dismiss contradictory opinions as nonsense and may egotistically believe they have the right answer to everything. Aumann-style arguments don't occur the way they should if individuals were being genuinely reasonable. The paper's authors take this as evidence that many people are somewhat dishonest: claiming not to favor their own opinions merely because they hold them, yet doing exactly that in practice. The text implies that this is common behavior.
05:30 - 06:30: Arguments as Learning Opportunities The chapter discusses the common occurrence of people arguing without resolving their differences due to the assumption that their perspective is the only correct one. It acknowledges a cynical view that people are inherently egotistical and resistant to changing their minds. However, the chapter leans towards optimism about human potential to overcome biases and engage in rational thinking. It highlights a study by cognitive science researchers proposing a method to facilitate more constructive disagreements, preventing conversations from being unproductive.
06:30 - 07:30: Conclusion and Call to Action The chapter discusses a study where individuals with opposing political views on subjects like abortion, gun control, and euthanasia were paired up. The effects of two different argument framing strategies were examined: 'arguing to win' and 'arguing to learn'. In 'arguing to win', participants were instructed to outperform their counterparts, leading to heightened emotions, disregard for facts, and reliance on rhetoric.
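The Android A/B exchange described in the chapters above can be sketched numerically: merely hearing that a reliable reasoner disagrees is itself evidence, and a Bayesian should become less certain on hearing it. This is a minimal sketch; the likelihood numbers are illustrative assumptions, not figures from the video.

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: P(H | evidence) given P(evidence | H) and P(evidence | not H)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Android A starts out 80% confident in hypothesis H.
prior = 0.80

# A models B as a knowledgeable, reliable Bayesian: B would voice
# disagreement only 20% of the time if H were true, but 90% of the
# time if H were false.  (Assumed values, for illustration only.)
posterior = update(prior, likelihood_if_true=0.20, likelihood_if_false=0.90)

# The mere fact of B's disagreement cuts A's confidence below 50%.
print(round(posterior, 3))  # 0.471
```

Notice that no object-level evidence changed hands: the disagreement alone did the work, which is exactly the counterintuitive mechanism the video attributes to the theorem.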
00:00 - 00:30 I originally thought that getting a new island for the kitchen was going to be too expensive but I have to say you do have a good [Music] counterargument we've all been involved in disagreements before and we've all seen them go well or poorly sometimes people start off with different ideas discuss them respectfully and come away from the experience better than they started either with a better understanding of the other person or a better understanding of the situation or hopefully both sometimes not so much
00:30 - 01:00 which raises an interesting question how should it look when rational people disagree with each other if our brains weren't absurd engines of confirmation bias dedicating all their processing power to the task of not changing our minds instead of figuring out what's most likely to be true what would an argument look like we have something of an answer for this in 1976 mathematician Robert Aumann published a paper titled Agreeing to Disagree where he modeled an argument between two perfect Bayesian
01:00 - 01:30 thinkers that is thinkers who update the probability estimates of their beliefs perfectly according to evidence he proved that regardless of what they initially believe they must eventually agree with each other after a finite amount of time that doesn't sound too surprising in and of itself if you plunk two Androids with different beliefs down in front of each other and let them talk it out truthfully updating each other's information about the world you'd probably expect them both to walk away with identical beliefs afterwards but it's how they get to consensus that's
01:30 - 02:00 really weird Android A knows that Android B is a proper Bayesian and is updating her beliefs rationally according to new information the moment that B says that she believes something different than A does if she's a fairly knowledgeable and reliable person that conflict alone should cause A to update his probabilities for various beliefs the mere fact that there are two rational people who disagree about a particular point should make both of them less certain of their positions from the get go even more weirdly the
02:00 - 02:30 model predicts that as the two share information they should end up switching places multiple times overshooting each other's position repeatedly for example if A leans left and B leans right on some political issue the theorem says that after she's incorporated A's disagreement into her calculations and learned some of his reasons for thinking that way she should find him to her right how often have you seen two people disagree like that now never yeah me
02:30 - 03:00 either but hey the requirements for Aumann's agreement theorem are pretty strict both parties have to know that the other party is perfectly rational perfectly truthful updating their beliefs perfectly with new evidence that's a tall order right well game theorists have tried relaxing the requirements of the theorem in many ways and it seems to hold even in highly suboptimal scenarios the math shows that even people who aspire to be Bayesian people who aren't super smart and who
03:00 - 03:30 don't trust each other too much should eventually reach consensus with their debate partners no perfect Android brains required so what's the deal this 2004 paper Are Disagreements Honest? advances a theory as to why humans tend to fall short of the Aumann standard stop me if this starts to sound familiar look I'm a reasonable person I think that everyone should have beliefs that are self-consistent and agree with the evidence but a lot of people just aren't smart enough to see that their beliefs are mutually inconsistent or
03:30 - 04:00 they let wishful thinking override their better judgment that's why these other people disagree with me it turns out that if one of your starting assumptions is that you're just smarter or better informed than anyone with a different opinion there's no real impetus to update the probabilities of your beliefs upon learning that people disagree with you we generally acknowledge that that's a terrible attitude for a rational person to hold and we get understandably upset when we recognize other people acting that way treating their own opinions as gospel truth and dismissing
04:00 - 04:30 any contradictory opinion as nonsense only an egotistical jerk would believe that they have the right answer to absolutely everything right but those Aumann-style arguments just don't seem to happen the way that they should if people were being good Bayesians the paper authors take this as compelling evidence that practically everyone is being dishonest in some sense claiming not to privilege their own opinions just because they happen to hold them but doing exactly that you and me whenever we
04:30 - 05:00 argue and don't get anywhere it's because everyone in that discussion is tacitly assuming that only an idiot would believe anything different than they do a cynic would just leave it there people are egotistical jerks who will refuse to change their minds about anything but I've always been a bit of an optimist about the human capacity to overcome bias and approach rational cognition even if we never really get there and in this paper some cognitive science researchers put forward a seemingly effective method to foster disagreements that are a little less
05:00 - 05:30 broken they paired up people who held opposing views on many controversial topics in modern politics things like abortion gun control euthanasia that sort of stuff then examined the effects of framing their arguments in two different ways arguing to win and arguing to learn the general character of arguments to win should be familiar to anyone who's seen a flamewar on Facebook when informed that they were trying to outperform their conversational partner tempers flared facts gave way to rhetoric and nobody
05:30 - 06:00 was convinced of anything when polled afterward the participants indicated a stubborn certainty that there was only one right answer to the question their own however when informed that they were trying to learn as much as possible from their partner there was a stark difference in tone conversations tended to be more respectful and thoughtful and after the experiment ended both parties indicated an increased feeling of subjectivity that the right answer depended a lot on where you were coming from it might not have changed their minds but they converged on some sort of
06:00 - 06:30 consensus that it was less of a clear-cut issue than they originally thought does that pattern sound familiar the Aumann agreement theorem might not brook disputes about matters of fact but is perfectly fine with differences of opinion Androids might well disagree about their favorite flavor of ice cream or taste in music without being irrational by arguing to learn from their partner both parties in the experiment might not have updated their beliefs about the topic at hand specifically but by relegating it to a difference of opinion they actually reached a rational conclusion together
06:30 - 07:00 it's probably not a mistake that approaching arguments looking for new information brings people closer to the ideal of the agreement theorem for rational people looking to construct the most accurate beliefs they can that's what every argument should be in the first place maybe the next time you're debating someone you could ask yourself which thing you're doing and whether it would lead you to approach consensus in the Aumann fashion as weird as it might be for us humans do you think that it would be possible for humans to reach
07:00 - 07:30 agreement about all matters of fact please leave a comment below and let me know what you think thank you very much for watching don't forget to blah blah subscribe blah share and don't stop thinking