Do experiments exist that show that reasoning is [or isn't] highly biased by emotions?
I'm looking for experiments showing that people perceive a situation differently depending on how they feel about the circumstances - for example, estimating your chances of winning a chess game as lower than they actually are once you've lost your queen.
Irrational Decisions Driven By Emotions
Irrational behaviour arises as a consequence of emotional reactions evoked when faced with difficult decisions, according to new research at UCL (University College London), funded by the Wellcome Trust. The UCL study suggests that rational behaviour may stem from an ability to override automatic emotional responses, rather than an absence of emotion per se.
It has long been assumed in classical theories of economics that people act entirely rationally when taking decisions. However, it has increasingly become recognized that humans often act irrationally, as a consequence of biasing influences. For example, people are strongly and consistently affected by the way in which a question is presented. An operation that has 40 per cent probability of success seems more appealing than one that has a 60 per cent chance of failure.
In the study, published in the journal Science, UCL researchers used a gambling experiment to establish the cognitive basis for rational decision-making. The goal of the task was to accumulate as much money as possible, with the incentive of being paid in real money in proportion to the money won during the experiment. Participants were given a starting amount of money (£50) at the beginning of each trial. They were then asked to choose between either a sure option or a gamble option (where they would have a certain chance of winning the entire amount, but also of losing it all). Subjects were presented with these choices under two different frames (i.e. scenarios), in which the sure option was worded either as the amount to be kept from the starting amount ("keep £20"), or the amount to be deducted ("lose £30"). The two options, although worded differently, would result in exactly the same outcome, i.e. that the participant would be left with £20.
The UCL study found that participants were more likely to gamble at the threat of losing £30 than the offer of keeping £20. On average, when presented with the "keep" option, participants chose to gamble 43 per cent of the time compared with 62 per cent for the "lose" option. Furthermore, there was a marked difference in behaviour between participants. Some people adopted a more rational approach and gambled more equally and consistently under both frames, while others showed a real aversion to risk in the "keep" frame while at the same time displaying high risk-seeking behaviour in the "lose" frame.
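The frame equivalence at the heart of the design is simple arithmetic. The sketch below checks it and derives the gamble probability at which the gamble's expected value matches the sure option; the study's actual gamble odds are not given in this excerpt, so `p_win` here is an illustrative assumption.

```python
# Starting stake and the two sure-option wordings from the UCL study.
start, keep, lose = 50, 20, 30

sure_keep = keep            # outcome of "keep £20"
sure_lose = start - lose    # outcome of "lose £30" from the £50 stake
assert sure_keep == sure_lose  # identical outcomes, different wording

# Gamble: win the whole stake with probability p_win, else lose it all.
# Its expected value equals the sure option when p_win = 20/50 = 0.4
# (an assumption for illustration, not the study's reported odds).
p_win = sure_keep / start
expected_gamble = p_win * start  # 0.4 * 50 = 20, same as the sure option
```

At matched expected value, a purely "rational" chooser should gamble at the same rate under both wordings, which is exactly what the 43% vs. 62% split violates.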
Brain imaging revealed that the amygdala, a region thought to control our emotions and mediate the 'fight or flight' reaction, underpinned this bias in the decision process. Moreover, the UCL study revealed that people with more rational behaviour had greater brain activity in the prefrontal cortex, a region known to be involved in higher-order executive processes, suggesting that their brains are better able to incorporate their emotions into a more balanced reasoning process.
Mr Benedetto de Martino, of the UCL Institute of Neurology, says: "It is well known that human choices are affected by the way in which a question is phrased. For example, saying an operation carries an 80 per cent survival rate may trigger a different response compared to saying that an operation has a 20 per cent chance of dying from it, even though they offer exactly the same degree of risk.
"Our study provides neurobiological evidence that an amygdala-based emotional system underpins this biasing of human decisions. Moreover, we found that people are rational, or irrational, to widely differing amounts. Interestingly, the amygdala was active across all participants, regardless of whether they behaved rationally or irrationally, suggesting that everyone experiences an emotional reaction when faced with such choices. However, we found that more rational individuals had greater activation in their orbitofrontal cortex (a region of prefrontal cortex) suggesting that rational individuals are able to better manage or perhaps override their emotional responses."
Materials provided by University College London.
2 STUDY 1
Participants were randomly assigned to one of two conditions: in the promoting emotion condition, they were shown a message highlighting the positive consequences of making decisions based on feelings; in the promoting reasoning condition, they were shown a message highlighting the positive consequences of making decisions based on reasoning. These messages were taken from previously published work (Capraro et al., 2019; Caviola & Capraro, 2020; Levine et al., 2018). See Table 1 for the exact messages.
| Condition | Message |
| --- | --- |
| Promoting emotion | Sometimes people make decisions by using feelings and relying on their emotions. Other times, people make decisions by using logic and relying on their reasoning. Many people believe that emotions lead to good decision-making. When we use feelings, rather than logic, we make emotionally satisfying decisions. Please answer the following questions by relying on emotions, rather than reasoning. |
| Promoting reason | Sometimes people make decisions by using logic and relying on their reasoning. Other times, people make decisions by using feelings and relying on their emotions. Many people believe that reason leads to good decision-making. When we use logic, rather than feelings, we make rationally satisfying decisions. Please answer the following questions by relying on reasoning, rather than emotions. |
2.1.2 Dependent variables
After reading the message, all participants took the following scale.
- Wear a face covering any time I leave home.
- Wear a face covering any time I am engaged in essential activities and/or work, and there is no substitute for physical distancing and staying at home.
- Wear a face covering any time I'm around people outside my household.
All answers were collected using a 10-line “snap to grid” slider with three labels: “strongly disagree” at the extreme left, “neither agree nor disagree” at the center, “strongly agree” at the extreme right.
After the scale, participants were asked the following set of demographic questions: sex, age, race, political views, religiosity, whether they live in an urban area, whether wearing a face covering is mandatory in their county, whether they live in an area where shelter-in-place rules apply, whether they previously tested positive, whether they believe they will contract coronavirus and, if so, whether they believe they will recover from it relatively easily. At the end, there was a control question to prevent the potential intrusion of bots.
The design, the analysis and the sample size were pre-registered at: https://osf.io/hfjpw/?view_only=cc5aa039b96d4075a3c834c408091992. For this and for the following studies, we report all measures and conditions.
The experiment was conducted on May 28, 2020. The raw data of this and the following studies may be found at: https://osf.io/hfjpw/?view_only=cc5aa039b96d4075a3c834c408091992. The analyses can easily be replicated by following the description below.
2.2.1 Demographic characteristics of the sample
As pre-registered, we eliminated from the analysis subjects who did not pass the attention check and, for each repeated IP address or Turk ID, we kept only the first observation and discarded the rest. This meant deleting about 1% of the observations; our main results remain qualitatively similar when these observations are included. This left us with 399 subjects. A post hoc sensitivity analysis shows that this sample size is sufficient to detect an effect size of d = 0.28, with power of 0.80 and α = 0.05, two-tailed. In Table 2, we report the demographic characteristics of the sample for this and the following studies. We note that the sample is quite heterogeneous, although not representative: males and females are equally represented; the age group 25–54 is overrepresented, whereas the age groups 18–24 and 65+ are underrepresented; Whites are overrepresented, while Blacks or African Americans are underrepresented (Census, 2020).
| Demographic | Study 1 (N = 399) | Study 2 (N = 591) | Study 3 (N = 930) | All studies (N = 1920) |
| --- | --- | --- | --- | --- |
| Prefer not to say | 0.75 | 0.17 | 0.32 | 0.37 |
| Race: American Indian or Alaska native | 1.00 | 0.51 | 0.97 | 0.83 |
| Race: Black or African American | 6.77 | 7.11 | 9.06 | 7.99 |
| Race: Native Hawaiian or other Pacific Islander | 0 | 0 | 0 | 0 |
- Note: Political view goes from 1 = “very left-leaning” to 7 = “very right-leaning,” with 4 = “center.” In the table we classified as “center” only those subjects who answered “center.”
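The post hoc sensitivity analysis reported above can be reproduced with a standard normal-approximation formula for a two-sample comparison. A minimal sketch, assuming the 399 subjects split roughly evenly between the two conditions (the exact split is not reported in this excerpt):

```python
from math import sqrt

from scipy.stats import norm

def min_detectable_d(n1, n2, alpha=0.05, power=0.80):
    """Smallest standardized effect size d detectable with the given
    group sizes, two-tailed alpha, and power (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the two-tailed test
    z_power = norm.ppf(power)          # quantile corresponding to the target power
    return (z_alpha + z_power) * sqrt(1 / n1 + 1 / n2)

# 399 subjects split across two conditions (assumed here as 199 vs. 200)
print(round(min_detectable_d(199, 200), 2))  # → 0.28, matching the reported d
```

The normal approximation slightly understates the minimum detectable effect relative to an exact t-based calculation, but at these group sizes the difference is negligible.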
2.2.2 The effect of promoting emotion versus reasoning on intentions to wear a face covering
We first build the composite variable "intentions to wear a face covering" by taking the average of its three items (α = 0.932 in the emotion condition, α = 0.924 in the reason condition). The average intention to wear a face covering when promoting reasoning is M = 7.38 (SD = 3.00); the average intention when promoting emotion is M = 6.61 (SD = 3.24). A Wilcoxon rank-sum test shows that the distribution of intentions to wear a face covering when reasoning is promoted is statistically different from the corresponding distribution when emotion is promoted (z = 2.366, p = .018).
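The pipeline described above (average the three items into a composite, check internal consistency with Cronbach's alpha, then compare conditions with a Wilcoxon rank-sum test) can be sketched as follows. The data here are simulated, since the raw data live in the OSF repository; the sample sizes and item correlations are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ranksums

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) matrix of ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Simulated 0-10 slider responses: three correlated items per participant.
rng = np.random.default_rng(1)
reason = np.clip(rng.uniform(0, 10, (200, 1)) + rng.normal(0, 1, (200, 3)), 0, 10)
emotion = np.clip(rng.uniform(0, 9, (199, 1)) + rng.normal(0, 1, (199, 3)), 0, 10)

# Composite variable: the average of the three items per participant.
composite_reason = reason.mean(axis=1)
composite_emotion = emotion.mean(axis=1)

# Two-sided Wilcoxon rank-sum test comparing the two conditions.
z, p = ranksums(composite_reason, composite_emotion)
```

The rank-sum test is a sensible default here because slider composites are bounded and often skewed, so a t-test's normality assumption is doubtful.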
Cross-disciplinary cooperation is needed to save civilization
What, then, can be done? Such technological challenges go beyond the reach of a single discipline. CRISPR, for example, may be an invention within genetics, but its impact is vast, calling for oversight and ethical safeguards that are far from our current reality. The same goes for global warming, rampant environmental destruction, and the growing levels of air pollution and greenhouse gas emissions that are fast returning as we crawl into a post-pandemic era. Instead of learning the lessons of our 18 months of seclusion — that we are vulnerable to nature's powers, that we are co-dependent and globally linked in irreversible ways, that our individual choices affect many more than ourselves — we seem bent on decompressing our accumulated urges with impunity.
The experience from our experiment with the Institute for Cross-Disciplinary Engagement has taught us a few lessons that we hope can be extrapolated to the rest of society: (1) that there is huge public interest in this kind of cross-disciplinary conversation between the sciences and the humanities; (2) that there is growing consensus in academia that this conversation is needed and urgent, as similar institutes emerge in other schools; (3) that in order for an open cross-disciplinary exchange to be successful, a common language needs to be established, with people talking to each other and not past each other; (4) that university and high school curricula should strive to create more courses where this sort of cross-disciplinary exchange is the norm and not the exception; and (5) that this conversation needs to be taken to all sectors of society and not kept within isolated silos of intellectualism.
Moving beyond the two-culture divide is not simply an interesting intellectual exercise; it is, as humanity wrestles with its own indecisions and uncertainties, an essential step in securing our project of civilization.
While the Stanford Prison Experiment was originally slated to last 14 days, it had to be stopped after just six due to what was happening to the student participants. The guards became abusive, and the prisoners began to show signs of extreme stress and anxiety.
- Although the prisoners and guards were allowed to interact in any way they wanted, the interactions quickly became hostile and even dehumanizing.
- The guards began to behave in ways that were aggressive and abusive toward the prisoners while the prisoners became passive and depressed.
- Five of the prisoners began to experience severe negative emotions, including crying and acute anxiety, and had to be released from the study early.
Even the researchers themselves began to lose sight of the reality of the situation. Zimbardo, who acted as the prison warden, overlooked the abusive behavior of the jail guards until graduate student Christina Maslach voiced objections to the conditions in the simulated prison and the morality of continuing the experiment.
Zimbardo's Stanford Prison Experiment
The Stanford Prison Experiment was a landmark psychological study of the human response to captivity, in particular, to the real world circumstances of prison life. It was conducted in 1971 by Philip Zimbardo of Stanford University.
Subjects were randomly assigned to play the role of "prisoner" or "guard". Those assigned to play the role of guard were given sticks and sunglasses; those assigned to play the prisoner role were arrested by the Palo Alto police department, deloused, forced to wear chains and prison garments, and transported to the basement of the Stanford psychology department, which had been converted into a makeshift jail.
Several of the guards became progressively more sadistic - particularly at night when they thought the cameras were off, despite being picked by chance out of the same pool as the prisoners.
The experiment very quickly got out of hand. A riot broke out on day two. One prisoner developed a psychosomatic rash all over his body upon finding out that his "parole" had been turned down. After only 6 days (of a planned two weeks), the experiment was shut down, for fear that one of the prisoners would be seriously hurt.
Although the intent of the experiment was to examine captivity, its result has been used to demonstrate the impressionability and obedience of people when provided with a legitimizing ideology and social and institutional support. It is also used to illustrate cognitive dissonance theory and the power of seniority/authority.
It can be argued that the conclusions that Professor Zimbardo and others have drawn from the Stanford Prison Experiment are not valid. Professor Zimbardo acknowledges that he was not merely an observer in the experiment but an active participant, and in some cases it is clear that he influenced the direction the experiment took.
For example, Professor Zimbardo cites the fact that all of the "guards" wore sunglasses as an example of their dehumanization. However, the sunglasses were not spontaneously chosen as apparel by the students; they were given to them by Professor Zimbardo. The student "guards" were also issued batons by Professor Zimbardo on their first day, which may have predisposed them to consider physical force an acceptable means of running the "prison".
Professor Zimbardo also acknowledges initiating several procedures that do not occur in actual prisons, such as blindfolding incoming "prisoners", making them wear women's clothing, not allowing them to wear underwear, not allowing them to look out windows, and not allowing them to use their names. Professor Zimbardo justifies this by stating that prison is a confusing and dehumanizing experience and that it was necessary to enact these procedures to put the "prisoners" in the proper frame of mind. However, this raises the question of whether Professor Zimbardo's simulation was an accurate reflection of the reality of incarceration or a reflection of his preconceived opinions of what actual incarceration is like.
Does Zimbardo's study explain Abu Ghraib abuse?
The human rights abuses that occurred at the Abu Ghraib prison under the authority of the American armed forces in the aftermath of the 2003 Iraq war may be a recent example of what happened in the experiment in real life. Soldiers were thrust into the role of prison guards and began to sadistically torment prisoners there and at other detention sites in Afghanistan and Iraq. Many of the specific acts of humiliation were similar to those that occurred in the Stanford Prison Experiment, according to Zimbardo.
This theory has been challenged by allegations by Seymour Hersh in the New Yorker that these soldiers were in fact acting under direct orders of their superiors as part of a top secret Pentagon intelligence gathering program authorized by Secretary of Defense Donald Rumsfeld.
The Research: The Still Face Experiment
The Still Face Experiment illustrates the power of emotion coaching and the importance of turning toward your child’s bids for connection.
Dr. Edward Tronick of UMass Boston’s Infant-Parent Mental Health Program conducts research on how mothers’ depression and other stressful behaviors affect the emotional development and health of infants and children.
Jason Goldman wrote about Tronick's 1975 experiment on his blog The Thoughtful Animal, covering the impact it had on the understanding of child development and how it is being used today, including to predict child behavior:
In 1975, Edward Tronick and colleagues first presented the “Still Face Experiment” to colleagues at the biennial meeting of the Society for Research in Child Development. He described a phenomenon in which an infant, after three minutes of “interaction” with a non-responsive expressionless mother, “rapidly sobers and grows wary. He makes repeated attempts to get the interaction into its usual reciprocal pattern. When these attempts fail, the infant withdraws [and] orients his face and body away from his mother with a withdrawn, hopeless facial expression.” It remains one of the most replicated findings in developmental psychology.
Once the phenomenon had been thoroughly tested and replicated, it became a standard method for testing hypotheses about person perception, communication differences as a result of gender or cultural differences, individual differences in attachment style, and the effects of maternal depression on infants. The still-face experiment has also been used to investigate cross-cultural differences, deaf infants, infants with Down syndrome, cocaine-exposed infants, autistic children, and children of parents with various psychopathologies, especially depression.
The video below portrays the natural human process of attachment between a baby and mother and then the effects of non-responsiveness on the part of the mother:
As Rick Ackley suggests in this article from his blog The Genius in Children, “While the video shows the importance of mother-child attachment, it also reveals something else of vital importance to parents and all other educators. Watch it again. Is the baby experiencing a loss of attachment or a loss of agency?”
Agency refers to the subjective awareness that one is initiating, executing, and controlling one’s own actions in the world. When we “still face” our children by ignoring their expressions of emotion, for example, they may experience a loss of agency. Show your child respect and understanding in moments when they feel misunderstood, upset, or frustrated. Validate their emotions and guide them with trust and affection. Your child’s mastery of understanding and regulating their emotions will help them to succeed in life. Dr. Gottman calls this being an “Emotion Coach.” The five essential steps of Emotion Coaching are as follows:
- Be aware of your child’s emotion
- Recognize your child’s expression of emotion as an opportunity for intimacy and teaching
- Listen with empathy and validate your child’s feelings
- Help your child learn to label their emotions with words
- Set limits when you are helping your child to solve problems or deal with upsetting situations appropriately
Much like turning towards within a partnership, you can recognize the bids of children and respond to them to create and secure emotional connections. This means being interested in what they are saying or doing and listening to understand. Validate their feelings and emotions. Ask questions. Be the support they need.
The Gottman Institute’s Editorial Team is composed of staff members who contribute to the Institute’s overall message. It is our mission to reach out to individuals, couples, and families in order to help create and maintain greater love and health in relationships.
Because facial expressions of emotion are part of our evolutionary history and a biologically innate ability, we all have the ability to read them. It is an ability that improves with everyday practice. This is especially true for macroexpressions. But most people are not very good at recognizing micro or subtle expressions. The average accuracy rate for people prior to training in Matsumoto & Hwang's (in press) study was 48%; if joy and surprise – the two easiest expressions to see – are excluded, that accuracy rate drops to 35%. And there are many individual differences. Fortunately, as mentioned above, tools have been developed to help people improve their skills regardless of their level of natural ability. Thus, if one is in a profession where the ability to read facial expressions of emotion – especially micro and subtle expressions – may help one be more efficient or accurate, there are resources available to do so.
But the improved ability to read facial expressions, or any nonverbal behavior, is just the first step. What one does with the information is an important second step in the process of interaction. Being overly sensitive to nonverbal behaviors such as microexpressions and other forms of nonverbal leakage can be detrimental to interpersonal outcomes as well, as discussed in the literature on eavesdropping (Blanck, Rosenthal, Snodgrass, DePaulo, & Zuckerman, 1981; Elfenbein & Ambady, 2002b; Rosenthal & DePaulo, 1979). Individuals who call out others' emotions indiscriminately can be considered intrusive, rude, or overbearing. Dealing effectively with emotion information about others is also likely to be a crucial part of the skill set one must have to interact effectively with others. Knowing when and how to intervene, how to adapt one's behaviors and communication styles, and when to engage the support and help of others are all skills that must be brought into play once emotions are read.
Multi-layer affective computing model based on emotional psychology
The factors and transformations of affective state were analyzed on the basis of theories from affective psychology. A multi-layer affective decision model was then proposed by establishing mapping relations among character, mood, and emotion. The model reflects how the mood and emotion spaces change for different characters. Experiments showed that the modeled emotion dynamics accord with psychological theory and law, providing a reference for the modeling of human–computer interaction systems.
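The abstract gives no equations, but the layered mapping it describes (a fixed character trait shaping a slow-moving mood, which in turn shapes momentary emotion) can be illustrated with a toy sketch. Every name, parameter, and update rule below is an assumption for illustration, not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class AffectState:
    character: float     # fixed trait baseline, e.g. in [-1, 1]
    mood: float          # slow-moving intermediate layer
    decay: float = 0.9   # how strongly mood clings to its previous value

    def step(self, stimulus: float) -> float:
        """Update mood from a stimulus and return the momentary emotion."""
        # Mood drifts toward the character baseline, nudged by the stimulus.
        self.mood = self.decay * self.mood + (1 - self.decay) * (self.character + stimulus)
        # Emotion is the fast layer: current mood plus the raw stimulus.
        return self.mood + stimulus

agent = AffectState(character=0.2, mood=0.2)
emotions = [agent.step(s) for s in (1.0, 0.0, -0.5)]  # spikes, then settles back
```

The point of the layering is that the same stimulus produces different emotions depending on character and accumulated mood, which is the qualitative behavior the abstract claims.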
Would You Pull the Trolley Switch? Does it Matter?
Is Human Morality a Product of Evolution?
Why Can't We All Just Get Along? The Uncertain Biological Basis of Morality
Davis: You describe moral decision-making as a process that combines two types of thinking: “manual” thinking that is slow, consciously controlled, and rule-based, and “automatic” mental processes that are fast, emotional, and effortless. How widespread is this “dual-process” theory of the human mind?
Greene: I haven’t taken a poll but it’s certainly—not just for morality but for decision-making in general—very hard to find a paper that doesn’t support, criticize, or otherwise engage with the dual-process perspective. Thanks primarily to Daniel Kahneman [the author of Thinking, Fast and Slow] and Amos Tversky, and everything that follows them, it’s the dominant perspective in judgment and decision making. But it does have its critics. There are some people, coming from neuroscience especially, who think that it’s oversimplified. They are starting with the brain and are very much aware of its complexity, aware that these processes are dynamic and interacting, aware that there aren’t just two circuits there, and as a result they say that the dual-process framework is wrong. But to me, it's just different levels of description, different levels of specificity. I haven't encountered any evidence that has caused me to rethink the basic idea that automatic and controlled processing make distinct contributions to judgment and decision making.
Davis: These neural mechanisms you describe are involved in making any kind of decision, right? The brain weighs an emotional response against a more calculated cost-benefit analysis, whether you're deciding whether to push a guy off a bridge to save people from a runaway train or trying not to impulse-buy a pair of shoes.
Greene: Right, it’s not specific to morality at all.
Davis: Does this have implications for how much we think about morality as special or unique?
Greene: Oh, absolutely. I think that's the clearest lesson of the last 10 to 15 years exploring morality from a neuroscientific perspective: There is, as far as we can tell, no distinctive moral faculty. Instead what we see are different parts of the brain doing all the same kinds of things that they do in other contexts. There's no special moral circuitry, or moral part of the brain, or distinctive type of moral thinking. What makes moral thinking moral thinking is the function that it plays in society, not the mechanical processes that are taking place in the brain when people are doing it. I, among others, think that function is cooperation, allowing otherwise selfish individuals to reap the benefits of living and working together.
Davis: The idea that morality has no special place in the brain seems counterintuitive, especially when you think about the sacredness surrounding morality in religious contexts, and its association with the divine. Have you ever had pushback—people saying, this general-purpose mechanical explanation doesn’t feel right?
Greene: Yes, people often assume that morality has to be a special thing in the brain. And early on, there was—and to some extent there still is—a lot of research that compares thinking about a moral thing to thinking about a similar non-moral thing, and the researchers say, aha, here are the neural correlates of morality. But in retrospect it seems clear that when you compare a moral question to a non-moral question, if you see any differences there, it's not because moral things engage a distinctive kind of cognition; instead, it's something more basic about the content of what is being considered.
Davis: Professional ethicists often argue about whether we are more morally responsible for the harm caused by something we actively did than something we passively let happen—like in the medical setting where doctors are legally allowed to let someone die but not to actively end the life of a terminally ill patient, even if that’s their wish. You’ve argued that this “action-omission distinction” may draw a lot of its force from incidental features of our mental machinery. Have ideas like this trickled into the real world?
Greene: People have been making similar points for some time. Peter Singer, for example, says that we should be focused more on outcomes and less on what he views as incidental features of the action itself. He’s argued for a focus on quality of life over sanctity of life. Implicit in the sanctity-of-life idea is that it’s ok to allow someone to die, but it’s not ok to actively take someone’s life, even if it’s what they want, even if they have no quality of life. So certainly, the idea of being less mystical about these things and thinking more pragmatically about consequences, and letting people choose their own way—that, I think, has had a very big influence on bioethics. And I think I’m lending some additional support to those ideas.
Davis: Philosophers have long prided themselves on using reason—often worshipped as a glorious, infallible thing—not emotion, to solve moral problems. But at one point in your book, Moral Tribes, you effectively debunk the work of one of the most iconic proponents of reason, Immanuel Kant. You say that many of Kant’s arguments are just esoteric rationalizations of the emotions and intuitions he inherited from his culture. You’ve said that his most famous arguments are not fundamentally different from his other lesser-known arguments, whose conclusions we rarely take seriously today—like his argument that masturbation is morally wrong because it involves “using oneself as a means.” How have people reacted to that interpretation?
Greene: As you might guess, there are philosophers who really don’t like it. I like to think that I’ve changed some people's minds. What seems to happen more often is that people who are just starting out and confronting this whole debate and set of ideas for the first time, but who don’t already have a stake in one side or the other and who understand the science, read that and say, oh, right, that makes sense.
Davis: How can we know when we’re engaged in genuine moral reasoning and not mere rationalization of our emotions?
Greene: I think one way to tell is, do you find yourself taking seriously conclusions that on a gut level you don’t like? Are you putting up any kind of fight with your gut reactions? I think that’s the clearest indication that you are actually thinking it through as opposed to just justifying your gut reactions.
Davis: In the context of everything you’ve studied, from philosophy to psychology, what do you think wisdom means?
Greene: I would say that a wise person is someone who can operate his or her own mind in the same way that a skilled photographer can operate a camera. You need to not only be good with the automatic settings, and to be good with the manual mode, but also to have a good sense of when to use one and when to use the other. And which automatic settings to rely on, specifically, in which kinds of circumstances.
Over the course of your life you build up intuitions about how to act, but then circumstances may change. What worked at one point may not work at another. And so you can build up higher-order intuitions about when to let go and try something new. There really is no perfect algorithm, but I would say that a wise mind is one that has the right levels of rigidity and flexibility at multiple levels of abstraction.
Davis: What do you think about the potential for specific introspective techniques—I’m thinking about meditation or mindfulness techniques from the Buddhist tradition—to act as a means of improving our own moral self-awareness?
Greene: That’s an interesting connection—you’re exploring your own mental machinery in meditation. You’re learning to handle your own mind in the same way that an experienced photographer learns to handle her camera. And so you’re building these higher-order skills, where you’re not only thinking, but you’re thinking about how to think, and monitoring your own lower-level thinking from a higher level—you have this integrated hierarchical thinking.
And from what I hear from the people who study it, certain kinds of meditation really do encourage compassion and willingness to help others. It sounds very plausible to me. Tania Singer, for example, has been doing some work on this recently that has been interesting and very compelling. This isn’t something I can speak on as an expert, but based on what I’ve heard from scientists I respect, it sounds plausible to me that meditation of the right kind can change you in a way that most people would consider a moral improvement.