Rule of Commitment and Social Norms

The Principle of Commitment and Behavioral Consistency

When the last of the New Year’s confetti settles, the New Year’s resolutions begin. People decide they are going to be healthier, richer, wiser by starting some new, positive habit — a phenomenon known as the “fresh start effect.”

This “honeymoon” phase rarely lasts, though. The gym traffic eventually slows, productivity and budgeting efforts dwindle, and most of us return to our bad habits, despite our best efforts. But what if I told you that it is possible to influence behavior — to nudge people in the direction of sticking to their goals?

In his book Influence: Science and Practice, Robert Cialdini identifies six principles of influence:

  • Reciprocity
  • Commitment and consistency
  • Social proof
  • Liking
  • Authority
  • Scarcity

In this article we discuss one of these principles — that of commitment and consistency.

Definition: Behavioral consistency refers to people’s tendency to behave in a manner that matches their past decisions or behaviors.

Behavioral consistency is a judgment heuristic to which we default in order to ease decision making: it is easier to make one decision and stay consistent with it than it is to make a new decision every single time we are presented with a problem. From an evolutionary standpoint, behavioral consistency also serves us well: in a social environment, unpredictable people are less likely to be liked and to thrive among others.

In his research, Cialdini found that not only will people go out of their way to behave consistently, they will also feel positively about being consistent with their decisions, even when faced with evidence that those decisions were erroneous.

Behavioral consistency acts at both the individual and the social level. As Cialdini states, “Once we have made a choice or taken a stand, we will encounter personal and interpersonal pressures to behave consistently with that commitment. Those pressures will cause us to respond in ways that justify our earlier decision.”

Let’s say you make a New Year’s resolution to go to the gym three times a week at 6:00 am. Once you have made that decision, you will feel compelled to stick with it. This is the individual level — the pressure to keep a promise made to oneself.

This pressure will be even stronger at the social level — that is, if the promise is public and involves others: if you and a friend both agree to meet at the gym in the mornings, you will feel more committed to your resolution and more likely to follow through.

While inconsistency with ourselves might result in some guilt, inconsistency with others carries interpersonal risks. Inconsistency is seen as an undesirable trait and is associated with irrationality, deceit, and even incompetence; it can generate reactions of disappointment, anger, and confusion.

These risks create tremendous social pressure to remain consistent. Thus, people will strive to ensure that their behavior matches past decisions in order to avoid undue stress.

Because of these personal and social pressures, if you make people commit to something, it’s likely that they will try to follow through. The commitment does not have to be a large one. In fact, it’s often a small decision.

In the case of our fitness resolution, it could easily have been an off-hand, seemingly inconsequential comment made over a glass of New Year’s Eve champagne. Or, in the case of my parents, it was a dismissive response to my pleas for a dog. My father would reply, “Maybe when you’re older” in hopes I would forget.

I nagged further, “How much older?” and eventually my mother caved and said, “Maybe when you’re ten years old.” Every year thereafter I counted down the years remaining until they owed me a dog.

They kept true to their word, and, much to their chagrin, when I turned ten years old, I was the proud owner of a spaniel named Toby. He lived for 11 incredible years. And in a hilarious (but predictable) turn of events, my parents loved him.

Examples of Commitment & Consistency in UX

To take advantage of behavioral consistency, get your users to make an initial commitment to an activity you want them to engage in. The initial commitment you propose to the user has to be:

  • Low-stakes: the user should not have to give up much (personal data, money, time) to make it.
  • Easy: the interaction required to make it should be minimal.

Let’s look at the workflow of writing a review for Yelp.com. Because people can start writing a review without an account, this process seems both low-stakes (no personal data is shared with the organization) and easy (no work is required to create an account before reviewing, and the interaction to rate a business and to review requires only one click).

But as soon as the user begins typing the review, the form field presents motivational microcopy: Keep those fingers rolling, you wizard of words.

What seems like inconsequential text is actually an expert use of behavioral consistency: it reminds the user to stick to the commitment of writing a review and encourages her to make the additional commitment of writing even more.

Once people have finished their review, they are asked to commit (again) even more by creating an account. In fact, the review won’t be saved and posted unless they do so. This sequence of actions takes advantage of users’ previous commitment to post the review and of their aversion to data loss (“Well, I’ve already done all this work, I might as well sign up and not lose it.”).

Finally, once the review is posted, an invitation to write another one is displayed (That was fun. Let’s write another). The wording reminds users that a) they just wrote a review, and b) they can be consistent with their behavior and write more reviews for other businesses.

Yelp.com expertly uses microcopy to encourage users at various steps in the content-creation process.

When the Fitbit mobile app is first launched, it asks users to state their fitness goals. This task does not involve disclosing a lot of private information and is relatively effortless, since Fitbit’s target audience is made up of people interested in tracking their health-related habits.

Once these goals have been entered, they act as a commitment and are displayed on the user’s dashboard, together with the user’s progress toward the goals. This visual representation (combined with push notifications) serves as a reminder of the user’s commitment to these goals and makes it more likely that they will be accomplished.

Moreover, the commitment can be made public to a group of friends or health enthusiasts, thus allowing users to raise personal promises to a social level.

Fitbit uses visuals of past behavior paired with motivational copy to encourage users to a) wear their Fitbit more, and b) log their activity more often.

23andMe, a service which tests DNA to reveal genetic ancestry and health insights, uses the principle of commitment to encourage visitors to participate in questionnaires that gather data for its ongoing genetic research.

23andMe first presents users with a short preliminary health survey pertaining directly to their personal genetic report. This short survey is low-effort and low-stakes: it requires only a few minutes and is part of the service the user is paying for in the first place.

Once this short mandatory survey is complete, the system shows the user how many questions she’s answered as an initial milestone.

Then it provides the option to answer more survey questions by showing a link labeled Continue answering — giving people the opportunity to keep their commitment by providing even more data.

A progress bar indicates how many questions are unanswered. And the desktop site uses social proof to further motivate respondents by showing where they rank compared to other 23andMe participants. If you’re as competitive as I am, you can reach 357 questions and still be disappointed that 29% of the 23andMe participants answered more questions than you.

23andMe uses milestones, progress visuals, and the relatively small commitment of filling basic personal data to motivate more intense and involved survey participation from genetic-testing participants.

Implications for UX Practitioners & Stakeholders

Many mobile apps and websites require users to create an account before they can use the service. Often, this request backfires and does not create the effect the designers (or stakeholders) intended.

Even when this commitment is easy to make (thanks to social media and Google integrations), it is not a low-stakes interaction — personal information and accidental commitment to long-term relationships are at stake.

It takes time to build a relationship of experiential trust before a user feels comfortable sharing personal information.

Effectively facilitating and taking advantage of behavioral consistency is not just a matter of witty microcopy (though that can help) — it is a matter of interaction design.

It requires UX practitioners to have a good understanding of commitment levels, prospect theory, and loss aversion, and it demands an analysis of the decision architecture of each workflow.

In particular, here are some questions that practitioners must answer in order to reveal how much cognitive load, cost, and trust is required for each decision point in the interaction, from start to finish.

  • How many choices does the user have to make at each step?
  • How much information does the user need to make that decision?
  • What are the default choices?
  • What is the interaction cost (and sometimes the monetary cost) of that decision?

It also requires stakeholders and executives to reassess current customer-acquisition strategies, and maybe even business models, to understand what a low-stakes decision means at various stages in the user journey.

Organizations need to consider the user’s current level of trust: at the beginning of a relationship, when trust has not yet been established, even a small request such as entering an email address can be perceived as high-stakes; as trust in the organization increases, formerly high-stakes demands can come to feel minor.

Another factor that affects what low-stakes means is the perceived value of the organization’s offerings. A company with a strong brand and clear offerings can afford to impose somewhat higher-stakes commitments on its users, even at the beginning of the relationship.

As you’re designing the interaction flow, remember to offer users commitments that match their trust in your organization and the perceived value of your products or services. And focus on minimizing the interaction cost associated with these commitments: even an inconsequential request is likely to be rejected if it requires a lot of work.

Conclusion

Commitment and consistency are powerful motivators to increase engagement and persuade users to fulfill their goals. Designs that allow users to make a small, low-cost commitment will be more likely to convert customers than ones that make commitment a costly process.

An all-or-nothing design will deliver nothing from most users.

There are many questions that need to be answered to ensure we meet user needs at every step of the decision-making process, but it all boils down to earning the trust of our users and increasing the usability and the perceived value of our products and services.

Reference

Robert B. Cialdini, Influence: Science and Practice. Pearson Education Inc., 2009.

Source: https://www.nngroup.com/articles/commitment-consistency-ux/

Peer punishment promotes enforcement of bad social norms

Social norms are an important element in explaining how humans achieve very high levels of cooperative activity. It is widely observed that, when norms can be enforced by peer punishment, groups are able to resolve social dilemmas in prosocial, cooperative ways.

Here we show that punishment can also encourage participation in destructive behaviours that are harmful to group welfare, and that this phenomenon is mediated by a social norm.

In a variation of a public goods game, in which the return to investment is negative for both group and individual, we find that the opportunity to punish led to higher levels of contribution, thereby harming collective payoffs.

A second experiment confirmed that, independently of whether punishment is available, a majority of subjects regard the efficient behaviour of non-contribution as socially inappropriate. The results show that simply providing a punishment opportunity does not guarantee that punishment will be used for socially beneficial ends, because the social norms that influence punishment behaviour may themselves be destructive.

Moral, social and legal norms are crucial in sustaining the very high level of cooperation with non-relatives that is observed in human societies. Norms involve a commitment to behave in conformity with a rule, conditional on sufficiently many others sharing the commitment to that rule.

They also may involve a commitment among some norm followers to punish transgressions of the norm1,2,3,4. Punishment is likely to be especially important for the maintenance of norms that arise in social dilemmas, where there are conflicts between individual and collective interests.

Economics experiments using public goods games show that providing subjects with the opportunity to inflict punishment in a social dilemma promotes higher levels of cooperation1.

Although the losses created by punishment sometimes outweigh the gains of cooperation1,5,6,7, coordination of punishment8 and longer periods of repeated interaction1,9 make it very likely that cooperation will spread and benefit the group overall.

In these cases, punishment is likely to be used to promote norms of fairness, which have prosocial effects. Punishment is used to increase social welfare even at a personal cost to the punisher; hence, the term ‘altruistic punishment’ has been coined to describe these phenomena1.

Not all norms, however, are socially beneficial. For example, in some cultures, norms of sexual purity motivate punishment of transgressions, including so-called ‘honour killings’ of rape victims10,11,12.

Cultures of honour subscribe to norms that require violent retaliation for trivial slights, often leading to devastating escalations of violence13.

Even apparently benign norms of gift-giving have been identified as responsible for costly inefficiencies, amounting to billions of dollars in annual deadweight loss14.

This study aims to investigate whether punishment will be employed to establish socially costly norms in a paradigm that resembles earlier public goods experiments. In typical public goods games, some subjects appear to hold normative attitudes that require positive, equal contributions from all members.

These attitudes are likely to combine elements of fairness (‘we should all contribute equally’) and benevolence (‘by contributing I/we make others better off’). We vary the standard public goods design, however, by setting the social benefit from contributing to be zero or negative.

That is, although one individual’s contributions will benefit the remainder of the group, they do not make the group better off as a whole. In this setting, normative attitudes that require positive contributions will be potentially harmful to the collective.

We hypothesise that providing subjects with the opportunity to punish in such games will allow these potentially damaging norms to influence behaviour much more than they would in the absence of punishment.

Thus, groups provided with punishment opportunities will contribute more and this outcome will be mediated by the normative attitudes, held by at least some subjects, requiring positive contributions.

As existing studies have used a paradigm in which cooperation was beneficial, it is unknown whether punishment can be used to elicit destructive behaviour. On the one hand, it has been suggested theoretically that if punishment is sufficiently potent, it can institute any norm, no matter how foolish15.

On the other hand, it is also thought that psychological propensities to adopt and enforce norms have been subject to significant evolutionary pressures, suggesting there may be significant limitations on the range of possible norms16.

Although some existing evidence suggests that ‘altruistic’ punishment has a dark side6,17,18,19,20, our study is the first to provide experimental evidence that subjects will enforce a destructive norm with punishment.

In a (linear) public good game, players are endowed with a number of monetary units (MU), which they can either keep or invest. Invested monies are multiplied by a factor called the marginal per-capita return (MPCR) and every player in the group receives the multiplied amount.

If the MPCR is between 1/n and 1, with n being the number of players, there is a social dilemma, where the individually dominant strategy is to keep all one’s endowment, while social welfare is maximised if everybody invests all.
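
To make the payoff structure concrete, here is a minimal sketch in Python. The function name and the endowment of 20 MU are illustrative assumptions for this article, not details taken from the paper’s materials.

    def payoff(endowment, own_contribution, all_contributions, mpcr):
        # Linear public goods payoff: keep whatever you do not invest,
        # plus the MPCR share of everything the group invested in total.
        return endowment - own_contribution + mpcr * sum(all_contributions)

    # With n = 4 and 1/n < MPCR < 1 (say MPCR = 0.4), free riding dominates:
    print(payoff(20, 0, [0, 20, 20, 20], 0.4))    # free rider earns 44.0 MU
    print(payoff(20, 20, [20, 20, 20, 20], 0.4))  # full contributor earns 32.0 MU

Each unit invested costs the individual 1 - MPCR = 0.6 MU but raises group welfare by n * MPCR - 1 = 0.6 MU, which is exactly the tension that defines the dilemma.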

This is the environment where punishment has proven to be effective to enforce cooperation in previous experiments.

In this study we remove the social dilemma aspect from the game. In two treatments (P25 and P20) we set the MPCR to 0.25 and 0.2, respectively, where n = 4.

In the former, there is no welfare gain from investments; in the latter, investments actually lower the group payoff: every dollar contributed leads to a payoff of $0.80, divided equally between the four members.
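
The same arithmetic shows how the two treatments remove, or invert, the welfare gain. Continuing the sketch above with the treatment parameters given in the text:

    n = 4
    for mpcr in (0.25, 0.20):
        group_return = n * mpcr  # MU returned to the group per MU invested
        print(f"MPCR = {mpcr}: 1 MU invested returns {group_return:.2f} MU "
              f"to the group (net {group_return - 1:+.2f} MU)")
    # MPCR = 0.25 -> net +0.00 MU per MU invested: welfare-neutral (P25, N25)
    # MPCR = 0.20 -> net -0.20 MU per MU invested: destructive (P20, N20)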

In two baseline treatments (N25 and N20), subjects play the same games, but without a punishment opportunity.

We find that punishment significantly increases contributions, in both the neutral and the destructive environments.

A second experiment demonstrates that subjects have shared attitudes that it is appropriate to contribute more than five units and inappropriate to contribute zero.

This finding suggests that punishment in the first experiment is being used to enforce a social norm, even though in the present environment the effect of that norm is harmful, or at best neutral, with respect to group payoffs.

Eighty-three of 116 subjects (72%) in punishment treatments punished at least once. As hypothesised, contributions were higher in the punishment treatments than in the controls. In P25, the average contribution per subject, per round was 5.6 MU; in N25, without punishment, it was 1.6. In P20, the average contribution per subject, per round was 3.4; in N20, without punishment, it was 0.7. Both differences are statistically significant (one-sided Fisher’s two-sample randomisation test21, P25 vs. N25: p = 0.002, P20 vs. N20: p = 0.007).
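
For readers unfamiliar with the test: a two-sample randomisation test compares the observed difference between treatment means against the distribution of differences obtained by randomly relabelling the pooled observations. Below is a minimal Python sketch of the general idea, assuming group-level average contributions as the unit of observation; the paper’s exact procedure may differ.

    import random

    def randomisation_test_one_sided(a, b, n_perm=10000, seed=1):
        # Returns the proportion of random relabellings whose mean
        # difference is at least as large as the observed mean(a) - mean(b).
        rng = random.Random(seed)
        observed = sum(a) / len(a) - sum(b) / len(b)
        pooled = list(a) + list(b)
        hits = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
            if sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b) >= observed:
                hits += 1
        return hits / n_perm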

We report in Table 1 average punishments dispensed and received by subjects who contributed more or less than the mean amount contributed on a given round by their three co-players.

Consistent with earlier findings, most punishment is dispensed by high contributors and most punishment is received by low contributors1,22.

A regression model, described in Supplementary Table 1, supports this result.

Table 1 Average amounts of punishment per round dispensed and received by high and low contributors

In Fig. 1, we report time series of contributions for each of the four treatments. Contributions start out high, but in the non-punishment treatments they decay significantly. Similar to other public goods experiments, we find that punishment stabilises contributions at a significantly higher level22,23.

We also observe, consistent with earlier findings23, that higher rates of return lead to higher contributions (Fisher’s two-sample randomisation test, one-tailed p = 0.066 across the treatments without punishment, p = 0.071 across the treatments with punishment).

Attempts to determine whether the mere threat of punishment was sufficient to maintain high contributions were inconclusive (Supplementary Fig. 1).

Fig. 1 Time series of mean aggregate group contributions per round. a MPCR = 0.2: P20 treatment with punishment (orange), N20 without punishment (blue); b MPCR = 0.25: P25 treatment with punishment (orange), N25 without punishment (blue). (N20: n = 15; P20: n = 15; N25: n = 14; P25: n = 14)

Earnings

Contributions were destructive in the MPCR = 0.2 treatments; because contributions were higher in P20 than in N20, group earnings there were lower. Net of any expenditure on punishment, subjects in the punishment treatment earned 10.8 MU less on average over the course of the experiment.

Normative attitudes

One possible explanation for punishment behaviour—that subjects engaged in retaliatory counter-punishment for having been punished in earlier rounds—was precluded by the design of our experiment.

Subjects were not advised who had punished them on any given round and subject identifiers were randomly assigned every round, making it difficult to identify who was responsible for past punishment received.

A post-experiment survey asked subjects to explain why they penalised other players (first person punishment) and to indicate why they believed others may have penalised them (second person punishment).

In both punishment treatments, there was negligible evidence of retaliatory motives; the most commonly cited reasons related to fairness and to increasing contributions, as opposed to personal benefit or spite, consistent with our hypothesis that normative motivations were a significant factor (see Fig. 2, also Supplementary Table 2 and Supplementary Fig. 2).

Fig. 2 Relative frequency of reasons cited to explain punishment. First person reasons are responses to: ‘What was the main reason that you deducted points from the other members in your group?’ Second person reasons are responses to: ‘What do you think is the main reason that others may have deducted points from you?’ Solid arrows point to reasons cited significantly more frequently than the alternative, at the group level (two-tailed binomial test; red arrows p

Source: https://www.nature.com/articles/s41467-017-00731-0
