Study Design III

The study we’ll be examining is

Levine, E. E., & Schweitzer, M. E. (2014). Are liars ethical? On the tension between benevolence and honesty. Journal of Experimental Social Psychology, 53, 107-117.

One of the first items of note is that if you read this paper, you’ll find that it contains 3 studies. This is not unusual in research in such sciences. So what were these studies? Well, the researchers were interested in whether dishonesty can be judged as moral when the lie benefits someone else — in other words, whether benevolence can outweigh honesty in judgments of moral character. As in many experiments in this area, the researchers used game theory to help design their experiments. Specifically, they used what they called The Number Game (a variation of another “game” used in such research called The Deception Game).

The actual game involves two players, a Sender and a Receiver. The Sender is told a number (1, 2, 3, 4, or 5) and sends a number (not necessarily the true one) to the Receiver. The Receiver then reports a number, knowing only what the Sender told them and that the reported number determines the payoff. In study 1, participants watched one of two possible fake games (they didn’t know the games were fake). In both games, if the Sender told the truth (sent the number they were told), the Sender’s payoff was $2 and the Receiver got $0. However, if the Sender lied, the payoff was $1.75 for the Sender and $1 for the Receiver. In other words, by telling what the researchers called an “altruistic lie,” the Sender’s dishonesty meant the Receiver received money while the Sender received only slightly less than they would have had they told the truth.
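To make the study 1 payoff structure concrete, here is a minimal sketch in Python (the dictionary layout, names, and printout are mine, not the authors’; the dollar amounts are those described above):

```python
# Study 1 "Number Game" payoffs, keyed by the Sender's choice.
# Each value is (Sender payoff, Receiver payoff) in dollars.
study1_payoffs = {
    "truth": (2.00, 0.00),  # Sender sends the number they were actually told
    "lie":   (1.75, 1.00),  # "altruistic lie": Sender gives up $0.25 so the Receiver gets $1
}

for choice, (sender, receiver) in study1_payoffs.items():
    print(f"{choice:>5}: Sender ${sender:.2f}, Receiver ${receiver:.2f}")
```

The point of this structure is that lying costs the Sender a little while benefiting the Receiver, which is what makes the lie “altruistic.”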

One group of participants watched a game in which the Sender told the truth, and the other group watched one in which the Sender told an “altruistic lie.” All participants were then asked to rate the Sender’s honesty, benevolence, and moral character on a seven-point scale (ratings went from 1 = “Not at all [honest/benevolent/moral]” to 7 = “Extremely [honest/benevolent/moral]”). The researchers found that participants who watched the lying condition rated the Sender as more moral than did participants who watched the honest Sender.

Study 2 was quite similar, but involved extra conditions. First, the researchers added two more fake “games.” In one, the Sender could lie and receive $2 while the Receiver got nothing; in the other, the Sender could tell the truth and get $1.75 while the Receiver got $1. Additionally, all games involved a 25% chance that the message “sent” would be overridden by the computer sending it. This allowed the researchers to compare responses from participants who, e.g., “observed” a Sender try to lie to get the $2 only for the lie to fail because the computer sent the true number, and likewise for the honest Sender (in both payoff structures: the one from study 1 and the payoff condition added for this study). In other words, the researchers could now examine intentions separately from results (because, in the real world, just because somebody is dishonest so that they will come out on top doesn’t mean their lie will work). The researchers were careful to ensure participants understood the game by asking questions to test rule comprehension. Participants again were asked to rate the Sender, and the researchers found that intention didn’t seem to matter.
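A rough sketch of how the 25% override separates intention from outcome may help. The function and variable names here are mine, and the assumption that an overridden message becomes the opposite of what the Sender chose is my reading of the description above:

```python
import random

OVERRIDE_PROB = 0.25  # chance the computer overrides the Sender's chosen message

def run_round(sender_intends_to_lie: bool) -> dict:
    """Simulate one round of the study 2 setup: the Sender picks a message,
    but with 25% probability the computer overrides it, so what observers see
    (the delivered message) can diverge from what the Sender intended."""
    overridden = random.random() < OVERRIDE_PROB
    # Assumption: an overridden message is the opposite of the Sender's choice,
    # e.g. an attempted lie "fails" because the computer sends the true number.
    delivered_is_a_lie = sender_intends_to_lie if not overridden else not sender_intends_to_lie
    return {
        "intended_to_lie": sender_intends_to_lie,
        "overridden": overridden,
        "delivered_message_was_false": delivered_is_a_lie,
    }

# Observers can then be grouped into the four intention-by-outcome cells,
# e.g. "tried to lie, but the true number was sent."
rounds = [run_round(random.choice([True, False])) for _ in range(1000)]
```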

So far, so good. The researchers were careful to ensure that the participants knew what to do, the games the participants observed actually allowed them to rate moral character, they included other variables that could be used to single out specific aspects of moral character (e.g., honesty wasn’t just compared with moral character but also with benevolence, and with even more dimensions in study 2), and so on.

The last study had a problem. Normally, in this kind of research you want a control group. The prototypical control group (the one most people are familiar with) comes from medical research: the placebo group. Of course, most research doesn’t involve medicine, but often enough the control group is still a group that didn’t receive some treatment (e.g., researchers might want to compare performance on a test after a training course, and the control group would just take the test without training). Here, the researchers used that kind of logic by including one condition in study 3 in which the payoffs were the same regardless of whether the Sender lied or told the truth. The problem, however, is that this isn’t a control.

Why isn’t it a control? Because the entire study design revolves around whether dishonesty can be judged as moral if it is done for the “right reasons,” and in all 3 studies all participants had to rate the Sender along the same dimensions (moral character, honesty, etc.). In other words, the rating forces the participant to think about the Sender in moral terms. However, in this “control” condition the Sender has absolutely no reason to lie. If the Sender tells the truth, the participants who watched this game have to rate the moral character of a Sender who did nothing but follow the rules. If the Sender lies, participants have to rate somebody who seems to be a pathological liar. Either way, participants aren’t rating this Sender the same way Senders in the other study 3 conditions are rated. The honest Control Sender simply did what they were told and got the money as promised, which provides no basis for judgment (they were honest, but honesty meant money, so there was no motivation to lie, either to get more money or to have the Receiver get money). Basically, the honest Control Sender doesn’t give the participants anything with which to really determine whether they are moral or not, while the lying Control Sender isn’t just lying but doing so for no reason and thus can’t be compared to the honest Control Sender. Finally, neither can be accurately compared with the other conditions in study 3 (or the other two studies), because participants aren’t rating the same kind of behavior.
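To see why this condition gives participants nothing to judge, here is a hypothetical sketch (the control dollar amounts are placeholders; the point is only that the payoffs were identical whichever message the Sender sent):

```python
# Hypothetical payoff tables for two study 3 conditions.
# Each entry maps the Sender's choice to (Sender payoff, Receiver payoff).
study3_conditions = {
    "altruistic_lie_possible": {"truth": (2.00, 0.00), "lie": (1.75, 1.00)},
    "control":                 {"truth": (2.00, 0.00), "lie": (2.00, 0.00)},  # placeholder amounts
}

def lying_changes_anything(payoffs: dict) -> bool:
    """The crux of the critique: lying only carries moral weight here if it
    changes someone's payoff relative to telling the truth."""
    return payoffs["lie"] != payoffs["truth"]

for name, payoffs in study3_conditions.items():
    print(name, "- does lying change the payoffs?", lying_changes_anything(payoffs))
```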

The results support this conclusion. The Control Sender who lied received an average honesty rating close to the middle of the scale (around 3 on the 1-7 scale). The one who told the truth received only about a 5, which is surprisingly low given that there was nothing dishonest in this Sender’s actions. Additionally, the Control Sender who lied just for the sake of lying was judged to be only slightly less moral than the Control Sender who told the truth. This suggests that participants couldn’t figure out how to evaluate the Control condition.

There is one other problem with study 3, but I think I’ve described enough detail about the kinds of factors that go into design, and how they can go wrong, to stop here.
