Experimental economics: Results you can trust

Reproducibility is an important measure of validity in all fields of experimental science. If researcher A publishes a particular scientific result from his laboratory, researcher B should be able to follow the same protocol and achieve the same result in her laboratory. However, in recent years many results in a variety of disciplines have been questioned for their lack of reproducibility. A new study suggests that published results from experimental economics—a field pioneered at Caltech—are better than average when it comes to reproducibility.

The work was published in the March 3 online issue of the journal Science.

"Trying to reproduce previous results is not glamorous or creative, so it is rarely done. But being able to get the same result over and over is part of the definition of what makes knowledge scientific," says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at Caltech and lead author on the paper.

The study was based on a method previously used to assess the replication of psychology experiments. In that earlier effort, called the Reproducibility Project: Psychology (RPP), researchers replicated 100 original studies published in three top psychology journals. They found that although 97 percent of the original studies reported so-called "positive findings" (meaning a statistically significant change compared to control conditions), those positive findings were reliably reproduced only 36 percent of the time.

Using the same approach, Camerer and his colleagues replicated 18 laboratory experiments from papers published in two top-tier economics journals between 2011 and 2014. Eleven of the 18, roughly 61 percent, showed a "significant effect in the same direction as in the original study." The researchers also found that the sample sizes and p-values (a standard measure of statistical confidence) of the original studies were good predictors of replication success, meaning they could serve as indicators of the reliability of results in future experiments.
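
To make those numbers concrete, the sketch below shows the kind of bookkeeping such a replication exercise involves: scoring each replication as a success if it finds a significant effect in the same direction as the original, computing the overall replication rate, and checking whether the original p-values and sample sizes track success. The data, thresholds, and correlation check are hypothetical illustrations, not the authors' actual data or statistical method.

```python
# Minimal sketch with made-up numbers (not the authors' data): score each of
# 18 hypothetical replications as a success if it finds a significant effect
# (p < 0.05) in the same direction as the original, then check whether the
# original study's p-value and sample size track that success.
import numpy as np

rng = np.random.default_rng(0)

n_studies = 18
orig_p = rng.uniform(0.001, 0.05, n_studies)    # original p-values (hypothetical)
orig_n = rng.integers(30, 300, n_studies)       # original sample sizes (hypothetical)
rep_p = rng.uniform(0.001, 0.5, n_studies)      # replication p-values (hypothetical)
same_direction = rng.random(n_studies) < 0.8    # effect in same direction as original?

# Replication "success" = significant effect in the same direction as the original.
success = ((rep_p < 0.05) & same_direction).astype(float)
print(f"Replication rate: {int(success.sum())}/{n_studies} = {success.mean():.0%}")

# Simple correlations as a stand-in for the paper's predictive analysis:
# lower original p-values and larger samples should go with more successes.
corr_with_p = np.corrcoef(success, orig_p)[0, 1]
corr_with_n = np.corrcoef(success, orig_n.astype(float))[0, 1]
print(f"Correlation of success with original p-value:     {corr_with_p:+.2f}")
print(f"Correlation of success with original sample size: {corr_with_n:+.2f}")
```

With real data, the random draws would be replaced by the recorded results of each original experiment and its replication; the paper's own analysis is more sophisticated than the simple correlations used here.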

"Replicability has become a major issue in many sciences over the past few years, with often low replication rates," says paper coauthor Juergen Huber of the University of Innsbruck. "The rate we report for experimental economics is the highest we are aware of for any field."

The authors suggest that certain methodological practices in laboratory experimental economics contribute to this high replication rate. "It seems that the culture established in experimental economics—incentivizing subjects, publication of the experimental procedure and instructions, no deception—ensures reliable results. This is very encouraging given that it is a very young discipline," says Michael Kirchler, another coauthor and collaborator from the University of Innsbruck.

"As a journal editor myself, we are always curious whether experimental results will replicate across populations and cultures, and these results from multiple countries are really reassuring," says coauthor Teck-Hua Ho from the National University of Singapore.

Coauthor Magnus Johannesson from the Stockholm School of Economics adds, "It is extremely important to investigate to what extent we can trust published scientific findings and to implement institutions that promote scientific reproducibility."

"For the past half century, Caltech has been a leader in the development of social science experimental methods. It is no surprise that Caltech scholars are part of a group that use replication studies to demonstrate the validity of these methods," says Jean-Laurent Rosenthal, the Rea A. and Lela G. Axline Professor of Business Economics and chair of the Division of the Humanities and Social Sciences at Caltech.

More information: "Evaluating replicability of laboratory experiments in economics," Science (2016). DOI: 10.1126/science.aaf0918

