Despite calls to fund reproducibility studies, resources would be better spent on tools for efficiently collecting and sharing experimental protocol details and metadata, so that studies can actually be compared.
At a meeting of the American Society for Cell Biology in 2012, I sat in a packed meeting room. The speaker was Glenn Begley, author of a new article reporting that the results from dozens of academic research papers had failed to reproduce despite considerable effort in his company's labs. And these weren't just any research results: they were apparent breakthroughs published in prominent research journals.
The feeling in the room was that this was evidence of an emergency of epic proportions, a crisis of irreproducibility in science. Discussion turned to the idea that perhaps a third party should be required to check the reproducibility of all studies before the results get published. The line to ask questions was long, so I kept my seat. Finally, a colleague for whom I had great respect as a leader in the field asked the rhetorical question that was on my mind: “And who checks the checkers?” In other words, “reproducible” is not the same as “correct.” Maybe the “reproducers” would get it wrong. Or maybe both studies were flawed. Or maybe there were enough uncontrolled variables that the two studies were actually performing different experiments…