In this interview with a Berkeley scientist, the interviewer asked,
“We hear that science should be more reproducible. What exactly does that mean?”
The scientist’s reply?
“The problem is that this does not have an easy answer.”
The scientist was reacting to a decade of voluminous and often conflicting writing on scientific reproducibility, which has made the answer to this question seem elusive and difficult. But in fact, there is a very clear answer to “what is reproducibility?”—one that has been thoroughly developed by brilliant minds over the past 70+ years, and is routinely practiced around the world every day.
The theories and practices for performing reproducible “anything” come from the well-developed fields of Measurement Systems Analysis, Statistical Process Control, Design of Experiments, and Lean. They are often packaged as toolkits in the form of well-known methodologies including Six Sigma, DMAIC, and Quality by Design. The literature describing these methods is extensive—so extensive, in fact, that it can intimidate someone who just wants to take a few practical steps to improve the quality and reusability of their research.
The good news is that you can pick almost any of these practices or toolkits as a starting point and make rapid advances. You don't need to study them all — the important thing is to get going on something. These methods are well worth learning and applying: they will assuredly improve your pace of discovery and the effectiveness of your collaborations, as they have for many people and industries over the past decades.
To begin your journey, we recommend the following initial learning path and references:
(1) Three quick reads to motivate why and how:
Your science is only as good as your ability to measure outcomes. Why and where should effort be placed in measurement system development?
If you test tens or hundreds of samples in a single experiment, it’s statistically guaranteed that you’ll find false-positive results in every experiment. So what can you do about it?
Researchers often take assay data at face value, assuming it’s accurate because it came out of an instrument. Here we explain why that can be dangerous, and how assay validation can make your science faster and better.
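The multiple-testing point above is easy to see for yourself with a short simulation. The sketch below (in Python, assuming NumPy and SciPy are installed; the data are made up) runs 100 t-tests where no true effect exists, so every “significant” result is a false positive. At a significance threshold of 0.05, roughly 5 of the 100 comparisons will come up positive by chance alone, and a simple Bonferroni correction shows one way to rein that in.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, alpha = 100, 0.05

# Simulate 100 independent comparisons where NO true effect exists:
# both groups are drawn from the same normal distribution.
pvals = []
for _ in range(n_tests):
    a = rng.normal(size=20)
    b = rng.normal(size=20)
    pvals.append(stats.ttest_ind(a, b).pvalue)
pvals = np.array(pvals)

naive = int((pvals < alpha).sum())                  # uncorrected "hits"
bonferroni = int((pvals < alpha / n_tests).sum())   # Bonferroni-corrected

print(f"Uncorrected 'significant' results: {naive} of {n_tests}")
print(f"After Bonferroni correction:       {bonferroni} of {n_tests}")
```

Bonferroni is the bluntest correction available; false-discovery-rate methods such as Benjamini–Hochberg are less conservative and usually a better fit for screening experiments.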
(2) Douglas C. Montgomery, George C. Runger (2013) Applied Statistics and Probability for Engineers, 6th Edition, John Wiley & Sons.
An all-in-one treatment providing a practical introduction to probability and statistical testing, design of experiments, and statistical process control. It is richly developed with motivating examples, illustrations, and both practical and deeper theory, presented at a mathematical level appropriate for applied scientists and engineers.
(3) Douglas C. Montgomery (2012) Introduction to Statistical Quality Control, 7th Edition, John Wiley & Sons.
A deep dive into the methodologies of statistical process control, adding details and methods not covered in Montgomery and Runger above. You may never need to go here except as a reference.
If you want to get your hands on real data and real practice of these methods, I suggest you try some of the wonderful interactive statistical software packages out there. Some are free (like R or SciPy) but tend to require more expertise and programming skill. Commercial applications like JMP or Minitab are often a better place to start because they are point-and-click easy and extraordinarily powerful. My personal recommendation is to try JMP software as an interactive way to explore these methods. It has wonderful tutorials that get you exploring and seeing results without needing to be a math expert. Playing with these methods in software helps make them “real”, and will greatly improve understanding. If you are a student or postdoc, you can usually get very low-cost access to JMP through your university.
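If you take the free route, even a few lines of Python are enough to try a classic statistical process control tool: an individuals (X) control chart with 3-sigma limits. The sketch below uses made-up assay measurements and estimates short-term variation from the average moving range, the standard approach described in the SPC texts above (the data and variable names are illustrative, not from any real assay):

```python
import numpy as np

# Illustrative assay measurements from 20 runs (simulated, not real data)
rng = np.random.default_rng(7)
measurements = rng.normal(loc=50.0, scale=2.0, size=20)

# Individuals (X) chart: estimate short-term sigma from the average
# moving range; d2 = 1.128 is the standard constant for subgroups of 2.
center = measurements.mean()
moving_range = np.abs(np.diff(measurements))
sigma_hat = moving_range.mean() / 1.128

ucl = center + 3 * sigma_hat   # upper control limit
lcl = center - 3 * sigma_hat   # lower control limit

out_of_control = (measurements > ucl) | (measurements < lcl)
print(f"Center: {center:.2f}, UCL: {ucl:.2f}, LCL: {lcl:.2f}")
print(f"Points out of control: {int(out_of_control.sum())}")
```

Points falling outside the limits signal that something in the process has changed and is worth investigating; a dedicated package like JMP or Minitab adds the plotting, run rules, and capability analysis on top of this same arithmetic.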
You can also give Riffyn Nexus a try as a system to capture your experiment designs, assay processes, and experimental data in a form ready for reproducible science. Riffyn Nexus provides a visual “blueprint” of your scientific methods, and automatically prepares your data for statistical analysis and data mining. Riffyn Nexus blueprints are one realization of the recently proposed idea of “preproducible” scientific reports. It also integrates with JMP, Spotfire, Tableau, R, Python, Jupyter, and just about any other scientific software, allowing scientists to easily assess their results using an open ecosystem of computational tools. It's free for individuals via our Open Access program.
As professional innovators, we tend to assume every new challenge exposes an untrodden path. But that is often not the case, particularly when it comes to reproducible science. We just need to open our hearts to realize that keys to the future may actually lie in the past. Combining the long-standing principles of measurement systems analysis with modern digital tools can turbo-charge our research, increase the quality of scientific data, and elevate the integrity of scientific outcomes.
If you haven’t used these methods before, there is no time like the present to give them a try!
Tim Gardner is the Founder and CEO of Riffyn. He was previously Vice President of Research & Development at Amyris, where he led the engineering of yeast strains and process technology for large-scale bio-manufacturing of renewable chemicals. Tim has been recognized for his pioneering work in Synthetic Biology by Scientific American, the New Scientist, Nature, Technology Review, and the New York Times. He also served as an advisor to the European Union Scientific Committees and the Boston University Engineering Alumni Advisory Board. Tim enjoys hockey, running, mountain biking, and being beaten by his sons in almost everything.