Your Assay Results May Be Wrong

There’s a weird kind of hypnotism that happens in scientific labs when an instrument is used to measure something. Researchers often take the data at face value. But that’s dangerous — the data are not assured to be accurate just because they come from an instrument.

The problem is that your assay is not just the instrument at the end. It’s your entire experimental process, from the moment you start your reaction or cell culture, to the first sampling event, to the assay kits you apply, to the fancy instrument at the very end. Every step along the way contributes potential error.

To be honest, making a good measurement is a lot like making a good macaron — it’s a super-simple recipe on paper, but really hard to get right. It takes practice, assessment, and tuning. And like baking a macaron without any practice, running an unqualified assay is almost certainly going to produce bad measurements. To be clear: the scientific definition of “bad” is measurements that do not have the precision and accuracy needed for your decisions or hypothesis test.

After 20 years in biotech R&D, I’ve seen bad assays used by far too many people, for far too long, with far too much negative impact on scientific progress.

Here is one such example. A scientific team decided to run a screening experiment on 30 biological samples they had engineered. Since it was a new assay process, they chose to replicate each sample measurement across 8 wells in a microtiter plate. This is what they got:

[Figure: screening results. Data analysis in JMP® using data captured, annotated and integrated with Riffyn SDE™]

Does that look right to you? No, it does not. What are the chances that all the high results would cluster in plate 1 the way they have? About 1 chance in as many atoms as there are on Earth. And look at the spread on those replicates — the error is almost 50% of the mean value. (The points in each column are replicates of one sample.) That is not good.
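To see just how unlikely that clustering is, here’s a quick back-of-the-envelope calculation. The post doesn’t give the exact plate layout, so the numbers below (12 samples per 96-well plate at 8 replicate wells each) are illustrative assumptions, but any plausible layout gives similarly tiny odds:

```python
from math import comb

# Back-of-the-envelope: if sample-to-plate assignment were random, what is
# the chance the k highest-reading samples all sit on the same plate?
# Plate layout is an assumption -- the post gives only 30 samples x 8 wells.
n_samples = 30    # engineered samples screened
per_plate = 12    # assumed samples per 96-well plate (96 wells / 8 replicates)
k_high = 12       # assumed count of "high" samples that landed on plate 1

# Hypergeometric probability: all k high samples drawn from plate 1's samples.
p = comb(per_plate, k_high) / comb(n_samples, k_high)
print(f"P(all {k_high} high readings on one plate by chance) = {p:.2e}")
# ~1.2e-08 with these numbers: the clustering points to a systematic
# plate-level effect, not luck.
```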

As bad as these results are, at least the extensive replication provided some valuable quality information. Imagine if you had not done all those replicates, or if you had measured only one microtiter plate of samples that day. How would you have known there was a problem? You might have taken the results at face value and wasted a lot of time on further experiments built on them. Unfortunately, this is often what happens in labs around the world: assays are not adequately replicated or qualified before they are used. This issue is at the core of the oft-mentioned reproducibility problems in science.

But fortunately, this particular team had the quality data, and consequently did not trust their results. They decided to investigate the assay. They re-ran it with a dose-response curve at seven known concentrations, with twelve replicates each. To assess whether there were problems with the microplate reader itself, they also scanned each replicate four times, at four different positions in the well.
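For concreteness, that qualification run amounts to a 7 × 12 × 4 crossed layout, which is what later lets scan noise be separated from well noise. A minimal sketch of the design table (concentration values are placeholders; the post gives only the counts):

```python
from itertools import product

# Enumerate the qualification design: 7 known concentrations x 12 replicate
# wells x 4 repeat scans per well. Concentration values are placeholders.
concentrations = [0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0]
design = [
    {"conc": c, "well_rep": w, "scan": s}
    for c, w, s in product(concentrations, range(1, 13), range(1, 5))
]
print(len(design), "readings in total")  # 7 * 12 * 4 = 336
```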

What did they find? First, they determined that there were no issues with the plate scanning — no variability there — the four repeat scans were practically identical. But their overall results were a mess. The dose-response curve (below) was not even monotonic, meaning the assay value doesn’t always increase with increasing standard reagent dose (so two different concentrations could produce the same assay value). The green fit line below shows the actual trend in the data; the red line is what the assay should have looked like if it gave a linear (and monotonic) response.

[Figure: dose-response curve. Data analysis in JMP® using data captured, annotated and integrated with Riffyn SDE™]
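A monotonicity check like the one implied by that fit line is easy to run on raw data. A minimal sketch, using made-up numbers in place of the team’s readings:

```python
import numpy as np

# Check whether the mean assay signal increases with every dose step.
# Doses and signals are made-up placeholders, not the team's data.
doses = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
mean_signal = np.array([0.12, 0.35, 0.31, 0.58, 0.55, 0.90, 0.87])

steps = np.diff(mean_signal)
for i in np.where(steps <= 0)[0]:
    print(f"Non-monotonic: signal falls from dose {doses[i]} to {doses[i+1]}")
if np.all(steps > 0):
    print("Response is monotonic across the tested doses")
# If the curve is not monotonic, two different concentrations can produce
# the same assay value, so a reading cannot be mapped back to a concentration.
```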

Even if the dose-response had been linear, the noise in the assay was excessive. The huge spread in replicates at each concentration means that one concentration is not resolvable from another. Digging deeper into the data with the variability chart below identifies the root cause: the assay wasn’t stable from well to well for replicates of the same dosing standard.

[Figure: variability chart. Data analysis in JMP® using data captured, annotated and integrated with Riffyn SDE™]
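The scan-versus-well comparison in their design boils down to a simple variance decomposition. Here’s a hedged sketch with simulated stand-in numbers (12 wells × 4 scans at one dose) that reproduces the pattern the team saw: tiny scan-to-scan variance, large well-to-well variance.

```python
import numpy as np

# Split the spread at one dose into well-to-well vs. scan-to-scan variance.
# The readings are simulated stand-ins, not the team's data.
rng = np.random.default_rng(1)
well_effects = rng.normal(0.0, 0.40, size=12)        # large well-to-well noise
scan_noise = rng.normal(0.0, 0.01, size=(12, 4))     # tiny scan-to-scan noise
readings = 1.0 + well_effects[:, None] + scan_noise  # 12 wells x 4 scans

scan_var = readings.var(axis=1, ddof=1).mean()       # within-well (scan) variance
well_var = readings.mean(axis=1).var(ddof=1) - scan_var / 4  # between-well variance
print(f"scan-to-scan variance: {scan_var:.4f}")
print(f"well-to-well variance: {well_var:.4f}")
# When nearly all the variance sits between wells, the plate reader is fine
# and the problem lies upstream: pipetting, mixing, or the assay chemistry.
```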

Obviously there were some fundamental problems. The team went on to solve them before they did any more experiments, and that saved them a lot of wasted time and scientific heartache in the months to come.

The bottom line here is really simple. Just like everything in life, there is no free lunch. If you want an accurate assay, you have to make it so. And in your chase for the next scientific breakthrough, it’s worth your time to do so.


If you’d like some advice on how to make your own assay better, drop us a line at advice@riffyn.com.


Process and experiment design were created, and data were captured and prepared with Riffyn.
Plots were created with JMP® Software. Copyright 2018, SAS Institute Inc., Cary, NC, USA. All Rights Reserved. Reproduced with permission of SAS Institute Inc., Cary, NC.

Timothy Gardner