Not Every Drop of a Person’s Blood Is the Same, a Study Says (nytimes.com)
117 points by hampelm on Feb 24, 2016 | 47 comments



Question from an outsider (of medicine and of medical research): Why is this study new? I mean, I can understand why it is news (for the mass media)...but isn't this something that would have been tested long ago to a certain degree? At least to a degree in which, today, doctors are content that a venous draw is a statistically useful amount of blood to derive health results from? Wouldn't that have to be based on some study that purported to find the minimum volume of blood needed to reliably represent someone's health?

Not only does it seem like a very fundamental question to have already asked...it doesn't even seem like a very difficult study to do. It's not substantially longitudinal -- for every subject, you take several pinpricks and run the tests. Nor is it logistically difficult to manage.

So I get why it's news, in terms of Theranos and whatnot...but this has to have been something that was studied many times over many decades. Or is the NYT misinterpreting/overstating the significance, i.e. the Rice scientists found a previously undetectable kind of difference, one which, yes, technically shows that blood drops are different?


Microfluidics as a field is relatively new (last 10 years or so).

This study is at the "microtiter" scale, which is the scale that companies such as Theranos are trying to take advantage of.

Microfluidics is largely an industrial field rather than academic.

>Not only does it seem like a very fundamental question to have already asked...it doesn't even seem like a very difficult study to do.

Yeah, well, that's the difference between Silicon Valley and academia.

SV doesn't want to hear about results that invalidate their business model.


Hi, pathology resident here. Even with large-volume venous blood draws, you can change the values quite substantially. One of my professors likes to tell the story of how, just by pumping his fist as is usually directed by the phlebotomist, he was able to change his potassium level from 4 to 5 to 6 to 6.5. This is the difference between normal and starting to worry about heart problems. So it is far from shocking, at least to the experienced, that this is a problem for microfluidics.

This is essentially equivalent to a sampling error problem. A large venous sample (10 mL) is enough to get a pretty good average. Microliters of blood from any given location in the body are likely to be different from microliters somewhere else in the body. I seriously doubt anyone in a clinical lab would be surprised by these results.
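A minimal simulation sketch of that sampling-error point, purely for intuition; the parcel size, mean, and spread below are invented numbers, not values from the study:

    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend blood is made of 1 uL parcels whose local analyte
    # concentration scatters around the true mean (all numbers invented).
    TRUE_MEAN = 250.0   # e.g. platelets x10^3 per uL, roughly a normal value
    LOCAL_SD  = 40.0    # assumed parcel-to-parcel spread

    def measure(volume_ul: int) -> float:
        """A sample of N microliters just averages N parcels."""
        return rng.normal(TRUE_MEAN, LOCAL_SD, size=volume_ul).mean()

    drops  = [measure(20) for _ in range(1000)]       # ~20 uL fingerprick drops
    venous = [measure(10_000) for _ in range(1000)]   # ~10 mL venous draws

    print(f"single drop : sd across repeats = {np.std(drops):.1f}")
    print(f"venous draw : sd across repeats = {np.std(venous):.1f}")
    # The venous result scatters far less: pooling a big volume is the
    # physical equivalent of averaging away the drop-to-drop noise.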


> SV doesn't want to hear about results that invalidate their business model.

I promise you that academics not wanting to hear results that invalidate their models is the rule, not the exception.


otoh, academics love hearing results that invalidate other people's models :)


As someone who's lived with health professionals my whole life ... if you're smart and have a background in science or engineering, you're only about two or three questions away from stumping health-care providers.

The level of understanding required to build a thing is very different from the level of understanding required to patch a somewhat broken system.


That's a good analogy, but it bears noting that the reason medical professionals only have patch-level knowledge is not any ignorance or lack of inquisitiveness, but sheer breadth and lack of source code. There's a reason that where we used to have doctors we now have so many subspecialties; there's just that much to learn and discover, because we don't have the blueprints.


Yep:

- What are the units on blood pressure? 120 of what?

- Would it make a difference if I held my arm up while the blood pressure was taken?


This is missing the forest for the trees a little bit. The actual unit in blood pressure is relatively unimportant; it's so conventional that no one produces general medical tools that measure in other units. What is far more important is understanding what the different levels mean in the context of the presenting patient. Is 180 bad? When is it bad? Is it ever good? Is 50 bad? When is it bad? Is it ever good? Is 100 bad, &c? If it is bad, what can you do to make it good? Do you do the same thing if it is a bad 200 or a bad 20? What do you not do when it is a bad 200 or a bad 20? Which conditions, complicated by which medications, affect your intended treatment for a bad reading?

Meanwhile, over here in the dev world, our passionate mantra is "we shouldn't be judged poorly for 'everyone knows' shit we can just google"...


Fair points, so let me explain my reasoning there:

First of all, in the case from my past, the machine labeled it -- "mmHg". They could have read it off, had they expected things to have units that matter.

Second, it's part of the conceptual understanding of what blood pressure means -- that e.g. it's not some normalized percentage, that it indicates how much the fluid inside is pressing against the outside, that it's relative to atmospheric pressure, that we live in and expect a pressurized environment.

Third -- I mean, you read it out every day, wouldn't you ever wonder what it means? Imagine a Java dev that doesn't know what System is, only that it's the beginning of the stdout print statements they use.

Someone who merely knows that "180 bad, 120 good" has a disconnected understanding, like the expert who can literally do nothing more than plug numbers into an equation but not know what the numbers mean or whether you're measuring them correctly to be compatible with it. It's not enough.


Seriously? Even I knew the first one and unless I'm quite mistaken the second one can be felt by just raising your arm -- or, better, turning your head upside down.


It's only recently that people have started trying to create diagnostics from blood droplets. Back in the day the technology wasn't sensitive enough.


Ah that makes sense. I was thinking it from the wrong way -- i.e. the way Calvin's dad explains how they discover the weight limit for bridges -- no need to test from small to minimum-amount-needed when there's a certain, reasonable volume of blood that pretty much works all the time (and I'm assuming that's been tested to some degree). Though I'm still surprised blood-drop-variance wasn't just something studied frequently out of scientific curiosity and because it seems relatively easy.



The volume of venous blood for everyday blood tests is indeed well known, and was established decades ago. Take a look at your test tubes the next time you have your blood drawn. The required volumes are always clearly indicated.

This study is about novel tests based on minute volumes of blood. These are not routine in medical practice.


It's indeed not new; at least for some markers, it was known to be unreliable for a (relatively) long time: http://www.ncbi.nlm.nih.gov/pubmed/9365861


In general this doesn't seem like a huge surprise. Blood is reasonably homogeneous, but noise becomes an issue when you're looking for anything present in low concentration (signal close to the noise floor) or when small differences in concentration matter (the change from the expected value is close to the magnitude of the noise superimposed on it).

If noise is too high for a single drop, a venous draw is a much larger volume and theoretically equivalent to sampling many drops of blood—it's the physical equivalent to averaging samples to increase the SNR.

The authors note[1] that averaging may not be enough though, and that there may be an interesting difference inherent to fingerprick blood (possibly caused by their collection method):

"Our data also suggest that collecting and analyzing more fingerprick blood does not necessarily bring the measured value closer to those of the donor’s venous blood (Figures 1D and 2D). For example, donor B’s hemoglobin and WBC concentration were similar for venous blood and fingerprick in drop 1 but became less concordant with additional drops, while donor C’s fingerprick measures came closer to the venous measures with additional drops. These data may represent true differences between fingerprick and venous blood, or they may be the result of errors in collection (such as leaving the tourniquet on for too long during a venous draw). Further research is needed to determine how common these patterns are."

1. http://ajcp.oxfordjournals.org/content/ajcpath/144/6/885.ful...


Considering that the body regulates its temperature by controlling blood flow to the extremities, maybe it's not surprising that venous blood in particular is not homogeneous. As the body cools, blood flow to the hands is reduced. Venous blood is also impacted by movement, gravity, etc. Since there's no pulse in the veins, would it not make sense that the heavier constituents of the blood would remain as the lighter components "drain"?

I wonder if, much as red blood cells settle to the bottom of a test tube, one might expect differing concentrations of blood constituents in venous blood in the extremities, depending on the elevation, temperature, perfusion, etc. of the extremity?


As someone working in this field on both the product and academic fronts, a major concern I have with this study is the lack of work done to establish the clinical significance of these variations. The methodology is well controlled enough to indicate that a statistically significant drop-to-drop difference does indeed exist between venous and capillary samples, but what's missing is a detailed analysis of whether or not these differences would result in clinically different outcomes (a rough sense of that check is sketched below the references). From my work, the range for identifying an anemic, leukocyte spikes, etc. is large enough that the spikes in deviation in capillary samples ultimately become inconsequential. Furthermore, dozens of studies [1, 2, 3 are just a few examples] in the past have found essentially the opposite outcome. A discussion is necessary, but suggesting that all drop-based diagnostics will forever be inaccurate is both unfounded and dangerous given the growing importance of this field. If anyone has specific questions, feel free to drop me a line at ttandon[at]stanford[dot]edu

[1] http://www.hindawi.com/journals/isrn/2012/508649/ [2] http://www.ncbi.nlm.nih.gov/pubmed/23294266 [3] http://journals.plos.org/plosone/article?id=10.1371/journal....
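For anyone curious what I mean by clinical vs. merely statistical significance, here is a rough sketch. The anemia cutoff below is the commonly cited WHO threshold for men; the "true" hemoglobin and the drop-to-drop standard deviation are assumptions I picked for illustration, not figures from this paper:

    from statistics import NormalDist

    # Does drop-to-drop scatter actually push a patient across a decision cutoff?
    ANEMIA_CUTOFF = 13.0   # hemoglobin in g/dL, commonly used WHO threshold for men
    true_hgb      = 14.0   # patient's venous ("real") level -- assumed
    drop_sd       = 0.6    # assumed drop-to-drop standard deviation, g/dL

    # Probability that one fingerprick drop reads below the cutoff even though
    # the venous value is above it (treating the per-drop error as Gaussian).
    p_flag = NormalDist(mu=true_hgb, sigma=drop_sd).cdf(ANEMIA_CUTOFF)
    print(f"P(single drop flags anemia despite true Hgb = {true_hgb}): {p_flag:.1%}")

    # If this probability stays negligible across the clinically relevant range,
    # the drop-to-drop variation is statistically real but clinically moot.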


Perhaps that's not the goal of this study? I think there's benefit in knowing that there is a difference, even if it's just the ability to get further, more targeted studies funded now that a fundamental question has been answered. Now, instead of answering both whether there's a difference and whether it affects a specific aspect of blood testing, future studies can focus on the second aspect. I imagine that makes funding quite a bit easier to get.


Well, that is the end of Theranos. They should just return what money is left to their investors at this point. This explains why they could never get the tech right...


I don't think they will be stopped until they are either sued out of existence or people stop buying into it.


I don't know too much detail about Theranos, but is that really the case? The article says they needed to go to 6-9 drops; is that a dealbreaker for what they wanted to do?


Getting 6-9 drops of blood from a fingertip is not easy - you have to ask, "Is it really a benefit over a venous draw if it's much more difficult?" At some point I think it's easier to instead focus on super-fine needles, and on drawing less blood, for venous draws.


"Super-fine" cause mechanical haemolysis and will give inaccurate results for many routine blood tests


For anyone who isn’t a doctor / biologist: “Mechanical hemolysis” means the needle is bursting red blood cells, leaking their cytoplasm into the blood plasma.


Yes, that's true. I had thought about simply squeezing the finger to get some more but that appears to be a problem for the results too:

> Morris et al7 believe the higher variability of capillary blood compared with venous blood is due to the presence of extracellular fluid in capillary samples. In clinical practice, milking of the finger by insufficiently trained health care workers may result in even greater drop-to-drop variability than shown here.

http://ajcp.oxfordjournals.org/content/144/6/885.full

It's a very interesting problem. They didn't even see all the donors end up with the same average from drops as from a venous draw.


I seriously don't feel any pain 50% of the time I give blood, and when I do it is a very mild prick. Usually it's just that the rubbing alcohol didn't all evaporate.


Also, I see nothing in the study that precludes the possibility that some of the variability could be mitigated with statistical methods. (For some reason, the last time I posted a comment like this I was heavily downvoted by people claiming that the type of error in this case is somehow not amenable to a statistical solution, but as far as I could tell no convincing argument or evidence was given for that assertion.)
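As a back-of-the-envelope sketch of what a statistical fix could look like, assuming the per-drop error is purely random and independent (both assumptions, and the numbers are arbitrary):

    import math

    drop_sd   = 0.6   # assumed per-drop standard deviation for some analyte
    target_se = 0.2   # precision wanted for the averaged result

    # Standard error of an n-drop average is SD / sqrt(n), so:
    n_drops = math.ceil((drop_sd / target_se) ** 2)
    print(f"drops to average for SE <= {target_se}: {n_drops}")  # -> 9

    # Caveat: averaging only shrinks random error. If capillary blood differs
    # from venous blood systematically (the pattern the authors flag for some
    # donors), no number of averaged drops will close that gap.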


I think they can't make a strong claim about it, but some of the donors didn't seem to have the same results from an average of the drops as from the venous draw:

> Our data also suggest that collecting and analyzing more fingerprick blood does not necessarily bring the measured value closer to those of the donor’s venous blood (Figures 1D and 2D). For example, donor B’s hemoglobin and WBC concentration were similar for venous blood and fingerprick in drop 1 but became less concordant with additional drops, while donor C’s fingerprick measures came closer to the venous measures with additional drops. These data may represent true differences between fingerprick and venous blood, or they may be the result of errors in collection (such as leaving the tourniquet on for too long during a venous draw). Further research is needed to determine how common these patterns are.

Also:

> Morris et al7 believe the higher variability of capillary blood compared with venous blood is due to the presence of extracellular fluid in capillary samples. In clinical practice, milking of the finger by insufficiently trained health care workers may result in even greater drop-to-drop variability than shown here.

This may present more of a risk for at-home testing if you need to make sure that the person hasn't squeezed their finger at all.

Quite an interesting set of problems that I'd not really considered before.

Full article: http://ajcp.oxfordjournals.org/content/144/6/885.full


I wouldn't put it that bluntly. This study focused on large-molecule biomarkers for the most part. There are a HUGE number of possible smaller blood-based biomarkers that could very likely be normal in finger-prick blood.


Not the fun ones. Most of the small molecules are derivative indicators anyway. Any cancer indicators, many hormones, and other interesting molecules are proteins - and thus not statistically replicable in these tiny samples.


This is basically common sense for anyone with proper medical training, and it's why many clinical scientists and medical practitioners (read: peers) have been questioning Theranos from the beginning.


Something that's always perplexed me -- if we're talking about small amounts of blood, why finger-tips? This is such a sensitive area. Why not a prick on the elbow or the shoulder?


The skin is thin, easily accessed, and heals quickly.


I used to be a neuro tech, and I remember one patient coming in for a carpal tunnel test. She looked horrific - a gas heater had exploded and sprayed her a few years before, and all visible skin looked melted - she looked a bit like a skeletal-muscle model, as the melted skin had pooled in the divots between muscles. But what really struck me was that the only part of her skin that didn't look melted was the palms of her hands (and the inside of her fingers). They'd healed just fine, and the border between healthy palm and melted skin was like a knife edge.


If it's scientifically proven that a drop of blood can't be accurate, what is the alternative to going into the vein? Maybe you wipe down the wrist with alcohol, then put on a cuff-like apparatus that simultaneously takes 20 drops of blood.


It appears this depends a lot on what you are testing for. Fingerprick tests for CRP (a basic inflammation marker) seem to already be standard practice in hospitals and health clinics. The test machine is the size of a toaster and gives the physician direct results in a minute or two.

Also, for blood glucose, which diabetics need to monitor closely, there is some buzz now around spectrographic (IR/UV) techniques, meaning you don't even have to puncture the skin. That would be huge. Diabetics actually complain more about fingerprick tests for glucose than about insulin injections, which sounds very counter-intuitive to non-diabetics.

What glucose and CRP have in common is that they are small. TFA talks about tests for white blood cells, platelets or HIV, which are much larger. If you compare white blood cells to glucose, they have three orders of magnitude larger radius, so nine orders of magnitude larger volume.

That's like the difference in volume between a raindrop and a blue whale. No wonder different mechanisms may apply.
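For anyone who wants the arithmetic behind that spelled out, it's just the cube law on the radius ratio the parent quotes:

    # Radius ratios get cubed when you compare volumes.
    radius_ratio = 1_000            # three orders of magnitude in radius
    volume_ratio = radius_ratio ** 3
    print(f"{radius_ratio:,}x the radius -> {volume_ratio:.0e}x the volume")  # 1e+09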


> Also, for blood glucose, which diabetics need to monitor closely, there is some buzz now around spectrographic (IR/UV) techniques, meaning you don't even have to puncture the skin. That would be huge.

It would be huge, yes. But it has been "five years away from being available" for about 25 years. Glucose has a much smaller signal than the Hb / HbO2 signal used for pulse oximetry.

> Diabetics actually complain more about fingerprick tests for glucose than about insulin injections, which sounds very counter-intuitive to non-diabetics.

Most non-diabetics think that "injecting insulin" involves hitting a vein. That would be far more painful. Instead, we inject into subcutaneous tissue; and the needles we use are absolutely tiny. It's worth noting that it's very unusual to get any bleeding from an injection site.


Ugh, that sounds absolutely horrible. I'd rather take a venous draw.


[deleted]


Metrics tied to the primary purpose of blood circulation, oxygen/CO2 and energy delivery (to which you allude), aren't the parameters they found fluctuated:

>single fingerpricks on multiple subjects varied substantially on results for basic health measures like hemoglobin, white blood cell counts and platelet counts.

It would be interesting to see what sort of variance they observed.


I'm fairly sure there is an implied "(collected at the same point of the body)" after "every drop of a person's blood". In which case your argument, while elegant, is beside the point.


Finally! I wondered how people actually know that the genetic code is the same in every cell. How would you prove that? What if everyone is a chimera to some extent? We are just getting started understanding epigenetics.


This study doesn't address genetics. Rather, it measures amounts of soluble blood proteins.

For all intents and purposes, your DNA is the same in every cell. We know this because we can pool DNA from multiple cells and sequence them. We see orders of magnitude more variation from sequencing error than true intra-isolate nucleotide variation.

Yes epigenetics exists, and it influences our development and habits, but it's not going to change the content of your DNA.


It's not the same. Telomeric regions shrink over time. Viruses inject their own DNA into host genomes. Minor damage piles up over the years.


Beyond even random changes and telomere shrinkage, VDJ recombination actively splices genes in B-cells, and somatic hypermutation further modifies certain regions. Two sampled B-cells that share a common ancestor will likely have some genomic differences.


For additional takes on this, there was some discussion of this result yesterday, based on a post of a direct link to the paper: https://news.ycombinator.com/item?id=11159526



