Hacker News
AI Can Detect Illnesses in Human Breath (nvidia.com)
210 points by jonbaer on June 28, 2018 | 74 comments



I read the paper: they had 11 patients with known cancer and trained the NN to detect volatile organic compounds in the breath from raw GC-MS (gas chromatography-mass spectrometry) data. From what I understand, the interesting and novel application is the NN learning to parse the data to detect the aldehydes they were interested in. From what I read, nothing in this paper involves testing patients where you aren't sure whether they have cancer. Lots of false positives as well. Furthermore, whether this type of breath testing is even useful in diagnosis is an open research question.

In my opinion, from a medicine standpoint this paper is pretty far from being applicable to patients, and may never be, although that is par for the course for a lot of medical research. I'm not smart enough to comment on the ML details.


I was thinking how things like this could revolutionize healthcare by reducing the need for doctors, who we never have enough of. But you know the saying, AI is always the thing we can't do yet? Meaning the goalposts shift, and stuff that was previously considered a goal of AI research is now just boring automation, and so AI is still a distant dream. Well it occurs to me that there is something similar with healthcare: healthcare is whatever is still expensive to provide. Regardless of how many life-saving treatments and inventions are adopted, there will still be some things that only hospital/clinic staff can help you with. And so we will always have the same bottleneck and people will always complain about access to healthcare, using this bottleneck as their barometer.


The beauty of AI in medical screening is that unlike driving, 90% accurate is still a valuable tool.

Imagine if your toilet said, "uh hey so your X levels seem unusual. This might be nothing but you should seek a doctor"


The problem here is that due to the base rate fallacy, accuracy percentages for medical tests are widely misinterpreted, so a "90% accurate" disease-screening toilet is a recipe for lots of unnecessary procedures and emotional trauma. Even at 99%, a positive hit for most diseases is still probably a false positive.
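To make the base-rate point concrete, here's a quick back-of-the-envelope Bayes calculation (the function name and the 1-in-1,000 prevalence are hypothetical, purely for illustration):

```python
# Probability that a positive screening result is a true positive,
# given the test's sensitivity/specificity and the disease's prevalence.
def positive_predictive_value(sensitivity, specificity, prevalence):
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# A "99% accurate" test (99% sensitivity and specificity) for a disease
# that 1 in 1,000 people actually have:
ppv = positive_predictive_value(0.99, 0.99, 0.001)
print(f"{ppv:.0%}")  # prints "9%": most positives are false positives
```

Even a test that's right 99% of the time produces about ten false alarms for every real case at that prevalence.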


That depends entirely on what the outcome of a positive screening result is. Screening doesn't really care about the base rate (yes, if you were going to start chopping bits off immediately, it would matter); it's just there to weed out the negatives.

Doctors currently manually screen for many cancers, over entire demographics. It takes up an extraordinary amount of time, takes decades of training, and still misses cancer.

A breathalyser or toilet can sample you twice a day. That can feed into a risk analysis that pushes you onto blood screening (again, largely automated chem detection, only nursing support required). If you're still setting off alarms at that point, you go to the specialist.

And that's the point. Whereas they'd have been seeing 100% of their demographic, these machines are doing layers of screening for them. They end up only seeing a tiny fraction, where their job shifts to final confirmation and treatment.

What is really important is that these machines cannot miss things. A nonzero false-positive rate is acceptable; false negatives are not, because they'll delay or prevent further testing.


> A breathalyser or toilet can sample you twice a day. That can feed into a risk analysis that pushes you onto blood screening (again, largely automated chem detection, only nursing support required). If you're still setting off alarms at that point, you go to the specialist.

The long term, personalised health view behind this is what's interesting to me.

Medicine has a lot of wide windows for what's normal for "people", but often the first sign of an underlying issue is that something that was normal for you at one end of a window begins to move, or change, in ways it hasn't before.

One sample of a blood test might tell you "that level is a little lower than average, but still not low enough we'd worry". But the historical knowledge that "I've actually always tended to be on the higher side of average for that measurement" changes the picture drastically.

Like you say, when it comes to the false positives of new tools like this, sure, a tool may be useless at giving you a one-time, 100%-accurate reading, but watching its trend over time may be valuable.
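A toy sketch of that personal-baseline idea (the readings and the z-score cutoff are invented): flag a new value that's fine by population standards but far from your own history:

```python
import statistics

# Flag a reading that's unusual relative to this person's own history,
# even if it's well inside the population's "normal" window.
def unusual_for_you(history, new_value, z_cutoff=3.0):
    baseline = statistics.fmean(history)
    spread = statistics.stdev(history)
    return abs(new_value - baseline) > z_cutoff * spread

history = [142, 145, 141, 144, 143, 146, 142]   # this person's usual readings
print(unusual_for_you(history, 143))  # False: typical for them
print(unusual_for_you(history, 128))  # True: may be "in range" for the
                                      # population, but a big shift for them
```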


That seems... ok?

It might even be a good thing. With enough regular feedback, we might emotionally adjust to the realization that "maybe you should get checked out" does not mean "OMG, I'm going to die".


There is a huge misconception about the value of screening in the general population. It doesn't do what most people think it does. Even with emotional adjustment it is entirely unclear that screening for things like cancer is ever good for the patient. Look at work by the oncologist Vinay Prasad: https://www.bmj.com/content/352/bmj.h6080


a) It's just not that easy to "everybody needs to adjust" -- people are different wrt. what causes anxiety, etc. It seems a priory quite plausbile that there's a significant genetic component to this.

b) "Get checked" may itself cause problems. For example, scans for breast cancer involve radiation, which may actually cause cancer. Biopsies can also be quite invasive. Obviously the risk is low, but depending on the exact numbers it might actually be better not to get screened until one is in a known high-risk group (where the base rate fallacy doesn't skew the results so much).


> It's just not that easy to "everybody needs to adjust"

Education exists for this.

> It seems a priory quite plausbile that there's a significant genetic component to this.

a priori

There's a very popular trend to imagine genetic components in every human behavior. It's not just incorrect, it's very dangerous.

> Scans for breast cancer involve radiation which may actually cause cancer.

That's easily handled using common practices: only do a CT scan if the breath *and* blood tests are positive.


No. Just... no.


And if it was massively affordable, portable, etc.

(On a lighter note I just can't help but think of a few scenes from a particular movie when we approach new automation fields... "This one goes in your mouth, this one goes in your ear, this one goes in your butt. No wait...")


(That would be Idiocracy, a movie by Mike Judge from 2006. I enjoyed it quite a bit, go see it if you get the chance.

https://www.imdb.com/title/tt0387808/)


I know I know. I've seen it too many times already. I was just trying to be coy.

But I second your motion to anybody who hasn't seen it.


I assume you've already seen this, but if not, it's an infomercial for a fictional product that does exactly that. https://youtu.be/DJklHwoYgBQ?t=279


>and stuff that was previously considered a goal of AI research is now just boring automation, and so AI is still a distant dream.

I think that's exactly right, which is what makes it particularly amusing to look back at Hubert Dreyfus's apocalyptic declarations that computers can't do things that resemble human-like judgment. One of those things was supposedly chess, which required "insight", which computers can't have. But once computers got better at it than humans, everyone kind of forgot about it and pointed at new examples.


I suspect we will only be satisfied when AIs play each other at an incomprehensible game of their own devising.


Similar things have happened. Two independent neural nets "spontaneously" developed their own communication protocol in an experiment. I'll see if I can find a link.



Humans, oddly enough, have a pretty insatiable appetite for not dying.

You can build someone a 10 bedroom mansion and then offer to give them a 20 bedroom mansion instead, and they might say, "Thanks, but 10 bedrooms is quite enough." But people generally won't say no if you can offer them a longer lifespan and/or an increase in quality of life.

Incidentally, if retirement age is fixed, increases in longevity lead to a higher proportion of society that is only consuming but not producing. Unless you want a decrease in standard of living, you have to compensate somehow, be it increases in efficiency, rapid population growth, or raising the retirement age. If you raise the retirement age, then medicine needs to help make sure most people are not only alive but can also actually be fit to work at age 70-80.
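Toy dependency arithmetic (the ages are made up) for the fixed-retirement-age point:

```python
# Fraction of a lifetime spent working: if lifespans grow while the
# retirement age stays fixed, this fraction falls.
def working_share(start_work, retire_age, life_expectancy):
    return (retire_age - start_work) / life_expectancy

print(working_share(20, 65, 80))            # 0.5625
print(round(working_share(20, 65, 95), 3))  # 0.474: longer life, same retirement
print(round(working_share(20, 75, 95), 3))  # 0.579: restored by retiring later
```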


The strategy of longevity is to stay young.

You do not live longer if you become older in a biological sense.

https://youtu.be/Q2Z9pxGwxEY?t=214


I don't think we're ever going to end up replacing doctors, but in the same way that tech accelerates other industries, AI should accelerate healthcare. Tying "expert systems" to "machine learning" could massively improve the pace at which Emergency Departments can triage patients, for instance, or provide doctors another useful input when diagnosing tricky conditions.


Perhaps replacing some of the low-end GPs who essentially just walk through a decision tree.


We could make an AI to do the hellish coding and insurer reimbursement interactions, to free them up to do medicine. Most "low-end GPs" are still extremely impressive intellects.


And that is a key tenant of capitalism. Cost of labor is what drives the true value of money. It is also why computers will never take all our jobs and a universal basic income doesn’t make any sense. This isn’t just healthcare, it’s everything.


Tenet


Sensitivity and specificity scores or go home.


Even if the false positives were high, couldn't it be used as a first pass to indicate more accurate testing?


Sure, but publish the numbers so we can decide for ourselves.


They have numbers in the paper[0] linked in TFA. They trained on 29 samples and tested on 12 samples. On those samples, all techniques (NN, SVM, CNN) had 100% true positive rates. The CNNs did best on false positives, giving on average 2-3 per sample.

Also, it is very important to note: they are comparing a CNN analysing a GC-MS[1] sample to a human expert analysing a GC-MS sample, which is basically a 1D plot of intensities of chemicals. This is not a comparison to a human smelling someone's breath.

Also, as far as I can see, they never compare to a GC-MS run with selective ion monitoring (SIM), which would be a lot faster for the human to analyse.

[0] (ResearchGate link, sorry): https://www.researchgate.net/publication/324921031_Convoluti...

[1] A gas-chromatograph mass spectrometer is a device which samples a gas and detects what chemical species are present. It's what they do at airports when they swipe that piece of fabric on your luggage and put it in a machine (GC-MS) to look for explosives. It's a nice tool, used extensively in labs worldwide, but expensive (usually a couple hundred thousand USD, depending on specifications).


Ah sorry, I was relying on the parent comment's statement that they hadn't published numbers. Hadn't actually checked myself. Glad that they did though.

That being said, those numbers seem extremely small. Reading the paper, I see that they attempted to augment the data a bit, but it still seems near meaningless given the tiny sample set.


Forty-one total samples? Um... it's not quite anecdata, but call me when the N is a bit larger.


Maybe obtaining and analyzing these samples is a more complicated process than we think? For example, acquiring and analyzing a malware sample is not equivalent to labeling or classifying an image.


And I'd hope it would err on the side of false positives rather than false negatives. A false negative reading could be disastrous.


Absolutely. Until then this is just marketing.


Does anyone know if TSA gate agents can see tumors, and if they've ever told anyone? Like, say, a golf-ball sized tumor in the neck? Would they have to be medically trained? Do they avoid or encourage hiring former radiologists?

There was an interesting story of a doctor who noticed cancer while watching TV and passed the word on. https://abcnews.go.com/GMA/Wellness/woman-appeared-hgtv-find...


There are no human operators for the body scanners anymore, due to privacy concerns. The images are analyzed by computer, and any anomalies are highlighted on a cartoon diagram for extra attention during the pat-down.

Also, if someone had a golf-ball-sized growth on the outside of their body, I'm sure they're already aware of it. (The body scanners can't see inside the body.)


> There was an interesting story of a doctor who noticed cancer while watching TV and passed the word on.

It's not hugely frequent, I don't think, but it's not unheard of for this kind of thing to happen.

Most of my family are in the industry, and just watching TV or whatever you'll occasionally hear a comment about noticing a tremor or a slight droop in someone's face that could be indicative of whatever.


Nope, millimeter wave scanners barely penetrate the upper skin layers, and I don't think they have the contrast required to differentiate even a skin tumor from normal skin.


>“Computers equipped with this technology only take minutes to autonomously analyze a breath sample that previously took hours by a human expert”

The AI is impressive, but I'm equally impressed that a human can sniff someone's breath for a few hours and diagnose cancer. Amazing.


"Didn't my receptionist tell you not to eat and drink only water for 24 hours?"


Also relevant: The woman who can smell Parkinson's disease: https://www.bbc.com/news/uk-scotland-42252411


Odd how the baseline mentioned in the article is better than the human average. Why wouldn't the baseline be dogs, or existing non-breath tests?


From what I can tell, dogs worked well in the laboratory, but not in the field. In the lab they usually had 1/5 or so positive samples, but in the field the positive rate was much lower depending on the cancer, the dog handler couldn't give positive reinforcement immediately, and different dogs have different success rates. https://www.livescience.com/61234-how-dogs-smell-cancer.html


Very cool! I wonder if they could sprinkle in some fake examples for the dogs so that they could give positive reinforcement more often.


I saw some studies recently stating that humans' sense of smell is actually as good as dogs'.


The baseline is not humans smelling breath, it's humans looking at a graph.


Marketing


“Computers equipped with this technology only take minutes to autonomously analyze a breath sample that previously took hours by a human expert,”

They make it sound as if the human expert was also sniffing peoples breath, trying to smell cancer


Is this possibly related to the phenomenon of dogs knowing someone is sick?


That was my first thought too.


Worth pointing out that humans can also smell disease in human breath - broad odours point to disease of particular organs (liver, kidney) and also to some specific diseases (diabetes).


So can animals.

Having had a family member who was epileptic, I can attest he often had a unique odor on his breath shortly before a seizure occurred. Now, there are people doing research into training dogs to detect that and alert that a seizure is imminent.


Is this why old people smell old?


I don't think we know exactly how much specific diseases contribute, but there is at least ongoing research on that, and it seems there are general changes in body odour composition: https://en.wikipedia.org/wiki/Old_person_smell


No, that smell is a result of body odor change, not breath.


This should surprise no one.

This 2016 study[1] built a device which could detect 17 diseases via breath, and I think that was a manually built algorithm.

It's well known that dogs can smell cancer too (there's a 2006 study but I can't find it now).

[1] https://pubs.acs.org/doi/full/10.1021/acsnano.6b04930


Are there any estimates about releasing this technology for general use? Or does the (US) FDA insist on slowing down this type of diagnostic technology a few decades, or can we expect to see these as kiosks in every clinic over the next few months?


The new Theranos Blockchain is already manufacturing diagnostic units.


It will be more than a few months, but my understanding is that the FDA is now one of the fastest agencies to give approval in the world. See: https://www.propublica.org/article/fda-repays-industry-by-ru...


You don't seem to understand the actual problems with screening. Try learning about the field before implying the FDA isn't doing their job: https://www.bmj.com/content/352/bmj.h6080


It's research... in England.

The original paper is here: http://andrea.soltoggio.net/data/papers/skaryszEtAl-CNN-GC-M...

At the moment they are essentially saying it's a promising tech that helps experts and cuts down on their errors. It helps with one part of the analysis. It looks like it's a long way from anything general-use, though it's on the path towards that.


This is interesting to see being used in the real world. Still wondering what types of cancer it can detect. Maybe throat cancer, who knows.


I read a few anecdotes that said that nurses who were familiar with cancer patients would subconsciously identify an odor that was associated with them.

Then they would smell it in the wild and be struck with sudden anxiety about whether or not they should tell the person to get checked, given the odd nature of the clue.


I misread the title and thought "I" instead of "AI". I was like, I'm clicking now!


That doesn't sound dramatically different from being able to diagnose things from a blood sample.


It does from the perspective of the blood giver. Many people are dreadfully afraid of needles, to the point of being unwilling to do preventative tests. It also reduces the knowledge needed to administer the test and possibly its eventual cost. No phlebotomists needed to exhale.


"...with better than-human average performance". Hmm. Is this a task that humans currently do well?


Soon, AI will smell your weakness and go in for the kill...


I'm getting strong Theranos vibes from this.


Like dismissing a brand new product prototype as being unlikely to be a big success, this is a very low-value comment, in that it can't be tested in the short term, and the penalty for being wrong in the long term is zero.

So you can comfortably make this sneer about every new medical innovation you hear about, losing nothing for being wrong but crediting yourself with intuitive powers if you're right, which you likely will be some day, the more so the more you repeat the move.

That said, for every scandal like Theranos, the likelihood of another one just like it diminishes (if regulators and investors are acting effectively and rationally), so it might be a while before your prediction comes up.

Let's hope so!


That was my first thought too. We'll be branded heretics, unfortunately.


Just know you are not alone.


I suffered from severe nasal allergies as a kid, and had a hard time smelling anything because my nose was stuffed. I realized that I had developed a sense of "smell" in my hand. I could actually feel the air and tell what it smells like.

For those who can't believe it, smells have a pH, humidity, heat capacity, temperature, density, and pressure to them. So if you can train your skin to be sensitive and concentrate on those things you can feel smells.

It was not long till I realized that people give off a certain odor. And those odors are often related to the germs on their body. And the food they ate. Sometimes I could even tell if the person was sick.

I never developed a method of detecting illnesses simply by waving my hand over a person unless I was acutely familiar with the sickness, but if we can develop machines to detect "smell" (a fact I did not know), then we can definitely develop machines that scan people as they walk through a door and tell what germs they carry.

Of course my dream powers are to tell what antigerm ointment each person should use.



