So, with no scientific basis at all you believe there is a 10% chance that people can detect disease by smell, something that has AFAIK never been proven for any disease? We don't go to the doctor and have him or her smell our armpits. Do you use a psychic? What we have here is an organization that is basing their research funding on math tricks that I, ten years out of undergrad, maybe didn't remember exactly as it was characterized but was able to see through.
There are so many things wrong in this comment it's hard to figure out where to begin.
"We don't go to the doctor and have him or her smell our armpits" is an appeal to authority. Just because doctors don't currently use test X does not mean that test X is useless. Every diagnostic test we use today was, at some point in the past, unknown and unused by doctors; it is scientific research that gave us those tests. And since you seem to be uninformed about the subject: there are in fact a number of things a doctor will smell when diagnosing you. Famously, phenylketonuria can be diagnosed by smell, and so can diabetes.
This comment also seems to reflect a fundamental misunderstanding of the scientific process. The whole point of scientific research--which requires funding, usually--is to figure out if a hypothesis is true or false. If you already know whether your hypothesis is true or false, you're not doing research, you're replicating results.
When you do preliminary research, it's because you don't have very good information about some particular subject. You're complaining about the shaky ground that they base their research funding on--but these scientists did the right thing. Because the hypothesis seemed improbable, they conducted a dirt cheap experiment. It's an experiment that you could have conducted yourself for $20.
> If you already know whether your hypothesis is true or false, you're not doing research
That's exactly what this is. People believe the claim 100% so we do a simple coin flip, and, yes, there it is. It's confirmed! No extraordinary evidence required for this extraordinary claim. Let the research money flow and the BBC reporting commence. If it were my money I would have another lab repeat the experiment.
You may want to revisit your stats, since there are no "math tricks" here. Even if you assume she knew that some sizable portion of the shirts came from AD patients, the p-val is quite low. If you assume every shirt was classified independently, so there's no knowledge to be gained from knowing how many other shirts she classified as AD, the p-val is ludicrously low. And if you go Bayesian, no matter how you slice the prior probabilities, it's still low.
Look at the various analyses in the threads. Mine was a frequentist analysis based on independent categorizations, which has a p-val of ~.0002. Others have posted more sophisticated frequentist and Bayesian analyses based on priors and the subject having advance knowledge of the number of ADs present.
But no matter what assumptions were made, no p-val was greater than .001, which is quite low for n=12 with a single test. Our generally accepted threshold is p<.05. She literally had a perfect score.
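For the curious, here's a quick sketch of the two frequentist calculations under discussion, assuming n=12 shirts, a 6/6 patient/control split, and a perfect score (numbers taken from this thread, not the original study):

```python
from math import comb

n = 12

# Independent 50/50 classifications: a perfect score happens by
# chance with probability (1/2)^12.
p_independent = 0.5 ** n
print(f"independent guesses: p ~ {p_independent:.4f}")  # ~0.0002

# If she knew exactly 6 shirts came from patients, guessing means
# picking one of C(12, 6) equally likely 6-shirt groups.
p_known_split = 1 / comb(n, 6)
print(f"known 6/6 split:     p ~ {p_known_split:.4f}")
```

Either way you're orders of magnitude below the usual p<.05 threshold.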
Also, saying "An actual test would need to allow any possible sample including those that had zero Parkinson's patients" indicates you don't understand experimental design. Splitting the data into equal groups maximizes your chance of detecting something when effect sizes are small, since sensitivity is driven by the smaller group. (P-values are hurt more by a small group than they are helped by a large one, which is why a 3/9 split is less powerful than a 6/6 split.)
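You can see the balanced-split advantage directly. Assuming the subject knows the group sizes and we score only a perfect classification, the chance of getting it right by pure luck is:

```python
from math import comb

# Chance of perfectly picking the k patient shirts out of 12 by luck,
# for an unbalanced (3/9) vs. balanced (6/6) design.
n = 12
p_perfect = {k: 1 / comb(n, k) for k in (3, 6)}
for k, p in sorted(p_perfect.items()):
    print(f"{k}/{n - k} split: perfect score by luck = 1/{comb(n, k)} ~ {p:.5f}")
```

A lucky perfect score is about four times more likely under the 3/9 design (1/220 vs 1/924), so the same result is weaker evidence there.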
Not only did she have a perfect score, but she "adamantly" corrected a mistake in the control group. That must have some impact on the Bayesian estimate too. Her correction is worth something, and her confidence in providing that correction is worth something.
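A back-of-the-envelope Bayes update shows why even a skeptic should move a lot here. Every number below is an assumption for illustration, not from the study:

```python
# Posterior probability that the ability is real, given the observed data.
def posterior(prior, p_if_real, p_if_chance):
    num = prior * p_if_real
    return num / (num + (1 - prior) * p_if_chance)

# Start skeptical: assume a 1% prior that she can smell the disease.
# A perfect 12/12 score is near-certain if the ability is real (assumed),
# but only (1/2)**12 under independent chance guessing.
after = posterior(0.01, 1.0, 0.5 ** 12)
print(f"posterior after a perfect score: {after:.3f}")  # ~0.976
```

Her vindicated correction would push the posterior further still, though quantifying by how much would require a model of her stated confidence.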