It’s part of a cheeky joke. There’s an adage that if you really need to find a correct answer, go online and confidently give the wrong answer. More people will jump on to tell you you’re wrong than would answer the question if you asked.
That’s called Cunningham’s law. The joke (which is actually quite funny) is that GP gave the wrong name for it and proved it.
It wasn’t clear and I understand the confusion, but welcome to the joke, my friend. :)
Yeah, I think that’s the sentiment. I do wonder how true it can be, however.
I wonder how it is in various states, but at least where I live the actual plumbers get licensed, not the plumbing company. So what can they do to stop it? NDAs? Non-competes? The state also takes a dim view of such things until you are paying someone a significant amount of W-2 money.
SEO? Brand recognition?
Well anyway, a few months ago I shared a table at a restaurant with someone who worked at a family office and was buying up landscaping companies. I asked some questions in a polite and friendly manner; nothing too pointed or invasive. But I came away with the idea that there is a lot of money out there looking for something to do, and not a lot of good ideas.
Creating ever more onerous/costly/time-consuming requirements for lengthy apprenticeships or other licensing gatekeeping is one common way to restrict supply, all while appearing to be serving the public interest.
"where I live the actual plumbers get licensed, not the plumbing company."
Which then leads to plumbing shops where the licensed plumber doesn't even work there and just rents out the license to the business so it can operate. Plumbing certs are a complete pyramid scheme.
You don’t check the organs. You check the process by intermingling known HIV+ samples and checking whether they are detected.
> Checking means re-testing, so might as well get rid of outsourcing.
Thing is you need to do QA on the testing system no matter what. Doesn’t matter if it is performed by contractors, in house staff or little grey aliens. If you are not doing QA you won’t know if the testing is done correctly or not.
In this particular case you're saying you need to test the organs once at the outsource place and then again at the hospital? Why not just get rid of outsourcing then?
No, that is not what the parent said. "Check and verify" can come in different forms and flavors, e.g. having some samples (not all) checked by another lab, asking for standards and inspections performed by 3rd parties, asking for and checking documentation... hell, how do you think anybody could work with suppliers?
> eg. having some samples (not all) checked by another lab,
I don't think that is useful at all in the case of rare diseases. You would just get two reports saying that the random sample is free of HIV.
Much better would be to send some known control samples: make sure that some of the samples are known HIV+, and then check whether the supplier can tell which ones they are.
You can still do this kind of audit, but you need to test a statistically significant number of samples in your "spot check" such that you know some of them will be infected. The number will vary depending on the incidence of the particular type of infection, but this is data that should be available.
I agree that sending control samples can also be effective, though. But if you need to send the whole organ to the test lab (and not just a small tissue sample), you probably don't want to be wasting healthy organs by infecting them. Better to just wait until you have an organ that's known to be infected already.
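To make "statistically significant" concrete, here's a minimal back-of-envelope sketch. The incidence figure (~0.4 per 1000, the Brazil number cited downthread) and the target probabilities are assumptions for illustration:

```python
import math

# Assumed probability that a random donor is truly positive
# (~0.4 per 1000, per the Brazil incidence figure cited downthread).
incidence = 0.4 / 1000

def samples_needed(target_prob):
    """Smallest n such that P(at least one true positive among n) >= target_prob."""
    # P(no positives among n samples) = (1 - incidence)^n
    return math.ceil(math.log(1 - target_prob) / math.log(1 - incidence))

for p in (0.5, 0.9, 0.99):
    print(f"P >= {p:.2f}: need {samples_needed(p):,} samples")
```

Even a 50% chance of the spot check containing one true positive takes roughly 1,700 samples at that incidence, which is worth keeping in mind when sizing the audit.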
> But if you need to send the whole organ to the test lab
Why would you need to do that? Realistically the sample needed here is a small vial of blood from the organ donor’s body.
> You can still do this kind of audit, but you need to test a statistically significant number of samples in your "spot check" such that you know some of them will be infected.
Nah. It really doesn’t work. The problem is that HIV is very rare. (HIV incidence per 1000 population, adults 15-49, in Brazil is between 0.34 and 0.45[1].)
Let’s be ultra conservative and set the “spot check” rate at 100%. That is, you are sending samples from every single body to two labs. Because of the low incidence rates you would still expect hundreds and hundreds of those samples to come back negative from both labs. This might work if you somehow had a “gold standard” lab you trust and another “less trusted” lab. But in reality there is no such thing as a “gold standard” lab you can trust without QA. (And if there were, you would just use them instead of the other lab.) Even with that ridiculously high “spot check” ratio you wouldn’t know if you are getting negative results because they are in fact negative, or because both of your labs are failing for some reason and giving you constant false negatives.
In conclusion, spot checking the results with a second lab simply doesn’t work. Even if you spot check every single organ donor you would still be blind to even the most basic error cases for unacceptably long periods.
On the other hand, if you intermingle a control sample into every single batch, that changes the game. Let’s say they run the tests on batches of 10 and you make sure that a random one of those is always known to be positive. Now if something goes wrong and they don’t detect the sample, you can reject the whole batch of tests as faulty straight away. And it only costs you about 11% extra over not doing any QA.
So with the “spot checking” approach you can pay as much as 100% extra and still not know whether the tests have the most elementary kind of fault for hundreds and hundreds of organ donors. Or you can go with the “control sample” strategy and have reasonably high confidence for every batch right away, at much less cost. Yeah, you can do the “spot check” audits, but they are ridiculously bad at catching issues even if you spend a lot of money on them.
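A quick simulation makes the latency gap vivid. This is a sketch under stated assumptions: a broken lab that returns “negative” for everything, the ~0.4/1000 incidence above, and (for the spot check) an honest second lab re-testing every donor:

```python
import random

INCIDENCE = 0.4 / 1000  # assumed probability a donor is truly positive

def donors_until_spot_check_catches():
    """With full re-testing at an honest second lab, the disagreement
    only surfaces when a truly positive donor finally shows up."""
    n = 0
    while True:
        n += 1
        if random.random() < INCIDENCE:
            return n  # honest lab says positive, broken lab says negative

# The control-sample strategy catches the broken lab in its very first
# batch of 10, because the known-positive control comes back negative.
trials = sorted(donors_until_spot_check_catches() for _ in range(1001))
print("spot check, median donors before detection:", trials[500])
print("control samples, samples before detection: 10")
```

Run it and the median detection time for the full spot check lands well over a thousand donors, versus a single batch for the control-sample strategy.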
I agree with you; also, the bogus argument of "since most people are HIV free..." assumes direct testing instead of pooled testing (using modern, information-theoretically optimized pooled testing).
A test result is most informative when it carries a full bit of entropy. A signal that is true most of the time, or a different signal that is false most of the time, is less informative. Use pooled testing such that the result is true or false about half of the time (a minimal pooling sketch follows the list below).
Had information theoretically justified pooled testing been applied from the start, then:
* 1) control-testing the testing contractors would have been straightforward and passing 10 control samples by chance would have a likelihood of 1 over 1024.
* 2) it would have made obvious that saving money on control-testing the contractors would hardly save any money
* 3) even in the bad scenario that control testing was skipped, the issue of contractors cheating would have surfaced much faster, since combining the pooled tests to identify which patient tests positive would constantly produce inconsistencies, pointing to the need to enable control-testing rather than casting doubt on the mathematics of pooled testing.
* 4) testing pharma industry hates pooled testing, as it means technological competition instead of sales growth by abusing the naive but false "common sense" that you need as many tests as patients tested.
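For reference, here is a minimal sketch of two-stage (Dorfman) pooling, the simplest member of the family; the information-theoretically optimized schemes mentioned above are adaptive refinements of the same idea. The pool size and incidence are illustrative assumptions, and the test is assumed perfectly accurate:

```python
import random

def dorfman_test_count(statuses, pool_size=32):
    """Tests used: one per pool, plus individual re-tests for every
    member of any pool that comes back positive."""
    tests = 0
    for i in range(0, len(statuses), pool_size):
        pool = statuses[i:i + pool_size]
        tests += 1              # one test on the mixed pool
        if any(pool):           # pool positive -> re-test each member
            tests += len(pool)
    return tests

# 10,000 donors at an assumed incidence of 0.4/1000:
statuses = [random.random() < 0.0004 for _ in range(10_000)]
print(dorfman_test_count(statuses), "tests for", len(statuses), "donors")
```

At that incidence the run needs a few hundred tests instead of 10,000, which is why (per point 2) adding control samples costs next to nothing.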
on a side note: assuming tests with different operating points on the ROC curve (different false positive vs. false negative ratios) have different prices, do we know if the operators blatantly fabricated results, or if they blatantly ignored basic mathematics and thought the more expensive tests could be substituted by the cheaper ones even when intended for a different purpose?
consider a test designed for telling a patient that we diagnosed HIV, and then consider a test designed for screening an organ to be inserted into a patient.
do you think they should both use the same test? or do you think it wiser to have the diagnosis test have lower false positive rates, and the organ screening test to have lower false negative rates?
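To illustrate the trade-off (with made-up score distributions and thresholds, not real assay data): the same underlying test, thresholded at different operating points, yields very different error profiles.

```python
import random

random.seed(0)
# Assumed assay scores: negatives centered at 0, positives at 2.
negatives = [random.gauss(0.0, 1.0) for _ in range(100_000)]
positives = [random.gauss(2.0, 1.0) for _ in range(100_000)]

for name, threshold in [("diagnosis (favors low FPR)", 2.5),
                        ("organ screening (favors low FNR)", -0.5)]:
    fpr = sum(s >= threshold for s in negatives) / len(negatives)
    fnr = sum(s < threshold for s in positives) / len(positives)
    print(f"{name}: FPR = {fpr:.2%}, FNR = {fnr:.2%}")
```

Same assay, opposite failure modes: the strict threshold rarely alarms falsely but misses most true positives, and the lax one does the reverse.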
Yes, why not? You don't re-test every single one, though: you spot-check a statistically significant percentage of them. Or maybe you do check all of them, but only for a one month period every year (a month that changes every year, and isn't known to the testing lab, so they can't game the system).
Another option is to send "control samples" to the testing lab, something you know already is infected with something they should be testing for. Do this enough times, and you'll know if they're accurately reporting the bad samples.
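As a rough sketch of "do this enough times" (the miss rate here is an assumed figure, not anything from the thread): if the lab misses known-bad controls at some rate, its chance of surviving n controls undetected shrinks geometrically.

```python
def p_survives(miss_rate: float, n_controls: int) -> float:
    """Chance a lab that misses positives at `miss_rate` happens to
    flag all n known-infected controls and so goes undetected."""
    return (1 - miss_rate) ** n_controls

# Even a modest assumed 10% miss rate gets caught quickly:
for n in (5, 10, 20, 50):
    print(f"{n} controls -> {p_survives(0.10, n):.1%} chance of slipping through")
```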
This type of thing is the only way for anyone in any kind of organization to verify that their outsourcing is effective and they're getting the result they want.
Outsourced companies deal with similar issues internally while also forcing you to trust their management. Internally this kind of corruption is more difficult because you have more control, and fewer people are going to cooperate. It's similar to how companies can routinely let low-level, not-fully-trusted employees handle cash.
You can still get rogue employees in either case, but an outsourcing company is like a ready-made conspiracy where any corners cut automatically turn into money.
> Internally this kind of corruption is more difficult because you have more control
If we anthropomorphise the regulatory body, sure. In reality, there isn’t evidence either way. Corrupt governments handing work to the private sector is a proven efficiency booster. Meanwhile, competent governments Severn Trenting everything is textbook (on the political left).
Outsourcing and corruption aren’t limited to government agencies. Quite a lot of it happens between companies, or from a government to company A and then from company A to company B, where the subcontractors are the issue.
Outsource to 2+ contractors, use pooled testing, and use control tests to steer that percentage of tests towards those contractors that score better on the control tests. Obviously the contractor should not be allowed to know which samples are control tests.
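One possible reading of that steering scheme, sketched below; the `Contractor` class, the Laplace smoothing, and proportional routing are my additions, not anything specified in the comment:

```python
import random

class Contractor:
    def __init__(self, name):
        self.name = name
        self.controls_passed = 0
        self.controls_seen = 0

    def score(self):
        # Laplace-smoothed pass rate, so a new contractor still gets work.
        return (self.controls_passed + 1) / (self.controls_seen + 2)

def route_batch(contractors):
    """Pick a contractor with probability proportional to its score."""
    return random.choices(contractors, weights=[c.score() for c in contractors])[0]

labs = [Contractor("A"), Contractor("B")]
labs[0].controls_passed, labs[0].controls_seen = 98, 100
labs[1].controls_passed, labs[1].controls_seen = 60, 100
print(route_batch(labs).name)  # "A" about 62% of the time
```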
"In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906,[1] building on an earlier proof of the same inequality for doubly-differentiable functions by Otto Hölder in 1889.[2] Given its generality, the inequality appears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean applied after convex transformation; it is a simple corollary that the opposite is true of concave transformations"
If you connect any two points on the curve with a straight line, that line lies above the curve. Which is basically the intuition for Jensen's inequality: if you go partway between two points the line is above the curve, so the weighted average of the curve at those two points is bigger than the curve at their weighted average.
A convex function is a function that is bowl shaped, such as the parabola `x^2`. If you take two points and connect them with a straight line, then Jensen's inequality tells you that the function lies below this straight line. Basically, `f(cx+(1-c)y) <= c f(x) + (1-c) f(y)` for `0<=c<=1`. The expression `cx+(1-c)y` provides a way to move between a point `x` and a point `y`. The expression on the left of the inequality is the evaluation of the function along this line. The expression on the right is the straight line connecting the two points.
There are a bunch of generalizations to this. It works for any convex combination of points. A convex combination of points is a weighted sum of points where the weights are positive and add to 1. If one is careful, eventually this can become an infinite convex combination of points, which means that the inequality holds with integrals.
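A quick numerical sanity check of the finite form, purely illustrative (random points and weights, convex `f(x) = x^2`):

```python
import random

f = lambda x: x * x  # convex

points = [random.uniform(-5, 5) for _ in range(10)]
raw = [random.random() for _ in points]
weights = [w / sum(raw) for w in raw]  # positive, sums to 1

mean_of_points = sum(w * x for w, x in zip(weights, points))
mean_of_values = sum(w * f(x) for w, x in zip(weights, points))

# Convex transformation of the mean <= mean of the transformed values.
assert f(mean_of_points) <= mean_of_values
print(f(mean_of_points), "<=", mean_of_values)
```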
In my opinion, the wiki article is not well written.
A blast from the past! I was in grad school in 2014 when I learned about racetrack memory applications using magnetic skyrmions. They were pretty hot because they were considered topologically-protected spin textures, and around that time the Nobel prize was (or was about to be) awarded for topological phase transitions and topological phases of matter, so the grant money was flowing. This guy Matthias Klaui from the article was a bit of hot shit in the niche field I was in.
I remember at the time magnetic skyrmions could only materialize at low temperatures in materials like FeGe that had to be grown in a specific crystalline phase, B20 if memory serves. Fast forward to today and people can nucleate skyrmions at room-temperature using multilayers of more conventional materials, so at least that was some progress.
What never materialized was a disruptive technology, or even a technology. This racetrack memory thing was affected by the most common of magnetic domain wall defects: pinning. The so-called 'topological protection' promise never came true; skyrmions get pinned by defects just like regular domain walls, and then.... poof! I was fortunate to have found failure early on in my skyrmion research and moved elsewhere, but at the time, boy was there froth everywhere about the revolution that was coming!
10-15 years later and this thing is still relegated to the lab. And truth be told, I still think this whole magnetic skyrmion business is the same thing as magnetic bubble domains; it's just that we can study things in greater detail today and have learned that these bubbles have chirality. It always felt like more of a re-discovery or further refinement of something already known, rather than the new, hot, revolutionary thing it was hyped to be. But hey, maybe that's how you get money, no?
I just find it fascinating how wrong the predictions were, how little of the promise/potential was actually realized, and what a waste of energy to be stressing about these things! Man, grad school was this weird reality-distortion field.
My question, and I know no one knows the answer, is: are we afraid of a ghost? The ghost being a person who was all-in on Biden and now goes: "fuck that, I'm not voting for Harris."
Are there large numbers of people like this? Nobody knows, but the media certainly does a good job of pushing that narrative. Informing you is not their job, getting you scared and angered is.
I'm gonna vote for the non-fascist one. I wish Biden had stepped down much sooner, but that doesn't change the reality that I am scared of the violence that might arise if the right wins. Our country is in a state of corruption, and the supreme court needs to be radically reined in. I only see one path before November that has a chance of addressing all these issues, and it's voting for Democrats as much as we can.
I don't believe the LIGO experiment is utilizing entangled photons. This experiment is using a very different type of interferometer because they're trying to be sensitive to rotations, whereas LIGO's interferometer is for measuring changes in length. LIGO's biggest problem is how to minimize losses in their mirrors.
LIGO could potentially utilise EPR entanglement between photons in different parts of the detector, but it does not do so yet; that's a potential future development. It does use quantum squeezing, though.
That's incredible!