
Can you share a story about this?



Here's an example from my own work:

I had a function that performed a 256-bit division. I thought there was a good chance it could be optimized, because the dividend was fixed: the function computed 2^256 / d. Surely, if the dividend was always the same (and such a nice, power-of-2 number!), there should be a way to exploit that, right?

I poked at it for a few hours before it occurred to me that I could ask someone else who knew a lot about division. So I cold-emailed Daniel Lemire. He replied with the following argument:

Suppose you had an algorithm that could compute 2^k / d much more efficiently than the typical algorithm. You could then use this algorithm to speed up any division n / d, by rewriting the expression as (n - 2^k)/d + 2^k/d. This works because both dividends will now be less than 2^k (assuming n < 2^(k+1); larger n can be handled by repeating the step), and division algorithms get faster as the dividend gets smaller.
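The reduction can be spot-checked directly. A minimal sketch in Python (the values of n, d, and k are arbitrary, and the built-in divmod stands in for "the typical algorithm"); the only subtlety is a carry of 0 or 1 from the two remainders:

```python
# Sketch of the reduction: if 2**k // d were cheap, any n // d could be
# rewritten in terms of it plus a division with a smaller dividend.

def split_division(n: int, d: int, k: int) -> int:
    """Compute n // d via (n - 2**k) // d and 2**k // d."""
    q1, r1 = divmod(n - 2**k, d)   # smaller dividend than n
    q2, r2 = divmod(2**k, d)       # the hypothetically cheap division
    # Both remainders are < d, so their sum contributes at most 1 extra unit.
    return q1 + q2 + (r1 + r2) // d

# Spot-check against Python's built-in integer division.
for n, d in [(2**300 + 12345, 997), (2**257 - 1, 10**9 + 7)]:
    assert split_division(n, d, 256) == n // d
```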

In other words: it's not impossible that there's an efficient algorithm for 2^k / d, but if it does exist, then all the people who have dedicated their time to optimizing integer division over the years have missed some enormous low-hanging fruit.

That was compelling enough for me, and I gave up trying to optimize my function. :)


Feels very much like an appeal to authority, though. Fast inverse square root was discovered/implemented as a side task for a first-person shooter: https://en.wikipedia.org/wiki/Fast_inverse_square_root.


Appeal to authority is only a fallacy when the person is an authority in an unrelated area. Appealing to an actual authority of the subject in question isn't a fallacy—it's the whole point of having authorities.

"90% of dentists agree you should own a motorbike" is an appeal to authority fallacy.

"90% of dentists agree you should floss" is not.


You are mistaken.

An appeal to authority is any logical argument that relies significantly upon the authority of the person making it as support for the claim itself. The reasoning then becomes circular and flawed.

If you were to replace the person who is actually speaking with anyone else, and look at the same argument and its support, and it doesn't hold up, that's an appeal to authority.

It doesn't matter who the authority is, just that an authority was used to support a significantly unsupported argument.


You are mistaken, and I will argue my claim by deferring to Wikipedia, which is an authority on fallacies:

https://en.wikipedia.org/wiki/Argument_from_authority


First sentence:

> An argument from authority (argumentum ab auctoritate), also called an appeal to authority, or argumentum ad verecundiam, is a form of argument in which a claim made by an authority on some topic is used as evidence to support one's own claim.

Further down:

> Historically, opinion on the appeal to authority has been divided: some hold that it can be a valid or at least defeasible form of argument.

You seem to be arguing for appeals to authority as defeasible.

Incorrectly understanding the authority you are quoting shows how problematic it is to make pure appeals to authority.


I read GP as a joke.


I don't think that's the definition of the fallacy most people have.


argumentum ad populum much?


Lol. But seriously, the appeal-to-authority fallacy need not apply only when the authority is outside their domain. For instance, it also applies when the authority makes a bad argument.


The dividing line here is that it's fallacious to argue that something is true simply based on authority. That doesn't hold where there are external cogent reasons for following that authority (e.g., in this example: it's a difficult problem that's been well studied by many, and that body of work has not yet yielded an answer, so it's unlikely that a more casual hunch will do so).


I'm not saying that this type of proof was applied perfectly here. However, the notion of proving that problem X must be hard because solving it would also solve problem Y, which is (at least for now) known to be hard, is a fundamental methodology in computational complexity theory. There is a strong academic basis for these lines of reasoning, specifically in computing and optimization.


I'm not a trained computer scientist, but didn't Daniel Lemire (purportedly) answer a different question from the one OP asked? In OP's question k is fixed; wouldn't that open up implementation avenues that aren't available when k can vary?


More like noting that many people before have tried and failed.


I can see why you puzzled over this. I'm not sure I agree with the guy's argument; it seems to show only that there isn't an asymptotically better algorithm. It doesn't convince me that there isn't a division algorithm that's a constant factor better.


IIRC you are correct: there are one or two spots in the Knuth division algorithm where having a power-of-2 dividend lets you eliminate a shift or an addition or something, but nothing (nothing obvious, at least!) that yields a significant speedup.


One thing I like to ask myself is how did things get to be the way they are?

>all the people who have dedicated their time to optimizing . . . over the years have missed some enormous low-hanging fruit.

When I was a teenage undergraduate, one of my most worthwhile early chemical breakthroughs came in a typical lesson (these are dubbed "experiments") where students produced a product from raw materials and were evaluated on their final yield of acceptable recovered product, compared to the theoretical amount obtainable if perfection were achieved across every step of the reaction and the subsequent processing and handling.

This was a reaction known not to go very far toward completion once equilibrium was reached, so most of the emphasis was placed on technique for the subsequent separation and purification of the desired product. A good yield was considered 25%, a poor one 10% or less, and the historical high score was 27%. It was also accepted that the final product was not very stable, being subject to degradation from exposure to heat, air, and moisture, with some literature references to back this up.

Basically an overall view of the combined laboratory techniques applied across the study, compared to how the other students were doing. The product was not actually a commercially useful material but it had proven worthwhile in this regard.

The university had been doing this same challenging competition for decades, designed back then by a still-active professor, and it was considered a good comparison of how each year's students were performing on the same real-world problems over the decades. Same old same old.

I went through it one time and it came out pretty good, but before refining my technique I dived a bit deeper into the physical properties of all chemicals involved, in my case looking for a way to drive the equilibrium further to begin with.

One of the well-known ways to drive reversible reactions to completion is to remove product as you go along, not always easy but also quite essential in many industrial processes.

Distillation seemed the most feasible option, except that the product was a solid. One of the raw materials was a liquid with a known boiling point, much lower than the expected boiling point of the dissolved solid in the reaction flask, and the solvent boiled lower still.

I ended up vacuum-distilling from the incomplete reaction flask. The water of reaction was removed along with the solvent; "excessive" heat actually helped complete the reaction before the remaining lower-boiling raw material was vaporized; and the desired solid product, while melted, had a lower boiling point under vacuum than the remaining "heavy solid" raw material. The good stuff was recovered in over 50% yield as an oil in its own dedicated receiver, and it crystallized wonderfully by itself with no further purification needed. That earned me my first A+ and encouraged me to keep going further than average ever since.

Turns out all the original professors were wrong about the heat instability, and they had never fully considered the physical properties of the exact chemicals being worked with, only those of similar materials for which there was much more common knowledge and many more published references.

The final lesson was that sometimes the most respected elements of "common sense" amount to more or less "common lack of sense".

Also, I've said this before, when all recommended solutions have been attempted and failed to deliver, the actual solution will be something that is not recommended.


I often approach this from an “upper bound” and “lower bound” scenario perspective. So suppose I am trying to extract a signal from data that is noisy and sparse. Often the most time-consuming part of solving the problem is figuring out how to deal with all of the noise, and I don’t want to waste time doing that just to come up with nothing in the end.

An “upper bound” approach would be to simulate some data that is perfectly clean and meets all my model assumptions, and test my basic method on it. If it doesn’t work on the clean data, it won’t work on the noisy data.

The “lower bound” approach would be to try the simplest and dumbest thing I can think of to deal with the noise in the real data. If I can pick up some amount of signal even with the dumb approach, it makes me much more confident that if I spend time making a sophisticated noise model, it will be worth it.
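As a toy illustration of the two checks (everything here is hypothetical: the "signal" is a sine wave, and the "basic method" is picking the dominant bin of a naive DFT):

```python
import math
import random

def estimate_freq(samples, sample_rate):
    """The 'basic method': return the frequency of the dominant DFT bin."""
    n = len(samples)
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):  # skip the DC bin
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        if re * re + im * im > best_power:
            best_k, best_power = k, re * re + im * im
    return best_k * sample_rate / n

rate, true_freq = 200, 5.0
clean = [math.sin(2 * math.pi * true_freq * i / rate) for i in range(rate)]

# "Upper bound": perfectly clean data meeting all model assumptions.
# If the method fails even here, noisy data is hopeless.
assert abs(estimate_freq(clean, rate) - true_freq) < 0.5

# "Lower bound": the dumbest possible noise handling, i.e. none at all.
random.seed(0)
noisy = [s + random.gauss(0.0, 2.0) for s in clean]
print(estimate_freq(noisy, rate))  # if even this lands near 5 Hz, a real
                                   # noise model is probably worth building
```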


Probably not quite what you wanted, but P=NP. When you look at what it would imply, it's hard to believe that it holds.

Scott Aaronson has a checklist on how to quickly reject a claimed P=NP paper, and one of the best methods is to check whether the paper proves something weaker first.


Back-of-the-envelope calculations can usually tell you quickly whether something is possible or not.

For physical problems, conservation of energy is an obvious first sanity check. Given an object's mass and the desired travel, you can get the required energy, and from there the required power, before you even start designing a prototype.
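For instance (all numbers purely illustrative), checking whether a 1 kW motor could possibly lift a 500 kg load 30 m in 10 seconds:

```python
# Back-of-the-envelope energy/power check; every figure here is made up
# for illustration, not taken from any real design.

g = 9.81            # m/s^2, gravitational acceleration

mass = 500.0        # kg, load to lift
height = 30.0       # m, desired travel
time_budget = 10.0  # s, allowed time

energy = mass * g * height            # joules needed, ignoring friction
power_needed = energy / time_budget   # watts

print(f"energy: {energy:.0f} J, power needed: {power_needed:.0f} W")
# ~147 kJ and ~14.7 kW: a 1 kW motor is ruled out before any prototype work.
```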



