0x5f3759df and the fast inverse square root (2012) (p5r.org)
125 points by carljoseph on Oct 28, 2014 | 25 comments



If anyone wants a fast atan2 that I wrote a while ago (2007), for making a microcontroller navigate, it's at http://robots-everywhere.com/portfolio/math/


Did you compare its speed to the built-in atan2 function?

I counted the cycles for your method and it comes to around 50 plus a conditional. The built-in function should be around the same.


How does it compare to a CORDIC?


I wrote it for the Parallax Propeller, which is slow at doing divisions - so I tried to keep those to a minimum (one, which isn't too bad).


With a CORDIC the only divides should be by powers of 2 in the main loop and taking out the gain factor at the end, all of which can be turned into multiplies?
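
For reference, here's a rough floating-point sketch of a vectoring-mode CORDIC atan2 (my own illustration, not the parent's routine); in a real fixed-point version the multiplies by 2^-i become shifts and the atan values come from a small precomputed table, so there are no divides at all:

    fn cordic_atan2(mut y: f64, mut x: f64) -> f64 {
        use std::f64::consts::PI;
        let mut angle = 0.0;
        // Pre-rotate into the right half-plane so the shift-and-add loop converges.
        if x < 0.0 {
            angle = if y >= 0.0 { PI } else { -PI };
            x = -x;
            y = -y;
        }
        // Vectoring mode: drive y towards 0, accumulating the rotation angle.
        for i in 0..24 {
            let k = (2.0f64).powi(-i); // 2^-i: a plain shift in fixed point
            let a = k.atan();          // table lookup in a real implementation
            if y > 0.0 {
                let xn = x + y * k;
                y -= x * k;
                x = xn;
                angle += a;
            } else {
                let xn = x - y * k;
                y += x * k;
                x = xn;
                angle -= a;
            }
        }
        angle // the CORDIC gain only scales (x, y), not the angle
    }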


Thanks, I'm going to go back and try it :)


TL;DR:

The fast inverse square root is based on the fact that the integer representation of a floating point number is a rough approximation of its logarithm.

So convert floating point to its integer representation. Now you have its approximate logarithm (scaled and biased). Halve it, subtract it from the magic constant 0x5f3759df (which handles the negation and the bias), reinterpret the result as a float, and refine it with a Newton-Raphson step.
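
If it helps, here's a toy Rust sketch of that observation (my own illustration, not from the article): for a normal positive f32, the bit pattern is roughly 2^23 * (log2(x) + 127), i.e. a scaled, biased logarithm.

    fn approx_log2(x: f32) -> f32 {
        // Treat the raw bit pattern as a number, then unscale and unbias it.
        (x.to_bits() as f32) / (1u32 << 23) as f32 - 127.0
    }
    // approx_log2(8.0) == 3.0 exactly; away from powers of two the error
    // stays below about 0.09.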


> So convert floating point to its integer representation.

Would something dirty like this be feasible in Rust?


AFAICT transmute would work: http://rustbyexample.com/staging/unsafe.html

That's what's great about Rust, you can write unsafe code when you need it and it's isolated in `unsafe` blocks for easy auditing.

EDIT: Didn't test it too much, but looks like it works.

    fn fast_inv_sqrt(x: f32) -> f32 {
        let f: f32 = unsafe {
            // Reinterpret the float's bits as an integer (roughly a scaled log2),
            // apply the magic-constant-minus-half-shift trick, reinterpret back.
            let i: i32 = std::mem::transmute(x);
            std::mem::transmute(0x5f3759df - (i >> 1))
        };
        // One Newton-Raphson step to sharpen the estimate.
        f * (1.5 - 0.5 * x * f * f)
    }

    println!("{}", fast_inv_sqrt(1.0));
      //=> 0.998307
    println!("{}", fast_inv_sqrt(255.0));
      //=> 0.062517
EDIT2: Shorter, more readable version.


It would be exact if we used a logarithm-based number system: http://en.wikipedia.org/wiki/Logarithmic_number_system

This kind of system is beautiful, but there is no easy way to add and subtract.
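
As a toy sketch (my own, just to make the trade-off concrete): store log2(x) directly, and multiplication, powers and roots become plain additions/scalings of the stored value, while addition needs a transcendental correction term.

    #[derive(Clone, Copy)]
    struct Lns { log2: f64 } // positive values only, for brevity

    impl Lns {
        fn from_f64(x: f64) -> Lns { Lns { log2: x.log2() } }
        fn to_f64(self) -> f64 { self.log2.exp2() }
        fn mul(self, o: Lns) -> Lns { Lns { log2: self.log2 + o.log2 } }  // just an add
        fn inv_sqrt(self) -> Lns { Lns { log2: -0.5 * self.log2 } }       // just a scaling
        fn add(self, o: Lns) -> Lns {                                     // the hard case
            let (a, b) = (self.log2.max(o.log2), self.log2.min(o.log2));
            Lns { log2: a + (1.0 + (b - a).exp2()).log2() }
        }
    }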


I'm really impressed by the generalization to other powers, including the regular square root. It's the first time I've seen that hack (even the author mentions he found nothing on Google). I could definitely have used it when I needed to compute square roots of fixed-point numbers with no HW support; it looks significantly faster than the iterative digit-by-digit method.
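
If anyone wants to play with the idea, here's a hedged sketch of how I understand the generalization (the function name and the exact constant are mine, not the author's): since bits(x) is roughly a scaled log2(x), bits(x^p) ~ p*bits(x) + (1 - p)*K, where K is the inverse-sqrt magic divided by 3/2.

    fn fast_pow_approx(x: f32, p: f64) -> f32 {
        let k = 0x3f7a3bea_u32 as f64;      // ~= 0x5f3759df / 1.5
        let i = x.to_bits() as f64;         // scaled, biased log2(x)
        f32::from_bits(((1.0 - p) * k + p * i) as u32)
    }
    // p = 0.5 gives a rough sqrt, p = -0.5 reproduces the inverse-sqrt
    // estimate; the raw result is good to a few percent before any
    // Newton refinement.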


There is also a Wikipedia page for this:

http://en.wikipedia.org/wiki/Fast_inverse_square_root


And the moral of this story is: Never trust any floating-point number whose bits you haven't polished yourself.


How does he adjust his σ value?


I'm rarely this amazed by an encoding.


I may be going on a rampage here, but "It does contain a fair bit of math" really grinds my gears!

Of course it does, it is about square roots and inversions.

In most cases, mathematics is what lets you optimize an algorithm or solve a problem, and that ranges from simple arithmetic properties to change-of-basis theorems in linear algebra.

You just cannot tackle such subjects without expecting them to contain a fair bit of math.


Seriously, computer science itself is a branch of mathematics. Or is mathematics a branch of computer science...?

It's not always weird bit-level arithmetic, but if you aren't using math at some level, you're not programming; you're just futzing around with a text editor.


Well, at some level almost everything can be described as using math. However, loads of software used by millions and netting millions can be, and is, written by people most would describe as barely mathematically literate: from most sorts of business/accounting software (unless arithmetic counts as math), to device drivers, to dating sites (where you could, and people do, use heavyweight machine learning to match people, but it kinda seems that having actual people on the site delivers 99.99% of the value regardless of said machine learning).

There are counter-examples in every area, of course; it's just that the majority of programmers out there aren't great at math and are very much employable. And more often than not, people who're really great at math go to great lengths to avoid programming even when their work revolves around computers.

CS is a branch of math, of course, but this fact happily co-exists with all of the above.


All I do is make buttons blink with jquery and I'm supposedly an engineer.


If you also make things happen on the click event of the button then I would say that counts.


Loads of things do not require math (in the sense of "doing stuff with functions in (R -> R)", or multiplications/divisions).

Pretty much no sorting algorithm requires mathematical knowledge to work through, nor do many data structure algorithms. Concurrency resolution issues neither.

This is, of course, so long as you consider math to be "doing stuff with these numbers." Group theory and the like can help guide your results on a more abstract level.

EDIT: what I meant by this is the sort of "number-y" math that people usually mean when saying "warning: math ahead". I realise that abstract modeling is also part of mathematics, but that's not what people usually _mean_ in the context the parent post was talking about.


> ...math (in the sort of "doing stuff with functions in (R -> R), or multiplications/division")

This is not what math is. It's more what calculus is, and math is a lot more than calculus. Most mathematics is not about numbers at all.

BTW: Interesting results about sorting algorithms (e.g. the Ω(n log n) lower bound for comparison-based sorts) require maths (specifically combinatorics, permutations, etc.). Concurrency involves math, too. See the famous happens-before relationship, which involves notions of sets (of executions), transitivity, partial ordering, etc.


> See the famous happens-before relationship which involves notions of sets (of executions), transitivity, partial ordering, etc.

[OT] What's the probability of two comments with very similar views appearing at the same time? :-) I commented almost exactly the same thing!

PS: Ducks from down-votes.


> Concurrency resolution issues neither.

I beg to differ on this one. Proving, or at least reasoning about, a concurrent algorithm does involve wrapping your head around the happened-before model, logical clocks, state machines and such. Most of these concepts are grounded in discrete math: group theory, partial ordering and so on.


> Pretty much no sorting algorithm requires mathematical knowledge to work through, nor do many data structure algorithms. Concurrency resolution issues neither.

None of those requires _calculus_. I disagree about them not requiring maths.



