Arithmetic in Computer Hardware (thelast19digitsofpi.github.io)
99 points by alexahn on Nov 17, 2023 | hide | past | favorite | 19 comments


These "fundamentals" articles are always good to see more of. That said, I'm not convinced that arriving at two's complement via a long detour through sign-magnitude and one's complement is of anything but slight historical value. IMHO two's complement is easiest to understand by considering how a binary counter wraps around upon incrementing; if 1...1 wraps around to give 0...0, and then 0...1, isn't it obvious that if you go the other direction and decrement 0, 1...1 can be interpreted as -1? Then you split the range in half, letting 0 be on the positive side, and that's all two's complement is.
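A quick Python sketch of that wraparound picture (4 bits and the helper name are just for illustration):

```python
WIDTH = 4
MASK = (1 << WIDTH) - 1  # 0b1111

# decrementing 0 on a 4-bit counter wraps around to all-ones
assert (0 - 1) & MASK == 0b1111

def to_signed(u, width=WIDTH):
    # split the range in half: patterns with the top bit set
    # sit on the negative side
    return u - (1 << width) if u >> (width - 1) else u
```

to_signed(0b1111) reads as -1 and to_signed(0b0111) stays 7, which is exactly the split-the-range-in-half picture.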


IEEE-754 is sign-magnitude (well, I suppose, technically sign, magnitude, exponent). It’s not obsolete.

C++ only recently (C++20) decided to declare that signed integers had to be two's complement.


The other strange thing about IEEE-754 is the exponent being biased, which is almost never used for pure integer representations. (Biased integers are not supported by C because their sign bit is 1 for positive numbers, but C requires it to be 0.)
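Concretely, you can dig the biased exponent out of a double like this in Python (the helper name is mine):

```python
import struct

def exponent_of(x):
    # reinterpret the double's bits as a 64-bit unsigned integer
    bits, = struct.unpack('<Q', struct.pack('<d', x))
    biased = (bits >> 52) & 0x7FF  # the 11-bit biased exponent field
    return biased - 1023           # undo the IEEE-754 binary64 bias
```

exponent_of(1.0) is 0 (stored as 1023) and exponent_of(0.5) is -1 (stored as 1022). One payoff of the bias is that the bit patterns of positive floats order the same way as the floats themselves.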


I love IEEE-754 because it’s so trivially simple to understand at the surface level and at the same time deeply complex. A short standard followed by an enormous number of detailed, non-obvious corner cases that occur in the real world and that all had to be handled by the standard. And it has really stood the test of time!

It’s a good example of something produced by some very smart people with deep theoretical knowledge and deep experience in both things that had worked and even more things that had seemed like a good idea at the time but had turned out to be problematic in practice.

It’s a fascinating standard. The process was like the IETF’s but deeper, which is rarely possible.

(I realize the word “deep” was used a lot in this comment, but it seems appropriate)


imo the best way to explain two's complement is via a diversion through the 2-adic numbers. that also lets you explain the really fun trick where you can store multiplicative inverses within the integers.
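For instance, in Python (32-bit words assumed; any odd number is a unit mod 2^32, and the three-argument pow needs Python 3.8+):

```python
M = 1 << 32
inv3 = pow(3, -1, M)  # multiplicative inverse of 3 mod 2**32; exists because 3 is odd

assert (3 * inv3) % M == 1
# "divide" a known multiple of 3 with a single multiply:
assert (21 * inv3) % M == 7
```

inv3 comes out as 0xAAAAAAAB, and compilers use essentially this modular-inverse trick to turn exact division by a constant into a multiply.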


I dunno if it's "best" when compared to just regular old Z_(2^32)


Z_(2^32) is a really good explanation for unsigned integers, but to me at least it's not a great explanation of why signed integers are the way they are.


But the justification is the same: congruence modulo 2^n.

Z_4:

     n        0  1  2  3  4  5  6  7  8  9
     n mod 4  0  1  2  3  0  1  2  3  0  1
     signed   0  1 -2 -1  0  1 -2 -1  0  1
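A tiny Python sketch that reproduces that table (the helper name is mine):

```python
def signed_residue(n, m=4):
    # pick the canonical representative of n mod m in [-m//2, m//2)
    r = n % m
    return r - m if r >= m // 2 else r
```

`[signed_residue(n) for n in range(10)]` gives the bottom row of the table.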


I'd extend your explanation to the why (do it that way).

8 + -2 = 6 as you would expect.

So a subtraction is basically the addition of a negative number and you don't need any extra hardware to keep track of that.
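A Python sketch of that, assuming 8-bit words:

```python
BITS = 8
MASK = (1 << BITS) - 1

def sub_via_add(a, b):
    # a - b == a + (~b + 1), all mod 2**BITS:
    # the negative is formed for free, and the same adder handles both
    return (a + ((~b + 1) & MASK)) & MASK
```

sub_via_add(8, 2) gives 6, and underflow wraps just like hardware: sub_via_add(5, 7) gives 254, the 8-bit pattern for -2.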


In a sense, subtraction and division don’t really exist as full fledged operations on the reals anyway. They’re just convenient notation for additive/multiplicative inverses.


The reals are irrelevant to a machine; because almost all reals are normal, you definitely can't do real arithmetic with a machine.

Floating point is just the integers wearing a funny hat. Fixed point is a different funny hat. The rationals are two integers in a trenchcoat.


You can do a meaningful arithmetic with any real you're able to enter into the machine [0], and the tidbit they took the time to share makes sense even in that sort of restricted context.

[0] https://blog.acolyer.org/2020/10/02/toward-an-api-for-the-re...


That's actually a really interesting piece of software, thanks for the link!


I like the defining equation x + ~x + 1 = 0, which lets you define -x = (~x + 1) by leveraging the definition you should already know regarding additive inverses
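A brute-force Python check of that identity at 8 bits (the width is chosen just to keep the loop small):

```python
BITS = 8
MASK = (1 << BITS) - 1

for x in range(1 << BITS):
    # ~x flips every bit, so x + ~x is all-ones; adding 1 wraps to 0
    assert (x + (~x & MASK) + 1) & MASK == 0
    # which gives the negation recipe: -x == ~x + 1 (mod 2**BITS)
    assert (-x) & MASK == (~x + 1) & MASK
```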


my preferred reference for Computer Arithmetic is the book by Behrooz Parhami. It goes from the beginning into great detail and is very easy to read. It was my go-to reference when writing floating-point units in VHDL.

https://web.ece.ucsb.edu/~parhami/text_comp_arit.htm


Seconded. Parhami and Ercegovac ( https://web.cs.ucla.edu/digital_arithmetic/ ) are my personal go-to digital arithmetic references.


thank you, this is great information; an informed recommendation in fields like this, where bad reference books abound, is as good as gold

however, on casual inspection, the page doesn't seem to have the book available for download; possibly it is not open access


> So once we have a full adder, all we need to do for multiple bits is to chain a whole bunch of them together!

Heh. I'm not sure if this is a joke or an intentional oversimplification, but the design space for adders is pretty large, and there are all sorts of interesting adder architectures. The author describes what's called a "ripple carry" adder, which is perhaps the conceptually simplest adder, and is very efficient in logic area but very slow.
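For reference, a behavioral ripple-carry sketch in Python (not a netlist, just the idea):

```python
def full_adder(a, b, cin):
    # one-bit full adder: sum and carry-out from two inputs plus carry-in
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_add(x, y, width=8):
    carry = 0
    result = 0
    for i in range(width):
        # each stage waits on the previous stage's carry-out:
        # that serial chain is exactly what makes ripple-carry slow
        s, carry = full_adder(x >> i & 1, y >> i & 1, carry)
        result |= s << i
    return result  # wraps mod 2**width, like the hardware would
```

The carry has to ripple through all n stages before the top sum bit settles, hence O(n) delay but minimal area.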

Modern adders in your M2 Pro are (just guessing) probably a much more interesting hybrid parallel-prefix adder architecture (e.g. part Kogge-Stone, part Brent-Kung, part Carry-Select or Ripple-Carry) that sits directly on the multidimensional Pareto frontier for delay/area/power.
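A toy Kogge-Stone-style sketch in Python, purely behavioral and only one point in that design space:

```python
def kogge_stone_add(a, b, width=8):
    # per-bit generate (both inputs 1) and propagate (exactly one input 1)
    g = [(a >> i & 1) & (b >> i & 1) for i in range(width)]
    p = [(a >> i & 1) ^ (b >> i & 1) for i in range(width)]
    G, P = g[:], p[:]
    d = 1
    while d < width:  # log2(width) prefix levels instead of a serial chain
        Gn, Pn = G[:], P[:]
        for i in range(d, width):
            Gn[i] = G[i] | (P[i] & G[i - d])  # group generate
            Pn[i] = P[i] & P[i - d]           # group propagate
        G, P = Gn, Pn
        d *= 2
    carries = [0] + G[:-1]  # carry into bit i = group generate of bits [i-1:0]
    out = 0
    for i in range(width):
        out |= (p[i] ^ carries[i]) << i
    return out
```

All the group signals at one level are independent, so in hardware a level computes in parallel; every carry is ready after log2(n) levels, at the cost of a lot more wiring and gates than ripple-carry.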


This reminds me of an interesting distributed project that was trying to create redundant circuits to compute various arithmetic operators.



