These "fundamentals" articles are always good to see more of. That said, I'm not convinced that arriving at two's complement via a long detour through sign-magnitude and one's complement has more than slight historical value. IMHO two's complement is easiest to understand by considering how a binary counter wraps around upon incrementing; if 1...1 wraps around to give 0...0, and then 0...1, isn't it obvious that if you go the other direction and decrement 0, 1...1 can be interpreted as -1? Then you split the range in half, letting 0 be on the positive side, and that's all two's complement is.
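The wraparound picture can be sketched in a few lines of Python (using 8 bits for brevity; the names are mine, not from any article):

```python
# A hypothetical 8-bit counter: decrementing 0 wraps to 0b11111111,
# which two's complement simply chooses to read as -1.
BITS = 8
MASK = (1 << BITS) - 1

def to_signed(u):
    """Interpret an 8-bit unsigned value as two's complement."""
    return u - (1 << BITS) if u >= (1 << (BITS - 1)) else u

print(to_signed((0 - 1) & MASK))  # 0b11111111 -> -1
print(to_signed((0 - 2) & MASK))  # -2
print(to_signed(0x7F))            # 127, top of the positive half
print(to_signed(0x80))            # -128, bottom of the negative half
```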
The other strange thing about IEEE-754 is the exponent being biased, which is almost never used for pure integer representations. (Biased integers are not supported by C because their sign bit is 1 for positive numbers, but C requires it to be 0.)
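You can see the bias directly by pulling the exponent field out of a double; this sketch assumes the standard binary64 layout (11-bit exponent, bias 1023):

```python
import struct

# Decode the biased exponent of an IEEE-754 double: the stored
# 11-bit field is the true exponent plus a bias of 1023.
def exponent_fields(x):
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    stored = (bits >> 52) & 0x7FF   # biased exponent field
    return stored, stored - 1023    # (stored, unbiased)

print(exponent_fields(1.0))  # (1023, 0): 1.0 is 1.0 * 2^0
print(exponent_fields(2.0))  # (1024, 1)
print(exponent_fields(0.5))  # (1022, -1)
```

One nice side effect of the bias: positive floats with the same sign compare correctly as plain unsigned integers.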
I love IEEE-754 because it’s so trivially simple to understand at the surface level and at the same time deeply complex. A short standard followed by an enormous number of detailed, non-obvious corner cases that occur in the real world and that all had to be handled by the standard. And it has really stood the test of time!
It’s a good example of something produced by some very smart people with deep theoretical knowledge and deep experience in both things that had worked and even more things that had seemed like a good idea at the time but had turned out to be problematic in practice.
It’s a fascinating standard. The process was like the IETF’s but deeper, which is rarely possible.
(I realize the word “deep” was used a lot in this comment, but it seems appropriate)
imo the best way to explain 2s complement is via a diversion through 2-adic numbers. that also lets you explain the really fun trick where you can store multiplicative inverses within the integers.
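For anyone who hasn't seen the trick: every odd number has a multiplicative inverse mod 2^32 (the finite shadow of its 2-adic inverse), so you can do exact division by multiplying. A quick sketch (Python 3.8+ for the three-argument `pow` with a negative exponent):

```python
# Exact division by 3 via multiplication: inv3 is the inverse of 3
# mod 2^32, i.e. the truncated 2-adic expansion ...10101011.
M = 1 << 32
inv3 = pow(3, -1, M)
print(hex(inv3))        # 0xaaaaaaab
print((3 * inv3) % M)   # 1
print((15 * inv3) % M)  # 5 -- that's 15 // 3, done with a multiply
```

Compilers use exactly this to turn division by a known odd constant into a multiply (plus a separate trick for even divisors).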
Z_(2^32) is a really good explanation for unsigned integers, but to me at least it's not a great explanation of why signed integers are the way they are.
In a sense, subtraction and division don’t really exist as full fledged operations on the reals anyway. They’re just convenient notation for additive/multiplicative inverses.
You can do a meaningful arithmetic with any real you're able to enter into the machine [0], and the tidbit they took the time to share makes sense even in that sort of restricted context.
I like the defining equation x + ~x + 1 = 0, which lets you define -x = (~x + 1) by leveraging the definition you should already know regarding additive inverses.
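The identity is easy to check exhaustively at a small width; a sketch at 8 bits:

```python
# Verify x + ~x + 1 == 0 (mod 2^8) for every 8-bit value, and that
# -x therefore equals ~x + 1 in two's complement.
MASK = 0xFF
for x in range(256):
    not_x = ~x & MASK
    assert (x + not_x + 1) & MASK == 0
    assert (not_x + 1) & MASK == (-x) & MASK
print("holds for all 8-bit x")
```

The intuition: x + ~x is all ones, which is one less than zero after wraparound, so adding 1 gets you back to 0.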
my preferred reference for Computer Arithmetic is the book by Behrooz Parhami. Goes from the beginning into great detail and is very easy to read. It was my go-to reference when writing floating-point units in VHDL.
> So once we have a full adder, all we need to do for multiple bits is to chain a whole bunch of them together!
Heh. I'm not sure if this is a joke or an intentional oversimplification, but the design space for adders is pretty large, and there are all sorts of interesting adder architectures. The author describes what's called a "ripple carry" adder, which is perhaps the conceptually simplest adder, and is very efficient in logic area but very slow.
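For reference, the ripple-carry scheme the author describes is just a full adder per bit with each carry-out wired to the next carry-in; a behavioral sketch (the slowness comes from that serial carry chain):

```python
# Bit-level ripple-carry adder: a chain of full adders, each stage's
# carry-out feeding the next stage's carry-in.
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(x, y, bits=8):
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result  # wraps mod 2^bits, like the real hardware

print(ripple_add(100, 27))   # 127
print(ripple_add(200, 100))  # 44 -- wrapped: 300 mod 256
```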
Modern adders in your M2 Pro are (just guessing) probably a much more interesting hybrid parallel-prefix adder architecture (e.g. part Kogge-Stone, part Brent-Kung, part Carry-Select or Ripple-Carry) that sits directly on the multidimensional Pareto frontier for delay/area/power.