The main selling point of this approach is that it can cut out a lot of code.
In this talk, Sean Parent, at that time working on Adobe Photoshop, estimated that the PS codebase could be reduced from 3,000,000 LOC to 30,000 LOC (=100x!!) if they followed ideas from the book.
https://www.youtube.com/watch?v=4moyKUHApq4&t=10s
Another point of his is that the explosion of written code we are seeing isn't sustainable, and that so much of this code is algorithms or data structures with overlapping functionalities. As the codebases grow, and these functionalities diverge even further, pulling the reins in on the chaos gradually becomes impossible.
Bjarne Stroustrup (aka the C++ OG) gave this book five stars on Amazon (in what is his one and only Amazon product review lol).
estimated that the PS codebase could be reduced from 3,000,000 LOC to 30,000 LOC (=100x!!)
I challenge anyone making such claims to construct an even _moderately_ realistic example of where such reduction would be possible.
Here is a very simple and already very optimistic model that is still way below a 100x reduction: say my program consists almost exclusively of sorting various arrays, and I write out bubble sort in full each time, using for loops etc., rather than calling a sort() procedure. Bubble sort takes roughly 20 lines of code to implement, so I would at best get a reduction of close to 20x, rather than 100x.
I don't doubt that your conclusion is correct, but what if the sort is used in 10 other functions, each of which is used 10 times? If you inlined everything you would end up with 100 copies of bubble sort. The code blowup is exponential in the level of nesting.
Indeed. Another avenue by which it cuts out code is that it makes code more reusable. As a result, you are more likely to integrate nicely with already-written libraries than to write your own code. Technically, this is a way of reducing your codebase as well lol.
I doubt the 100x range, too, but Photoshop is decades old and, likely, has lots of its filters written in assembly for speed, with zillions of variants for all kinds of sometimes long gone CPUs and GPUs. It also still may have its own memory manager.
If that is true, it should be possible to get large wins in LOC if one either is willing to give up orders of magnitude in speed and memory usage, or is equipped with a sufficiently advanced compiler.
There likely also is some room for improvement if one is willing to change the file format (.psd is not known for its consistency), but that would likely be a rounding error.
Reducing a single bubble sort in code size doesn't do much, because it is used many times and its code size cost is well amortized. However, if you had many bubble sorts coded in your program with special cases, then consolidating those into one very abstract/parametric version could be a big win.
The reduction comes from making the code way more abstract and much more difficult to reason about in concrete contexts. So ya, it is probably possible, but no, our heads might not be very capable of writing/maintaining/debugging such code even if the computer could execute it. As an extreme, consider "very small code" contests in the demo scene, where various crazy tricks (self-modifying code) are used to drastically reduce code size, none of which would ever be used in practice, and which have little SE value.
I'd expect more opportunities for reduction in code size to arise when you use fancy algorithms (not sorting) on generic algebraic structures that have many models in your program. For example, when I was a student, I implemented a library of numerical methods for my own use in class. It benefited a lot from being implemented as generically as possible.
That being said, a 100x reduction in code size seems like an exaggeration.
The book 'The science of programming' of David Gries is also a nice introduction to writing programs using mathematical logic and reasoning. The book is inspired by the techniques of Dijkstra and Hoare, and written in a way that's easy to follow.
Looking up Stepanov's book, I landed on his slide deck about it.
Interestingly enough, both are about the remainder algorithm, and Gries's mentions a "large program" of "3000 lines", which reminded me of the other comment and the 3,000,000-to-30,000 figure.
I have not yet read "Elements of Programming," but there is another book "From Mathematics to Generic Programming," also co-authored by Alexander Stepanov, which links Abstract Algebra to program design. I found the latter very approachable:
In a Group, every element has an inverse element. If you drop this property, you get a Monoid.[0] Functions under function composition form a Monoid (or Monoidal Category). Brian Beckman did a nice intro video to this at MS Channel 9 a few years back.[1]
Stepanov and Musser were working on generic programming in the 80s[0][1], which predates Haskell. I suspect parametric polymorphism was common in the FP world by that time, but I don't know whether it was used as an organising principle to the extent that Stepanov promotes.
There's a link on the sidebar of Gross's page [1] to an old version of the course site; this lets you download video, audio, notes, and homework. Even if you know everything in Artin's book the lectures are rarely boring.
Do you mean Michael Artin's Algebra? Just took his Algebra I (18.701) at MIT last semester. An incredibly challenging class, but he's a really excellent lecturer and a very likeable guy. Still really sharp.
The common trend in educational videos is very interesting. Videos that are roughly 2-10 minutes long, with animation and pictures, are much easier to follow than hour-long Stanford or MIT live classroom recordings.
Often, though, hour-long lectures are the only way to go. You just can't learn much in 10 minutes with an animated video. Sometimes you just have to delve in.
I think watching something like this before watching a deep dive or reading a text book could be useful, just to give you kind of a lay of the land and some intuition.
I disagree; very rarely does one explanation take that long. It can normally be built up from lots of little videos. Once someone has completed 10 videos on the basics, you can do slightly longer ones that span the concepts and explain tradeoffs between techniques.
I feel that the hidden trick to live classroom recordings is to watch at 2x-4x speed and drop back to 1x the moment you don't understand something, or when you are explicitly following a worked example (even then I usually cruise at 2x). Since you can pause, you can work at your own pace.
Évariste Galois, a major contributor to abstract algebra, had an interesting life. He had participated in the French Revolution of 1830; worked on mathematics while in jail; and died when he was 20 years old as a result of being shot in the abdomen in a duel.
Great explainer. Though I'm still not sure what the relation is between abstract algebra and regular algebra, other than the application of higher dimensional equations.
Abstract algebra is often about studying structure on sets. That usually takes the form of studying operator(s) with certain properties and the elements that they work with. Some early examples that students see are things like rotations/reflections of n-gons, modular arithmetic, polynomials, matrix spaces, and fields.
It is different from regular algebra in the sense that one often starts with a set and an operation that one supposes to have a particular kind of property (perhaps it's a group, or perhaps it's a field, or perhaps it's a certain kind of field (maybe separable, or local, or ...)). Then from these assumptions one proves facts about the assumed algebraic structure.
It's different in that often particular elements aren't of much interest, and it's not as tied down to a particular context.
https://smile.amazon.com/Elements-Programming-Alexander-Step...
https://smile.amazon.com/review/R1MG7U1LR7FK6/