Though I mention some of these in the post, I think it might be worth calling attention to some of my favorite works in this general space of interactive documents/dynamic notation/interfaces for thinking in the software medium.
- Ken Iverson's "Notation as a tool of thought"
- Bret Victor's works around the general topic of "Dynamic medium"
- Doug Engelbart's "Augmenting Human Intellect" — in particular, his thoughts on symbolic manipulation technologies and the cascading effects of improving notation (though he doesn't use the word "notation" very much, the ideas are there)
- Ted Chiang's many fiction works exploring topics around language and notation: "Truth of Fact, Truth of Feeling", "Story of Your Life"
- "Instrumental interaction" by Michael Beaudouin-Lafon, which is about software interfaces, but also provides a good conceptual framework for thinking about notation as interfaces as well.
Do you have any favourite examples of situations where notation has enhanced intellect?
(one of my ongoing software-related frustrations is that communicating in text about software behaviour can be a challenge - I don't know whether being able to communicate the potential source of a logical error, at a specific line, at a given commit/version of a codebase, counts as intellect, but it does involve mentally modelling how code changes over time and working out how best to navigate someone else to inspect the same thing)
Kronecker products have tricky properties and I was struggling to implement a linear algebra algorithm using some of those properties in a way that would be both correct and efficient.
Using the notations from the paper "On Kronecker Products, Tensor Products and Matrix Differential Calculus" [0], I was able to (trivially!) rederive an algorithm called algorithm 993 [1] and then specialize it for our use case [2].
In particular, it let me show that, knowing the layout of the data in memory, some operations could be replaced by a carefully placed implicit reshaping of the matrix (telling the linear algebra kernels that it has a different number of rows/columns than it actually does), which is a no-op.
The notation used is extremely specialized and thus useless to most people / for most problems but it turned a complex problem into a trivial one in a way that felt properly magical.
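To make the "reshaping is a no-op" point concrete, here is a small numpy sketch (not the actual code from the algorithm, just an illustration of the general principle): reinterpreting a contiguous matrix with different row/column counts only rewrites shape metadata, so no data is copied.

```python
import numpy as np

# A 6x4 matrix stored row-major in one contiguous buffer.
A = np.arange(24.0).reshape(6, 4)

# "Reshaping" to 3x8 only changes the shape metadata; the data
# buffer is shared with A, so this costs nothing at runtime.
B = A.reshape(3, 8)
print(np.shares_memory(A, B))  # True: no data was copied
```

This is why telling a kernel the matrix is 3x8 instead of 6x4 can be free, as long as the memory layout actually supports it.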
This above is exactly the example I would give.
Raising and lowering operators and Christoffel symbols are critical to being able to talk about Riemann curvature (general relativity).
However, there are many abstract concepts (zero, i, infinity, pi, e) where notation and symbolism can dramatically improve intuition. Similarly, there are notations for different groups, or simply for the number systems N, J, R, C. Hey, why not floats and strings too?
In addition, there are many functions that we use/define (sin, cosh, B, P, Y), which serve as basis sets for the solutions of various ordinary and partial differential equations.
Failing to use some of these notations makes explaining oneself (even to oneself) almost impossible, and using a different notation will frustrate others.
Refactoring other people's messages into terminology that makes (concise) sense locally... it's translation, I suppose. It's nice to offer back any corrections or improvements to the author where possible, so that they can improve their own use of notation.
Thank you - I struggle to have a more meaningful mental model of matrices than "int[][]" but I'll read up on this. There are a bunch of cases where I've felt intuitively that "this should be solved with numpy using matrices" -- perhaps time to learn some of that in more detail.
(could you add a bit more detail about any notation in particular that was useful? I see the matrix direct product symbol and have a vague initial understanding of that and block matrices)
Parts 2 to 6 of the paper explain the elements of notation I used.
The algorithm I wrote involved both the usual matrix product and the Kronecker product (in short, it lets you compute the Kronecker product of several matrices times a given matrix much more efficiently and with significantly lower memory usage).
The notation was useful to me as it surfaces the property Wikipedia describes in the "Matrix Equation" [0] section on the Kronecker product and makes it intuitive.
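For readers who want to see that property in action, here is a hedged numpy sketch of the standard identity vec(AXB) = (B^T ⊗ A) vec(X), where vec stacks columns. The matrix names and sizes are made up for illustration; the point is that the right-hand side avoids ever building the big Kronecker product.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

def vec(M):
    # Column-stacking vectorization (Fortran order), as in the identity.
    return M.flatten(order="F")

# Naive: materialize the (potentially huge) Kronecker product, then multiply.
lhs = np.kron(B.T, A) @ vec(X)
# Efficient: two small matrix products, no Kronecker product built at all.
rhs = vec(A @ X @ B)
print(np.allclose(lhs, rhs))  # True
```

With a good notation for vec and ⊗, rewrites like this become mechanical instead of error-prone.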
Don Norman has a rather good example in his book The Design of Everyday Things. He presents a two player game. Each player alternately chooses a number from 1 to 9. Each number can only be chosen at most once. The first player to pick three numbers that add up to exactly 15 wins.
Even writing things down with paper and pencil, this game is hard. However, Don Norman shows how, if you put the numbers into a Tic Tac Toe grid, the game becomes trivial.
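The equivalence can be checked by brute force. A short Python sketch (my own illustration, not from the book): the 3x3 magic square's eight lines sum to 15, and those lines turn out to be exactly the triples from 1..9 that sum to 15.

```python
from itertools import combinations

# The classic 3x3 magic square: every row, column and diagonal sums to 15.
square = [[2, 7, 6],
          [9, 5, 1],
          [4, 3, 8]]

lines = ([set(row) for row in square] +
         [set(col) for col in zip(*square)] +
         [{square[i][i] for i in range(3)},
          {square[i][2 - i] for i in range(3)}])

# All triples from 1..9 that sum to exactly 15.
triples = [set(c) for c in combinations(range(1, 10), 3) if sum(c) == 15]

# The 8 winning triples are exactly the 8 lines of the magic square,
# so pick-15 is tic-tac-toe in disguise.
print(len(triples), all(t in lines for t in triples))  # 8 True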
Like the original article, I mention in my classes Roman numerals vs Arabic numerals as a good example of how notation can influence things.
The whole field of information visualization is also a great example of how to leverage the power of human vision to easily see patterns and understand data, which overlaps a lot with notation.
Thank you; and is tic-tac-toe really effectively the same as pick-15 (including winning conditions)? That's cool. (I believe it, but will have to do some work to convince myself of it)
The concepts you mention on that page sound useful for respectful product design and I'll add the book to my reading list.
I'd like to offer some kind of value in return, probably reading material, although not sure what to suggest. I guess you might already be familiar with Edward Tufte's books on visual design?
Yes, Tufte's most popular book, The Visual Display of Quantitative Information, is quite good, though he pushes minimalism far too much. Note that Tufte also makes a lot of assertions without evidence to back them up. His philosophy of design is that designers should think really, really hard and just get the design right, completely discounting the value of user testing.
I mention this in passing in the post, but I think the various notations that exist to express finite state machines help us greatly to understand complex software systems.
One way to express FSMs is the general node-and-arrow diagram. That makes sense for simple systems like a traffic light ("red" can only go to "green", etc.). But for more complex state machines, we might want to express them as a regular expression, which is mathematically equivalent and lends itself to a different kind of mental model that might be better for, say, writing a search engine. Yet another "notation" for FSMs is a matrix that models state transitions (if you can go from state 1 to state 2, X[1][2] = 1, otherwise it's 0, etc.). This is useful for certain other kinds of questions, like "how many steps does it take to go from state A to state B?"
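The matrix view pays off because powers of the transition matrix count paths. A small numpy sketch using the traffic-light example (state names and sizes are my own illustration):

```python
import numpy as np

# Transition matrix of a tiny FSM: T[i][j] = 1 if state i can step to j.
# States: 0 = red, 1 = green, 2 = yellow (red -> green -> yellow -> red).
T = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])

# (T^k)[i][j] counts the walks of exactly k steps from state i to state j,
# so reachability questions become matrix powers.
T3 = np.linalg.matrix_power(T, 3)
print(T3[0][0])  # 1: red returns to red in exactly 3 steps
```

The same trick scales to much larger machines, where reading the answer off a node-and-arrow diagram would be hopeless.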
One practical use case of state machines is in parsing theory. If you're trying to parse JSON correctly to the spec, you might care about the formal grammar of JSON, which is often written in a specialized notation called BNF that describes what symbols can come after another. This is useful for implementing a parser, but to understand the grammar itself I personally find railroad diagrams [0] more intuitive. Different notations for the same idea let us be smarter in different use cases.
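For a taste of what BNF-style notation looks like, here is a simplified sketch of the JSON array rule (the actual grammars at json.org and in RFC 8259 differ in detail, so treat this as illustrative only):

```bnf
array    ::= "[" ws "]" | "[" elements "]"
elements ::= value | value "," elements
value    ::= object | array | string | number | "true" | "false" | "null"
```

A railroad diagram draws these same alternatives as branching tracks, which is why it can be easier to take in at a glance, even though both describe the same grammar.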
On a completely different track of ideas: you might be interested in the work from Dion Systems [1], who are doing some interesting research into more dynamic ways to work with source code that I think reflects a lot of the ideas I wrote about. It touches on your specific comment more directly.
Thank you very much - looking at the demos, a lot of the ideas in the Dion Systems demos resonate (code as graph, graph transformations, ...).
And that's a good example re: JSON and railroad diagrams too. If I remember correctly the JSON spec website itself uses a railroad diagram? It does make the grammar visually straightforward to comprehend in a space-efficient way (picture, thousand words I suppose).
I think the Korean writing system is often overlooked when it comes to notation systems. It may look logographic, similar to Chinese, but every "character" is actually a syllable block whose strokes give an abstract representation of the vocal tract. My understanding is that modern Korean has shifted away from this strict representation, but it is still said to be the easiest writing system for linguists to teach to cultures lacking a writing system.
Thanks - it's easy to forget after working with basic datatypes (char, string, ...) for a long time that the individual symbols themselves have their own history and encoded meanings; some retained, some implied, some likely debatable, some perhaps lost.
I wonder whether there have been any efforts to create an etymological dictionary for Unicode characters.
Thanks - your site led me to your GitHub profile and communicating sequential processes. JavaScript async and promises do seem a little tricky to comprehend, that'll make for some good reading.
My take is that notation lets us squeeze more into our limited "thinking memory". It's easy to remember "integral of that plus integral of this equals something", but it's nearly impossible to memorize the expanded definition that uses only basic terms. I guess good notation is like a box: it lets us move a bunch of things at once.
That makes sense: perhaps "abstraction" is the word? An n-ary operator with a well-understood conceptual effect, and a reasonably small number of arguments linked to it (enough to fit into a person's point-in-time memory as you mention).
(addendum/edit: and hopefully those abstractions are clearly implemented and fixable in practice.. hello, software libraries)