Eigensolutions: Composability as the Antidote to Overfit (verou.me)
39 points by Caseee on Dec 23, 2023 | 20 comments



The use of "eigen" here seems to have stretched too far beyond its original meaning in mathematics, and in German more generally. In German, "eigen" means "own" like "my own idea." In mathematics, "eigenvalue" is a certain numeric value associated to a matrix or linear operator, which is independent of various representations of that matrix or operator. In that sense, the number is the operator's "own" value, in the sense of it being an invariant and characteristic property of the underlying concept, rather than arising from how it's viewed in a particular frame.

The author borrows the prefix from the "eigenquestions" article linked therein, where that author defines it as

> "Eigenquestion" is a made-up-word that borrows from the linear algebra concept of eigenvectors (mathematically: represents the "most discriminating vector in a multidimensional space"). ... For a simplistic definition, the eigenquestion is the question where, if answered, it likely answers the subsequent questions as well. Great framing starts by searching for the most discriminating question of a set — the eigenquestion.

But the "most discriminating vector" idea is not really what an eigenvector is. It can be roughly thought of as that in some senses, like principal component analysis or SVD, where the term has been used to refer to a particular vector space basis formed of eigenvectors (e.g., "eigenfaces"). But in this context they authors are simplifying this to just "the most important question." The use of the fancy math jargon doesn't seem to add anything beyond this. It feels like a kind of faux intellectualism that I have learned to dislike (despite having a math PhD myself!) after hearing too many tech bro types abuse math terms beyond recognition.


Agree the analogy is inappropriate and (unintentionally?) deceptive. That is, it's used to make you feel the argument is more correct than it is by borrowing the trust we have in "fancy" ideas from math/physics.

The whole blog _looks_ like a scientific paper with citations, quotes, diagrams and jargon, but none of the arguments made really have any evidence and wind up being circular.


On the contrary, I've found the article quite refreshing.

Using "eigen" isn't such a big issue if you understand it as the "eigen" in "eigenvector."

It's about implementing orthogonal features, i.e. solving non-overlapping concerns. Having one way of doing things. Making things easier, less cluttered.

The exposé is right on the money and actually even explains it.

Perhaps the faux intellectualism is on the other side... <_<


> Shishir Mehrotra (of Coda) wrote about the importance of “Eigenquestions” when framing problems, a term he coined, inspired from his math background:

> the eigenquestion is the question where, if answered, it likely answers the subsequent questions as well.

> This inspired me to name a symmetrical concept I’ve been pondering for a while: Eigensolutions. The eigensolution is a solution that addresses several key use cases, that previously appeared unrelated.

The original article is a complete muddle. It misdescribes framing, and then adds in a misdescription of eigenvectors. It spends a lot of time and effort on misuse of language to no benefit -- plausibly the motivations were pseudo-intellectual, or at least intellectually lazy.

The reason it has to do this is obvious: describing breaking problems down, framing problems, and asking fundamental questions isn't new, and nobody discussing that seems impressive or a genius.

Throw in a few half-baked notions borrowed from rhetoric and mathematics, however, and it all seems so much more vital.

The whole thing is an exercise in writing 5x as much to say 1/2 of what's needed, and playing to a dumb audience ready to lap it up. The dumb audience in both cases is non-tech manager types, who are endlessly desperate to acquire technical language to seem in-the-know.


> The whole thing is an exercise in writing 5x as much to say 1/2 of what's needed

That thing right there is what I despise most in non-fiction/non-recreational reading!

A good text says what it wants to say in as few words as possible while remaining easily intelligible to the intended audience.

Clarity & conciseness are the central virtues!


Maybe you could try to be a bit charitable and understand what they meant, because I don't think it's that bad.

Of course it could perhaps have been said in other, simpler terms, but that can be attributed to stylistic choices. I'm not too offended by that.

It's rare that people rethink something in terms of (multi)linear algebra; it can be a good reframing for an idea.


This is a very charitable interpretation. I got the impression that the author didn't know what "eigen" meant and borrowed it because it sounded good, and it was a stretch in the original context as well. That doesn't mean it can't be reframed in a way that makes sense, but I'm not seeing what you're seeing in the author's intent.


The way I see it, the author might be trying to model the process of solving a design problem in terms of PCA decomposition vs a mere multivariate regression.

Certainly in vogue, given the recent focus on data analysis and AI/machine learning.

Doesn't seem too far-fetched. OK, some things could be explained better, but overall the article is still nice, in my opinion.

Found that entertaining.


>Perhaps the faux intellectualism is on the other side... <_<

It really isn't. Nomenclatures in the sciences and other fields exist for clarity and concision; they make communication more effective. For example, when a mathematician mentions an eigenvalue, other mathematicians know specifically what that object is. That is the point.

When people borrow words from these nomenclatures and change their meaning, it makes their writing pretentious and bloated, not to mention less clear.


That's really not how languages work.

Even between different scientific disciplines.

For example, what is covariance?

Besides, the criticism was overly pedantic, but alas. The explanation wasn't even that bad.

Decomposing a design problem into a set of orthogonal questions is exactly what is being described. Then solving the problem by providing solutions that don't overlap is also what is being described.

And the author is right to mention that, because it's not actually obvious that giving a solution to each "eigenquestion" provides a canonical basis for the solution space, so to speak.

Really it seems that people want to be pedantic just to be pedantic... I don't think such arrogance is warranted here.


Eigenvalues have nothing to do with orthogonality.

You are further demonstrating the point that mixing this math language simply creates confusion.
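To put a concrete example behind that: eigenvectors are only guaranteed to be orthogonal in special cases (e.g. a real symmetric matrix). Take

    A = [[1, 1],
         [0, 2]]

Its eigenvalues are 1 and 2, with eigenvectors (1, 0) and (1, 1). Their dot product is 1, not 0, so they aren't orthogonal, even though the eigenvalues are distinct.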


What's an eigenvalue without the corresponding eigenvector...


If you simply say "orthogonal features" you won't have to explain "eigenquestions".


Nice thread! Finally!

I've 'known' Lea Verou since ~2010. She is a household name in web design and development, like A List Apart, Chris Coyier, or Eric Meyer.

I've regularly read her articles and used her tools for years. I'm not in web design and development anymore; I'm more into category theory and functional programming - thus composition.

The title caught my eye. I wasn't expecting these words in a web design and development article, nor around 'composability'. I read papers almost daily and I'm used to 'strange new' words.

I also read the article. It's far from being enjoyable, clear, or useful, and far from explicitly linking the term composition to this domain. Maybe we should step back a little and try to simplify things. They 'compose' better.


Is the author just trying to say good design solutions subsume poor ones?


>Rather than designing a solution to address only our driving use cases, step back and ask yourself: can we design a solution as a composition of smaller, more general features, that could be used together to address a broader set of use cases?

Analogy: when painting a scene, instead of buying every specific color you need, build a palette of a few base colors and mix them as you paint to create the ones you need.

In Excel, VLOOKUP is the overfit feature, and INDEX(MATCH(…)) is the same capability built by composing lower-level primitives.
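To make the contrast concrete, here's a rough Python analogue (hypothetical helpers, just to illustrate the shape of the two approaches): the monolithic lookup bakes the whole operation into one special-purpose call, while the two small primitives are each useful on their own and compose into the same lookup and more.

    # Monolithic, special-purpose: one call that does the whole lookup
    def vlookup(key, table, col):
        for row in table:
            if row[0] == key:
                return row[col]
        return None

    # Small composable primitives, each useful on its own
    def match(key, column):
        return column.index(key)   # position of key within a column

    def index(column, i):
        return column[i]           # value at a position

    fruits = ["apple", "pear", "plum"]
    prices = [3, 5, 2]
    index(prices, match("pear", fruits))   # 5 -- the INDEX(MATCH(...)) style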


This is an unfocused post. There are too many side notes, fancy detours into HTML markup, and too many side trips into subtopics only abstractly related to the use-case analysis.

After reading stuff like this for a year, a textbook or a professor's lecture starts looking seriously good.


I like the ideas in the article, and they are generalizable beyond user interface design to programming languages, frameworks, and technologies. Some of my thoughts:

1. Christopher Alexander's ideas on pattern languages are essentially about what happens when you have composability. You can then analyze, or construct, how composition happens with a grammar. If you construct the grammar such that every possible design composed from it is a cohesive, useful design, then anyone -- including the end users of a technology or the inhabitants of an architecture -- can make changes. His keynote speech to OOPSLA '96 is a good introduction to applying his work outside of building architecture (https://www.patternlanguage.com/archive/ieee.html), and his work inspired the field of human-computer interaction and computer architecture.

2. Ruby on Rails is a good case study for these ideas. Rails is an opinionated framework, initially constrained to a curated selection of "opinions" that are useful for creating web applications. The Rails 3 refactor took Rails apart and decomposed it into primitives (the Russian Doll pattern, https://www.infoq.com/presentations/katz-rails3/). Then those opinions were recomposed from those primitives ... and anyone who knows this can create their own opinionated framework out of the same primitives (I did just that with a personal project, intermodal: https://rubygems.org/gems/intermodal/versions/0.4.0). This is the "layering" discussed in the article.

3. Kubernetes is a good example of composable self-healing processes. Kubernetes enables higher-level abstractions for different PaaSes, but anything built on top of Kubernetes primitives can be modified and extended. By itself, Kubernetes has a very high floor (good plumbing is hard, and invisible when done well), and there is still a lot of room for products built on Kubernetes with a much lower floor.

4. Erlang's BEAM has a selection of very powerful concurrency primitives (first-class processes as actors; mostly immutable data structures). OTP is a set of higher-level, often-used shortcuts composed from those concurrency primitives (GenServer, Supervisor, etc.). Many applications requiring concurrency can be expressed as a composition of OTP patterns.

5. Elixir is built on top of BEAM. The language itself started with a set of barebones primitives, including macros. The syntactic sugar (shortcuts) was built from macros, and the Elixir community slowly developed useful syntax out of that. Other languages with similar capabilities include Lisp, FORTH, and Tcl.


I can't help but keep getting distracted by C++ on the chart. While it has a few design missteps, it seems to me to be very much the kind of unicorn the author is talking about. In day-to-day use and for beginners it has good abstractions, but it always reminds you of the power and flexibility under the hood, which you can tap into for the occasional specialized use case or for library creation.



