
And this is what exponents replaced. I think mathematical notation is like vim: easy to use but hard to learn. For example, I cannot imagine having to express perturbative series, or any serious physical model, in this notation with its "wordy" historical descriptions.



Things like this, or the advantages of Arabic numerals instead of Roman, make me wonder what sort of mathematical insights we're missing out on today due to our current notation.


Sure. Any choice might have some downside. One very interesting thing I always think about, as I hinted at in my post, is the concept of series. Consider this example:

  1 + c_1 a^2  + c_2 a^3 + ...
Common syntax makes multiplication between factors implicit, basically grouping factors together in a compact form. The exponents further compactify the visual presentation. This, in a way, makes each term in the series stand alone and appear as a unit. It seems then no coincidence that in physics and math we often consider partial sums as approximations, and concentrate on particular terms separately. In physics, for example, people think of "order \alpha^2 terms" or "higher order corrections", or of physical quantities that are Taylor series cut off at some order (assuming, of course, that the quantity admits such an expansion). A very familiar example of this is Feynman diagrams, which are fancy Taylor series in powers of the force coupling constants.
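
A quick toy sketch of that "cut off at some order" habit (the coefficients and the value of a below are invented purely for illustration; the point is only that, because a is small, each extra term nudges the partial sum less and less):

  # Partial sums of 1 + c_1 a^2 + c_2 a^3 + ... for a small expansion parameter.
  a = 0.05                          # small "coupling", playing the role of alpha
  coeffs = [1.0, 2.3, -1.7, 0.9]    # made-up c_1, c_2, c_3, c_4
  partial = 1.0
  for n, c in enumerate(coeffs, start=2):
      partial += c * a**n           # c_1 multiplies a^2, c_2 multiplies a^3, ...
      print(f"through order a^{n}: {partial:.10f}")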

One wonders which came first. It may be that the notation followed this focus on individual terms, but it is interesting that no one considers "expanding" transition amplitudes in infinite products or continued fractions. Also, there certainly exists much more knowledge (theorems, technology) around series than products AFAIK. Again, I don't know which came first because I don't know too much about math history, but it seems reasonable that the causation may be reversed.


> It seems then no coincidence that in physics and math we often consider partial sums as approximations, and concentrate on particular terms separately. In physics, for example, people think of "order \alpha^2 terms" or "higher order corrections", or of physical quantities that are Taylor series cut off at some order (assuming, of course, that the quantity admits such an expansion).

There are very good reasons for this. The whole observation is basically the same phenomenon as someone observing, "you know, 34,825,119,276 and 35,174,884,395 are basically the same number for my purposes; I'll just call it 3e10 or, if I'm being really fancy, 3.5e10".

In these applications, the series variable is a very small number. The higher exponents given to it in later terms of a Taylor series make those terms very small compared to the early terms. That's why we take the early terms as an approximation to the whole thing -- we have chosen our representation so that this will be true.

(This is the entire reason for Taylor series in the first place -- a Taylor series is a Maclaurin series adjusted so that the variable can be small for purposes of the series, no matter what its absolute value might be.)
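
Spelled out (nothing new here, just the two standard formulas side by side):

  Maclaurin:  f(x) = f(0) + f'(0) x     + f''(0) x^2 / 2!     + ...
  Taylor:     f(x) = f(a) + f'(a)(x-a)  + f''(a)(x-a)^2 / 2!  + ...

Expanding about a point a close to the x you care about makes the effective variable (x-a) small even when x itself isn't, which is exactly what makes the later terms negligible.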


Yes, I think the parent realises that. The point, I think, is that this stuff would be much harder to realise if we used verbose and prosaic descriptions instead of the visually suggestive modern notation (implicit multiplication, etc.).


As I read the parent comment, it suggests that the reason we think of the first few terms of a Taylor series as approximating the whole thing is that the notation suggests that the series is composed of a sequence of discrete terms. We know that this is wrong; Taylor series were developed so that the first few terms would approximate the whole.


The notation came from trying to write down short polynomials, I think (quadratics, cubics, etc.), which, before algebraic notation, was a huge pain in the butt.

Series notation is just "this is like a polynomial, but it goes on forever, so here are the first few terms."


By the way, this system of naming exponents looks a lot like Roman numerals to me. Prepending a name = raising to a higher power.

It's a bit more ad hoc, because prepending a name multiplies the exponent (rather than adding, as Roman numerals do) -- making prime exponents impossible to express. So they need the notion of a "sursolid" to express exponents (like 5 or 7) that do not factor into twos and threes.
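
If I'm remembering the scheme correctly, the composition works out roughly like this (and the remaining prime exponents each need yet another special name):

  zenzic           = x^2
  cube             = x^3
  zenzizenzic      = x^4   (2 x 2)
  sursolid         = x^5   (prime: needs its own name)
  zenzicube        = x^6   (2 x 3)
  zenzizenzizenzic = x^8   (2 x 2 x 2)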

So, amazingly, he was able to create a system more ad hoc than Roman numerals.


Vim and mathematical notation are efficient for a power user, but I'm not convinced that "easy to use" and "efficient to use" are the same thing.


For any notation, there are trade-offs between ease of learning and power after learning. This is the same for mathematical notations as it is for programming languages. If you can effectively subsidize a complex notation/language so that everyone knows it (for example, through primary education), it pays dividends in many places. There's a reason we use '+' for addition in many programming languages: it's familiar to everyone. Why is that? Because we all learned it long ago and have been using it ever since.


This is the difference between easy and simple. Easy to use is often hard to learn. This is simple. Many of the best tools are this way; violins are another great example. I like Clojure for a similar reason. Rich Hickey explaining the difference at RailsConf: http://m.youtube.com/watch?v=rI8tNMsozo0


I look at it kind of like Perl. You can express ideas in very compact and powerful ways, but the more clever you get the harder it is for other people to read what you've written. Eventually people start to complain that you're working in some sort of write only language that looks like gibberish to all but a small handful of experts.


Perhaps "efficient to use" was the more correct phrase for me to use. As a physicist, I can definitely tell you it's easier than zenzizenzizenzic.


I'm always in awe of what the ancient Greeks managed to figure out with nearly no mathematical notation at all.

Or Calculus in Newton's book. The techniques were sound, but you'd have to be a superman to work the way he did.

Mathematical notation allows for better "chunking" [1] and reduces cognitive load.

[1] https://en.wikipedia.org/wiki/Chunking_(psychology)


Which book, the Philosophiae Naturalis Principia Mathematica?

The thing is, he worked really hard to avoid calculus because it was too new and not widely accepted. Whenever he could give an argument without calculus, he would. It is hard to read, but that's because he's trying to write calculus in the style of Euclid. So most proofs that would involve limits or derivatives are written in a really roundabout way, in terms more familiar to people of the time, who were used to Euclid's geometry. This style of argument survives today in relics such as the geometric proof that lim sin(x)/x = 1 as x -> 0:

http://math.stackexchange.com/a/75151
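
(As I recall, the squeeze argument behind that link: for 0 < x < pi/2, comparing the areas of the inner triangle, the circular sector, and the outer triangle in the unit circle gives

  sin(x) <= x <= tan(x)

and dividing through by sin(x) and taking reciprocals gives

  cos(x) <= sin(x)/x <= 1

so as x -> 0 the middle expression is squeezed to 1. I'm paraphrasing from memory; the linked answer has the picture.)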

It didn't take supermen to read the Principia -- obviously it was read, and its ideas spread far and wide -- but perhaps modern supermen would be required to see the actual calculus behind the veil of Euclid in which Newton had to cast it at the time.

I actually do read an English translation of the Principia from time to time for bedtime reading, and it's not that impenetrable.


And yet James Clerk Maxwell expressed EM theory without vector calculus! I guess they had more time back then, without TV or iPhones to soak up their day.


Scientists today are inventing things 1000x more difficult, and any undergrad understands Maxwell's equations better than he did. That ridiculous quip is moot.


Yeah but: Maxwell's original was an absolute mess.


Right. Now what was "deep math" in 1860 is learned by every physics undergrad the world over. I would count vector calculus notation as helpful in that regard, in the same way that gamma matrix technology helps graduate students grasp spin-1/2 transition amplitudes. Imagine doing those calculations writing out the explicit matrices!
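
(For anyone who hasn't seen it, the sort of compactness at issue here -- e.g. the free Dirac equation is written as

  (i \gamma^\mu \partial_\mu - m) \psi = 0

where each \gamma^\mu is a 4x4 matrix and \psi a four-component spinor, so "writing out the explicit matrices" would turn that one line into four coupled equations with all the matrix entries spelled out.)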



