I use what I thought was dimensional analysis very often, having learned about it 10+ years ago during my Physics studies. It's a very powerful technique, both for checking that your equations are correct (i.e. redo the same equations with just the concrete dimensions, and you should end up matching the dimensions of your result) and for checking that you have correctly carried the magnitudes (kilo, giga, etc.).
I've never seen dimensional analysis without using concrete dimensions, so this blog post was really eye opening. It seems a little magic to me, so I'm interested to learn more about a quick and helpful technique I thought I mastered.
Yeah, in this case it's a bit misplaced. The first thing your high-school calculus course would have you do here is called “u-substitution”: if you substitute u = x √(a), du = √(a) dx directly, you get the same result immediately, or if you do something stranger like u = a x², dx = ½ du/√(a u), you get
(1/√a) ∫ exp(-u) u^-½ du
which furnishes a handy proof that (-½)! = √π, once you know the Gaussian integral.
I'm glad we had a similar reaction because I went in with the mindset of concrete dimensions. It helped me a lot in Chemistry classes with all those mols.
Another good example of this is the relationship between the period of a pendulum (T), its length (L), and the acceleration due to gravity (g). Since L is measured in m(eters), T is in s(econds), and g is in m/s^2, there is only one way to combine these quantities that will give the right units: T ~ sqrt(L/g). And that's the right answer, up to a constant.
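For concreteness, the bookkeeping behind that argument (k is the undetermined dimensionless constant, which for the small-angle pendulum happens to be 2π):

    T = k \, L^a g^b, \qquad \mathrm{s} = \mathrm{m}^a \left(\mathrm{m}/\mathrm{s}^2\right)^b = \mathrm{m}^{a+b}\,\mathrm{s}^{-2b}
    \Rightarrow\ a + b = 0,\ -2b = 1 \ \Rightarrow\ a = \tfrac{1}{2},\ b = -\tfrac{1}{2}
    \Rightarrow\ T = k \sqrt{L/g}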
To calculate the resistance of a specific piece of wire, plug in the intrinsic property of copper (resistivity), and the dimensions of the specific wire. Out comes resistance.
Ω(wire) = ρ(copper) × length ÷ area
Where things get weird are thin films, ICs, and PCB traces with constant thickness.
The unit of sheet resistance is ohms per square.
Because resistance increases with length and decreases with width, multiplying both by the same number doesn't change the resistance. Any square cut from the same sheet of copper has the same resistance between two opposite edges.
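A minimal Python sketch of both calculations (the resistivity and geometry values below are illustrative, not from the comment):

    import math

    rho = 1.68e-8    # ohm * metre, approximate resistivity of copper

    # Bulk wire: R = rho * length / cross-sectional area
    length = 2.0                          # metres
    diameter = 1.0e-3                     # metres
    area = math.pi / 4 * diameter ** 2    # m^2
    r_wire = rho * length / area          # ohms
    print(f"wire resistance: {r_wire:.4f} ohm")

    # Thin film of constant thickness t: R = rho * L / (W * t) = (rho / t) * (L / W).
    # rho / t is the sheet resistance, in "ohms per square": any square (L = W)
    # cut from the sheet has the same edge-to-edge resistance.
    t = 35e-6                             # metres, roughly 1 oz/ft^2 PCB copper
    r_sheet = rho / t                     # ohms per square
    print(f"sheet resistance: {r_sheet * 1e3:.2f} milliohm per square")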
I think there's an interesting parallel to this usefulness in code which the author just briefly mentioned. Static typing can help bridge thoughts about what makes sense to do ("I have a callable with signature (x: int, s: str) -> str, and this function I need to use is (s: str) -> str; huh, my x here is fixed, does it make sense to make a partially applied function? yes, that'll work") as well as, of course, verifying all assumptions make sense at some level (types matching is a necessary, though not sufficient, condition).
Then there's more power unlocked by giving dimensions/units to your types/instances. For instance, overloading the division operator in a class representing length so that when dividing length by time, you get speed. Or, one could overload addition/subtraction in classes representing a currency so that trying to add ¥100 to $100 raises an exception.
In general I think there's a lot of interesting applications for the information that comes along with numbers that we usually just discard. Things in the real world aren't dimensionless that often, and yet our code almost always treats them as if they were.
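A toy Python sketch of both ideas (the class names and behaviour are made up for illustration, not any particular library):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Time:
        seconds: float

    @dataclass(frozen=True)
    class Speed:
        metres_per_second: float

    @dataclass(frozen=True)
    class Length:
        metres: float
        def __truediv__(self, other):
            # length / time -> speed; anything else is left undefined
            if isinstance(other, Time):
                return Speed(self.metres / other.seconds)
            return NotImplemented

    @dataclass(frozen=True)
    class Money:
        amount: float
        currency: str
        def __add__(self, other):
            if not isinstance(other, Money):
                return NotImplemented
            if self.currency != other.currency:
                raise ValueError(f"cannot add {other.currency} to {self.currency}")
            return Money(self.amount + other.amount, self.currency)

    print(Length(100.0) / Time(9.58))            # Speed(metres_per_second=10.43...)
    print(Money(100, "USD") + Money(50, "USD"))  # Money(amount=150, currency='USD')
    # Money(100, "USD") + Money(100, "JPY")      # raises ValueError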
> Then there's more power unlocked by giving dimensions/units to your types/instances. For instance, overloading the division operator in a class representing length so that when dividing length by time, you get speed. Or, one could overload addition/subtraction in classes representing a currency so that trying to add ¥100 to $100 raises an exception.
I sometimes even go one step further and use specific types for dimensionless factors in financial calculations, such as specific types for the rates of different kinds of taxes. This way I get a compiler error when I try to use the wrong type of tax rate in a calculation.
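In Python the closest analogue would be something like NewType plus a static type checker; the type names here are hypothetical:

    from typing import NewType

    VatRate = NewType("VatRate", float)
    IncomeTaxRate = NewType("IncomeTaxRate", float)

    def vat_amount(net_price: float, rate: VatRate) -> float:
        return net_price * rate

    vat = VatRate(0.19)
    income_tax = IncomeTaxRate(0.42)

    vat_amount(100.0, vat)          # fine
    vat_amount(100.0, income_tax)   # mypy rejects this; at runtime both are plain floats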
I had a similar experience in university. I used to take advantage of our Mathcad licence. If you specified units for the variables, you could identify mistakes in your equations and functions very quickly. For example, if you wanted meters, but you got m^(2/3), it was usually obvious where in the equation something went wrong.
Another bonus was that I could focus on the equations and then output the final answer in either metric or franken-imperial units, as the homework assignment required (I did my undergrad in the US).
> If you specified units for the variables, you could identify mistakes in your equations and functions very quickly. For example, if you wanted meters, but you got m^(2/3), it was usually obvious where in the equation something went wrong.
You can take it even one step further and write tests to check the resulting units for you.
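For example, with the pint library in Python (a unit-aware numeric type; the pendulum formula is just a stand-in example), a test can assert on the dimensions of the result:

    import math
    import pint

    ureg = pint.UnitRegistry()

    def pendulum_period(length, g):
        # If this were accidentally written as length / g ** 0.5,
        # the dimensional check below would fail.
        return 2 * math.pi * (length / g) ** 0.5

    def test_pendulum_period_has_units_of_time():
        period = pendulum_period(1.0 * ureg.meter, 9.81 * ureg.meter / ureg.second ** 2)
        assert period.check("[time]")                                # right dimensions?
        assert abs(period.to(ureg.second).magnitude - 2.006) < 0.01  # right value?

    test_pendulum_period_has_units_of_time()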
You could take it a step further and specify the domain and range of your functions. The division operator, for example, takes any real number except 0 as its divisor – zero is a special undefined case specifically for division.
Although string types seem intractable, you could easily specify things like [a-zA-Z0-9], or character sets/UTF ranges for limiting inputs to certain languages. All list types (strings, arrays) could specify length as a limit.
Programmers would naturally want flexibility with domain enforcement, so they could just write a (Turing-complete) function which returns the domain of the function in question!
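A rough Python sketch of that idea, with runtime checks standing in for real refinement types (the decorator and its name are made up for illustration):

    import re
    from functools import wraps

    def with_domain(*predicates):
        """Attach a domain predicate to each positional argument of a function."""
        def decorate(fn):
            @wraps(fn)
            def wrapper(*args):
                for arg, ok in zip(args, predicates):
                    if not ok(arg):
                        raise ValueError(f"{fn.__name__}: {arg!r} is outside the domain")
                return fn(*args)
            return wrapper
        return decorate

    @with_domain(lambda x: True, lambda y: y != 0)   # divisor must be non-zero
    def divide(x, y):
        return x / y

    @with_domain(lambda s: re.fullmatch(r"[a-zA-Z0-9]+", s) is not None)
    def slug(s):
        return s.lower()

    print(divide(6, 3))     # 2.0
    print(slug("Abc123"))   # abc123
    # divide(1, 0) and slug("not a slug!") both raise ValueError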
Some languages feature support for this. F# has it built-in and refers to it as units of measure. Julia has it available as a package named Unitful.jl.
One of my teachers when studying Physics spent some classes teaching us about dimensional analysis, and since then it has been one of my favourite tools. I remember solving an electromagnetism multiple choice test using dimensional analysis, it felt almost like cheating.
A few years ago, I wrote a paper with other collaborators[1] on how to use dimensional analysis to improve feature selection, which can in turn improve machine learning performance for physical/chemical systems, especially those with small datasets. The full manuscript is available through ResearchGate[2].
This is interesting! Where can I find more information about people using ML models for the physical world that is not drug discovery or anything about proteins?
If you look into the SciML stuff in the Julia community there's a bunch. Some example use cases are trajectory optimization and accelerating climate simulation. Often these ML models are referred to as "surrogate models". Let me know if you have more questions, I can probably point you in the right direction.
Dimensional analysis is one of the most underrated techniques out there. All too often we are taught how to solve equations, but it is equally important to think about how to reason with equations and how to examine the properties of solutions without having to even solve the equations themselves. Sometimes it becomes another quick check you can use to spot careless mistakes, but sometimes you can also use it for much deeper insight, like Kolmogorov's theory of turbulence. I wish more people were aware of it.
The book this blog post is based on -- Sanjoy Mahajan's "Street-Fighting Mathematics" -- is a fantastic read. It is a bit like watching Cirque du Soleil; I don't come away having internalised much because I have no practice thinking that way, but I come away hugely entertained!
That wonderful book is basically upper college level dimensional analysis.
It unfortunately skips the basic level that you learn a little about in high school.
This makes sense because the book was originally a PhD thesis about solving research-level problems with dimensional analysis, so the easy problems were already solved, but it makes for a bit of a steep start for the casual reader who doesn't already understand the basic idea.
I understand the basic idea of dimensional analysis well enough in the context of college-level physics. But that is only one part of the book; it is a catalog of impressive tricks (approximation, analogy, visual proofs) that Mahajan has accumulated over many years.
The review on ams.org (by a structural engineer, not a mathematician) mirrors my difficulty (and fascination) with the book.
Ha! Seeing the headline, I was going to say that this is basically the intro to Street Fighting Mathematics. Reading it, I thought I was just having deja vu with how similar the treatment was. Fun book.
I always think of dimensional reduction when I see these headlines, which is sadly very unrelated.
> I always think of dimensional reduction when I see these headlines, which is sadly very unrelated.
Actually, the two are related. The Buckingham pi theorem of dimensional analysis guarantees the reduction of the number of variables ("dimension" in a different sense) in many instances.
As someone commented, this example is not so mysterious. It's just a change of variables to make the argument of exp() dimensionless.
Here's a good example of the power of dimensional analysis: how small should the Earth be to collapse into a black hole?
Knowing nothing about the problem, you know it should involve at least two things: the mass of the Earth, M, and the gravitational constant G.
Since F=ma and F=GMm/d^2, we know that GM has units of distance^2 * acceleration (check that). This is equal to distance * (distance/time)^2.
We want a radius, which is a distance. And we almost have it! At least if we can get rid of the (distance/time)^2 factor. But that's a velocity^2! Now, what's a velocity that should be natural in questions about black holes and general relativity? Why, the speed of light c, of course.
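Putting numbers in (the exact answer, the Schwarzschild radius, carries a factor of 2 that dimensional analysis alone can't give you):

    R \sim \frac{GM}{c^2}
      = \frac{(6.67\times 10^{-11})(5.97\times 10^{24})}{(3.0\times 10^{8})^2}\ \mathrm{m}
      \approx 4.4\ \mathrm{mm},
    \qquad R_s = \frac{2GM}{c^2} \approx 8.9\ \mathrm{mm}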
In fact this can also be understood "just" as a simple change of variables, to dimensionless quantities, in either Newton's law (as you have shown) or Einstein's field equations!
Lol yes, you might as well just say "in this example, simply solving the integral does the trick".
The whole point of this article is that you can use dimensional analysis to get the form of the answer up to a dimensionless constant. You don't have to know anything about calculus except that dx has the same units as x, and that the integral sign is an additive compounding operation, i.e. it does not change the dimensions of its argument.
Using a change of variables (the standard trick to solve the integral) still requires you to know 1) how the differential changes under "u-substitution", 2) that the derivative of exp is itself, 3) the chain rule, and 4) the fundamental theorem of calculus, which relates definite integrals to the antiderivative of the integrand. In other words, you have to do calculus.
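Spelled out (assuming the article's integral is the Gaussian one discussed upthread), the dimensional argument goes like this, with C the dimensionless constant that only the actual calculus can pin down:

    I(a) = \int_{-\infty}^{\infty} e^{-a x^2}\,dx, \qquad [a] = \frac{1}{[x]^2}
    \Rightarrow\ \text{the only available quantity with the dimension of } x \text{ is } \tfrac{1}{\sqrt{a}}
    \Rightarrow\ I(a) = \frac{C}{\sqrt{a}}, \quad C = \sqrt{\pi}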
I've done a lot of philosophizing on the nature of units and there's a bit of an interesting wriggle there. Most people think "length" and "time" are different things, such that 1m + 1s = garbage. Mathy physics people* say "Aha! They're really the same thing, just set c (the speed of light) to 1!" This, then, raises an interesting question: is there anywhere you can't do this? In other words, is there some irreducible set of dimensions by which the universe operates? I tend to think no, since the units always divide out like this article says. But then, why are lengths what they are? Why did the universe settle on this scale? What even is length, and is "this scale" just an artifact of human perception somehow?
* The same mathy physics people also do fun things like define Gaussian units where charge is proportional to fractional powers of length and mass.
Why are cells mostly the same size across all organisms?
Well, some things get better with area and some things get worse with volume, and since those grow at different rates, there's an optimal size at which the difference is maximized.
Pretty much any emergent constant size can be explained by the intersection of two functions.
Snowflakes are pushed up in proportion to their area and pulled down in proportion to their volume, so there's a maximum size at which they can stay in a cloud.
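The scaling underneath both examples, for an object of characteristic size r:

    \text{area} \propto r^2, \qquad \text{volume (and mass)} \propto r^3, \qquad \frac{\text{area}}{\text{volume}} \propto \frac{1}{r}

So anything supplied through a surface but consumed (or weighed down) in proportion to the bulk gets relatively worse as r grows, which is what caps the size.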
I think this analysis is great and I like it a lot, but it's not quite what I was going for. Here, the size of a cell works against an independent measurement of length like "one meter". But, at a much more fundamental level, I kind of meant why is a meter a meter? If everything in the universe doubled in length overnight, how would we know since our meter sticks would double, too, which admits two possibilities:
1) If lengths are absolutely fixed, why are they fixed at that scale? Why *didn't* things end up doubled (assuming an external absolute to measure against).
2) If they're *not* absolutely fixed, the "fundamental length" appears pretty fixed to us as humans because things kind of look the same day-to-day. Is there a variation we just don't perceive? Is the fixedness we see just us imposing part of our perception on the world?
It's sort of like this illustration of gauge fixing[1] -- just like "how do you know the cylinder is twisted?", you can ask "how do you know your fundamental lengths?"
IIRC fluid mechanics is where this turns up most obviously.
So the issue here is that you don't quite know how a certain shape of boat/airplane will behave in a flow of water/air.
But you do know that whatever the relationships are, they have to be dimensionally consistent. You then work out a dimensionless version of the force (or whatever you're interested in), and that tells you how it scales, since you have a bunch of square, cube, root, etc. terms.
So now you can build a little model, measure the real force you're interested in, and make a guess about what the force would be on the actual size version.
The key here is to have all the relevant dimensionless numbers (e.g. Reynolds number, perhaps Mach number, Froude number, etc.), as well as the geometry (which is also dimensionless: it's a set of angles and ratios), of your wind tunnel model consistent with the real-world situation you're testing. Then you measure other dimensionless quantities (e.g. drag coefficient).
The tests give a correlation from one dimensionless quantity to another.
That can mean, counter-intuitively, that a wind tunnel model needs to have a higher wind speed than the full-size object - so that the Reynolds number is correct.
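A minimal sketch of that Reynolds-number matching in Python (all the numbers are illustrative):

    # Match Re = V * L / nu between the full-size object and a 1/10-scale
    # model tested in the same fluid.

    nu_air = 1.5e-5          # kinematic viscosity of air, m^2/s (approximate)

    v_full = 30.0            # full-scale airspeed, m/s
    l_full = 4.0             # full-scale characteristic length, m
    l_model = 0.4            # 1/10-scale model, m

    re_full = v_full * l_full / nu_air
    v_model = re_full * nu_air / l_model       # speed needed for the same Re

    print(f"full-scale Re: {re_full:.2e}")
    print(f"model speed for the same Re: {v_model:.0f} m/s")   # 10x the full-scale speed

Which also hints at why this gets hard in practice: the required 300 m/s here is getting close to Mach 1, so the Mach number would no longer match.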
This approach of assuming some basic things and seeing how far you can get with just those assumptions always takes you surprisingly further than I would have thought in advance.
I remember reading an article once about how if you make some really basic but reasonable assumptions about how the laws of physics should work (e.g. probabilities add up to 1 at all times), you can get almost all the way to the Schrödinger equation. I was quite impressed.
Now that I am interested in large language models and am beginning to read Street-Fighting Mathematics, my first thought is that large language models could learn a lot (statistical approximate inference) from dimensional analysis: using words like meters and seconds, they could predict the formula for velocity from words related to physical dimensions alone.