Symbolic math is not the same thing as formalized math, and it doesn't carry the same guarantees of correctness. That's the price one pays for having black-box functions spit out answers to hard problems in a reasonable time: they won't spit out proofs of correctness alongside their answers.
Any symbolic system should be treated with care, but they can of course be extremely useful. My typical use case is to have it compute very challenging symbolic expressions for me. I then treat the output as a "very plausible hypothesis", which I then prove.
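Concretely, that workflow looks something like this (a minimal sketch in SymPy; the particular system and example are mine, not the parent's): get a closed form out of the CAS, spot-check it numerically, and only then sit down to prove it.

    import sympy as sp

    n, k = sp.symbols('n k', integer=True, positive=True)

    # Ask the CAS for a closed form; treat the answer as a conjecture.
    conjecture = sp.factor(sp.summation(k**3, (k, 1, n)))
    print(conjecture)  # expect n**2*(n + 1)**2/4

    # Cheap numerical spot-checks before attempting an actual proof.
    for N in (1, 7, 100):
        assert sum(i**3 for i in range(1, N + 1)) == conjecture.subs(n, N)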
That's only true for closed source systems. You can easily debug open source ones, look at their insides and so on.
This whole article is really just telling people to use the scientific method in computing too: if you don't know how to reproduce a result, it's not science. Fast computation that a human can't possibly do by hand is something new from the last 30 years, and most scientists still don't know how to deal with it.
It is very sad to see that statisticians have switched to R, biologists to Python, computer scientists to the GNU toolchain, yet physics and maths seem to have been colonized by Mathematica when there is a whole set of open source tools which are superior in every way: PARI/GP, the GNU arbitrary-precision libraries, Axiom, Maxima, and Sage if you want everything under one roof with a unified interface.
As a former physicist that used Mathematica heavily not that long ago, I have to disagree. It would be great if there were open source alternatives that were "superior in every way" but that is just not the case.
For some use cases at least, Mathematica is clearly better than anything else I have tried.
I feel like something like ipython notebooks with the right combination of libraries might eventually get there, but that is unfortunately still years away.
As another physicist who used Mathematica, what you mean is that you were too lazy to think about what assumptions were made implicitly in the calculations you were performing, and liked Mathematica because it did them automagically for you.
This is shown nowhere better in the paper than when they try to calculate Integrate[Exp[-p t] Sinh[t]^3, {t, 0, Infinity}]. In a good CAS, such as Maxima, you will get no result and a ton of errors. Which is what you should get without specifying what sort of variable p is: is it a matrix, a polynomial, a group element of some sort? That it's implicitly assumed to be a real number that might turn out to be complex under some circumstances isn't a feature, it's a bug.
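For what it's worth, this is how a more scrupulous system behaves on that very integral; a minimal sketch in SymPy (my choice of tool, not one discussed here), where the convergence condition on p is surfaced instead of silently assumed:

    import sympy as sp

    t = sp.symbols('t', positive=True)
    p = sp.symbols('p')                    # deliberately no assumptions on p

    f = sp.exp(-p * t) * sp.sinh(t)**3

    # With an unconstrained p a careful CAS should not hand back a bare
    # formula: recent SymPy versions typically return a Piecewise carrying
    # the convergence condition (essentially Re(p) > 3), or leave the
    # integral unevaluated.
    print(sp.integrate(f, (t, 0, sp.oo)))

    # Make an assumption explicit and the closed form
    # 6/((p**2 - 1)*(p**2 - 9)) shows up, possibly still wrapped in the
    # condition p > 3 -- which is the honest answer.
    p_pos = sp.symbols('p', positive=True)
    print(sp.simplify(sp.integrate(f.subs(p, p_pos), (t, 0, sp.oo))))

    # Numerical sanity check at p = 5: the exact value is 6/(24*16) = 1/64.
    print(sp.integrate(f.subs(p, 5), (t, 0, sp.oo)))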
I am genuinely surprised at how confidently you (quite wrongly) diagnosed my problem.
What I did mean was that for my use cases, I found Mathematica was superior to the alternatives. You are welcome to think this was because I was lazy or misguided, but it was definitely not because I liked not having to specify domains for my variables (and I do remember having to write Assuming[p > 0 && Element[x, Reals], ...] and the like often; it definitely does not assume everything is a positive real).
One thing I used Mathematica a lot for was numerical integration of ODEs (that had been derived using the CAS part of Mathematica and had some pretty nasty coefficients). NDSolve in my experience was just better than the competition. You can definitely get nonsense out of it, but with a modicum of care it works incredibly well.
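For reference, the open-source "competition" being compared against looks roughly like this; a hedged sketch with SciPy's solve_ivp, using the stiff Van der Pol equation as a stand-in for the actual equations (which are not given in the comment):

    from scipy.integrate import solve_ivp

    def rhs(t, y, mu=1000.0):
        # Van der Pol oscillator with large mu, a standard stiff test problem.
        return [y[1], mu * (1.0 - y[0] ** 2) * y[1] - y[0]]

    sol = solve_ivp(rhs, (0.0, 3000.0), [2.0, 0.0],
                    method="Radau",   # implicit method suited to stiff systems
                    rtol=1e-8, atol=1e-10)
    print(sol.status, sol.t[-1], sol.y[:, -1])

The "modicum of care" applies here too: the tolerances and the choice of an implicit method are the knobs you have to get right, whatever the tool.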
One of the major driving forces in the design of computer proof systems is that they (a) output proofs alongside their assertions, and (b) those proofs are trivially verifiable. Indeed, verification is a major philosophical and practical force behind any type theory.
Ultimately, this is a major component of the design of a proof checker because while lots of its pieces can be complex and scary and potentially buggy, the proof verifier must always be tiny and easy and impeccable because it bears the entire burden of safety.
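A toy illustration of that division of labour (my example, in Lean; not something from the parent comment): the user-facing machinery can be arbitrarily clever, but what it ultimately produces is a proof term that the small trusted kernel re-checks.

    -- The proof term is the artifact; the kernel's only job is to verify
    -- that it really does inhabit the stated proposition.
    theorem two_plus_two : 2 + 2 = 4 := rfl

    theorem add_comm_example (a b : Nat) : a + b = b + a := Nat.add_comm a b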
A similar argument cannot really be made for CAS, I think. Even open source ones.
(There's a catchy name for this principle, but I forget it. "Somebody's Something")
What? This is a software bug. Open source software has bugs just as closed source software has bugs.
The average user of sage is not going to go hunting for a bug in sage's source, whether it's available or not, which is just the same as with Mathematica. Examining an unfamiliar code base will be of no help when you want to know why a certain integral was evaluated incorrectly.
> open source tools which are superior in every way
They are not superior. At best they are equivalent. What you list is a number of disparate tools that may or may not cooperate together well. What Mathematica provides by way of competition is a set of reasonably polished packages, all in one place.
Mathematica also makes it possible to quickly write a one-liner that solves a problem and lets you move along. This experience is simply not there for tools like sage. Sage's plotting (matplotlib, IIRC) is just not comparable.
The average user of Sage is a PhD in a field that uses maths extensively. I really don't see how you can think that someone like that is dumb enough not to be able to debug a Python function call to the point where something went wrong. It is literally one day's worth of work to learn how to use the Python debugging tools, and it's largely trivial compared to the everyday work you do.
I think that's not right at all. Sage is a fairly massive project, most of which is implemented in separate libraries (not necessarily in python) that are then bound together under sage. To say that you need to know how to debug a python function call is a great understatement.
The function call is not the issue. The issue is that it is a large unfamiliar code base, and the bug, as in this case, would not be an incorrect None somewhere, but will be a mathematical bug somewhere, like an incorrectly written formula. Chasing that is far far more difficult.
Bearing in mind also that the average phd mathematician has at best cursory knowledge of programming and software engineering, enough to get on with mathematics, expecting them to dive into sage and fix things is just unrealistic.
> it's largely trivial compared to the everyday work you do
I think this is also a very unrealistic expectation of users, however skilled they may be as mathematicians.
I imagine a bug in a CAS software can be a few orders of magnitude more difficult to solve than just figuring out how to use python debugging tools. Software development is usually a burdensome task, have some humility, please.
Users don't fix bugs in open source software. For evidence, look at OpenOffice and LibreOffice, both riddled with some horrifying bugs that have been repeatedly documented for years. There's even a guy complaining about one well-known bug in a TED talk. Yet somehow none of their millions of users have fixed them.
These two programs are a prime example of how open source is just as inaccessible as closed source. It's simply too difficult to learn a large complex codebase. People have their own jobs to get on with.
"Users don't fix bugs in open source software."
The serious research-mathematics users of Sage often do fix bugs in Sage, and contribute fixes back. This is one reason a typical Sage release has well over 100 contributors, and we have had overall about 500 contributors to Sage (see http://trac.sagemath.org/).

There's a huge difference in programming skills between typical research-mathematics Sage users and OpenOffice users, because all such Sage users are programmers, and the language they use to interact with Sage is the language it is mostly written in. Yes, Sage has subcomponents in other languages, but an enormous amount -- maybe the majority by now -- of Sage is in Python and Cython. Also, successful mathematicians are extremely intense and dogged in pursuing something they get passionate about. Often they will devote a decade or more to attacking a problem, so spending a few days learning Cython (say) and debugging code is relatively little time in comparison to the overall time they devote to a problem.

Anyway, I'm glad that when I started Sage I didn't believe the statement "Users don't fix bugs in open source software" applies universally. I didn't know either way, so I waited to see, and was genuinely surprised at how false that statement actually is in the case of Sage.
Yeah, LibreOffice has a few issues, but they don't punch me in the face daily like I surprisingly found MS Word does recently. After 14 years of using soffice -> OpenOffice -> LibreOffice simply out of not wanting to keep a windows box/VM, my recent 1 week experience with Word opened my eyes to just how glitchy the other side of the fence still is. An undo stack that doesn't. Image placement that constantly explodes like it's 1997. Press print and my citations/references decide they'll include neighbouring paragraphs instead of just referenced header text. And just try placing two images next to each other without making tables first... This is a trivial operation in libre office. I was also amazed to find that libre office far more readily provides consistent editing experience of various objects across the suite: paste an excel sheet into word? Nope, that'll make crappy exploding word tables. Paste drawing from Visio? Might as well be a JPEG now, because you can't edit that from the word doc. I also really missed the anchor icon that OO/LO provides to indicate where exactly in the text flow you're positioning an object. Sorry for the rant...
"if you don't know how to reproduce a result it's not science."
Astronomy is one of many sciences which can never reproduce its results in the way you're talking about. Do you not consider astronomy to be a science?
In any case, this paper was looking for counter-examples. If found, then they would be all the evidence that's needed. The method to compute the counter-examples is nice, but irrelevant.
You have astronomy as science confused with astronomy as historical records. I can't recreate an eclipse in 500 CE, but I can calculate when another like it will happen in the same place it did.
The dynamic instability of the solar system puts a limit on that predictability.
In any case, tell me about how you can predict gamma-ray bursts. When will the next detection of extrasolar neutrinos occur? How do we recreate the Big Bang?
I am not confused. I regard historical sciences like astronomy, geology, paleontology and archaeology equally part of science even though there isn't the high level of reproducibility of, say, most chemistry.
I also regard nuclear bomb physics to be a science, even though by law it's impossible to reproduce those tests.
So get better telescopes for the tolerance you want. We have no problems making predictions over thousands of years with current technology when it comes to stellar mechanics that agree extremely well with the historical record.
Yours seems to be a very medieval mindset. Just because we are ignorant of the initial conditions of a system doesn't mean we are ignorant of the equations by which it evolves. And the example of chemistry is just bizarre. If we couldn't reproduce the same reaction down to the atom time and time again, our silicon-based infrastructure would have failed a very long time ago. Similarly for nuclear bomb tests: if they were truly irreproducible then things like [1] should happen a lot more often than not.
The solar system is chaotic in the sense that no matter how well you can measure the planets' positions, their future evolution, at about 20 million years into the future, is not predictable. We will never be able to predict an eclipse that far ahead of time. We won't even know when the seasons are.
You claim that I have a medieval mindset. It seems you espouse a 19th century view of science, of the clockwork universe.
Science doesn't require reproducibility, nor your weaker requirement of "how to reproduce."
Predictability is the key to science, not reproducibility, though of course those are inextricably tied when it's possible to reproduce something. We can predict that fossils of a certain type will only be found in a specific layer of geological strata. We can predict that hurricanes will be created by and affected by certain wind patterns. We can predict that radioactive atoms will spontaneously decay.
Even though we certainly cannot reproduce those.
Chemistry is a field where it's easy to set up very similar conditions to previous experiments ("reproduce") and where the expected confidence of predictability is quite high. Hence my use of it as an example. It's much harder to reproduce an observation of a supernova, but we have pretty high confidence that when we do see one it will follow certain patterns.
Give me enough money for telescopes, detectors and computers and it is trivial to calculate the motions of the solar system for as long as you want to whatever precision you require.
The resources might be beyond the capabilities of humanity to ever achieve, but that detracts nothing from the point made. "Chaotic systems" aren't "unsolvable systems". The idea that they aren't reproducible to any desired degree of accuracy is laughable.
To me it means that no matter how precisely you measure everything, at some point even the unpredictability of a single atomic decay is enough to make a difference such that one of the planets may be ejected. As far as I know, that's also the generally accepted meaning.
"In 1989, Jacques Laskar of the Bureau des Longitudes in Paris published the results of his numerical integration of the Solar System over 200 million years. These were not the full equations of motion, but rather averaged equations along the lines of those used by Laplace. Laskar's work showed that the Earth's orbit (as well as the orbits of all the inner planets) is chaotic and that an error as small as 15 metres in measuring the position of the Earth today would make it impossible to predict where the Earth would be in its orbit in just over 100 million years' time."
The observation you quoted isn't using the full equations of motion. The next study on that page uses a change of 1 meter and finds that in 1% of the 2,501 cases Mercury goes into a dangerous orbit, including one where "a subsequent decrease in Mercury’s eccentricity induces a transfer of angular momentum from the giant planets that destabilizes all the terrestrial planets ~3.34 Gyr from now, with possible collisions of Mercury, Mars or Venus with the Earth."
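If it helps to see the mechanism in miniature, here is a toy Python illustration of sensitive dependence on initial conditions; the logistic map stands in for the Solar System, and the numbers are purely illustrative:

    # Two trajectories of the chaotic logistic map x -> 4*x*(1 - x), started
    # a hair apart, become completely decorrelated within a few dozen steps --
    # the discrete-time analogue of "15 metres today, anywhere in the orbit
    # 100 million years from now".
    x, y = 0.2, 0.2 + 1e-12
    for step in range(1, 61):
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")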
You assert, seemingly as a matter of faith, that it is possible to measure all of the relevant factors such that a prediction can be made. We can't predict when an atom of uranium will decay, but we can make statistical predictions about the population. We can't predict when an air molecule out of a mole of molecules will hit the side of a bottle, but we can make predictions about the pressure.
Do you think that we can ever do either of those two cases?
It's the same for the Solar System. As far as we can tell, it's not possible to have accurate enough information to predict the evolution of the Solar System. Even with numerical simulations, the presence of a space craft, or an extra-solar meteorite, might change things after 10 million years - things that can't be predicted.