The idea of "differently sized infinities" has always felt like a mathematical game to me.
The work of Cantor and beyond regarding infinite sets is generally taught as mathematical "fact", as fundamental as 2+2=4, but unlike 2+2=4, are there any practical constraints or applications around it? It seems more like an almost arbitrary puzzle than "fact".
I don't disagree with any of the proofs or reasoning... it just kinda bugs me that the way the formal cardinality of infinite sets is communicated is as the commonsense notion of "size".
When we make statements such as the size of the set of all natural numbers 1, 2, 3... is the same as the size of the set of all natural even numbers 2, 4, 6..., despite the former containing the latter but not vice-versa... it seems the word "size" -- and associated terminology "larger than", "smaller than", etc. -- is a particularly unhelpful set of words to have chosen for this.
The reason I think this is important is because if you go to e.g. the Wikipedia page for "irrational number", it states in the introduction [1]:
> "As a consequence of Cantor's proof that the real numbers are uncountable and the rationals countable, it follows that almost all real numbers are irrational."
But it seems like an unwarranted leap to go from this formal comparison of cardinality of infinite sets, to the intuitive English-sentence idea that "almost all" real numbers are irrational, as if it had any analogy whatsoever to a statement such as e.g. "almost all" values of 1/x are defined.
Am I missing anything here? I've never heard of any natural practical application of this concept, in a way that suggests it's "true" in the same way algebra and calculus are considered to be.
"The idea of "differently sized infinities" has always felt like a mathematical game to me."
How do you feel about negative numbers?
There is absolutely no way you can have -3 oranges. There are metaphors, and we're all taught what you can do with a negative number, and there are all those lovely proofs of the properties of negative numbers, but you will never see -3 oranges sitting on someone's dining table.
(And I'm not even going to go into the reals---they're just ridiculous.)
The commonsense notion of "size" is one of those metaphors, but it's a good one: for non-infinite sets, the idea of a bijection and the idea that two sets have the same size match exactly. Consider three blocks, one labelled "1", another "2", and a third "3", and three oranges. Block "1" goes with that big, fat orange. Block "2" goes with the smaller orange. And Block "3" goes with the orange that's kind of pear-shaped.
Now, the extension of "size" to infinite sets is a little weird, mostly because infinite sets are very weird. As is the phrase "almost all"; it's not immediately obvious that, because there are more Rs than Ns, there are a lot more Rs than Ns.
But it's true (in the sense that it's how math works), and you just have to get used to it the same way you got used to -3.
Reminds me of a joke:
A biologist, an engineer and a mathematician see two people enter a seemingly empty building, and a moment later, three people leave.
Biologist: They must have reproduced
Engineer: It must have been a measurement error
Mathematician: If one person enters the building, it will become empty
That's one of those metaphors plus what you have been taught about how negative numbers work. It's a product of Cartesian geometry (which had to be invented).
The difference is that negative numbers are 1) practically useful, and 2) extremely interrelated with other useful mathematical concepts. We build up an entire system of real (and even complex) algebra.
While with differently-sized infinite sets, the point is that they aren't (to the best of my knowledge) either 1) directly useful or 2) particularly interrelated with anything else. E.g. they're in addition to the set theory used for real and complex analysis -- not a foundation for it.
And so it seems more apt to put the notion of measuring different "sizes" for infinite sets in the same category as, say, quaternions and sedenions. I don't think too many people would say sedenions are "true", just that they're a construction with certain properties. They're not derived from anything, in the way that negative numbers are derived from inverting addition. Yet math textbooks and courses indeed attempt to present the "sizes" of infinite sets as "true" -- as real as negative numbers -- when I still fail to see how they're anything but a relatively arbitrary curiosity like sedenions.
The Mathematician blinks several times, looking at you as if you'd just grown a third elbow. :-O :-)
The problem is that there's nothing arbitrary about Cantor's work. Now, you could (along with Aristotle, IIRC) just declare "there are no actual infinities, only potential infinities" and thus that the whole investigation is verboten. But that's not very mathy.
Another alternative would be to redefine the fundamentals to outlaw actual infinite sets, but if you do that then you have to redo everything, because some of the things that you might consider useful and related to other things go out the window. (Offhand, I think the result would be constructive mathematics. Putting on my computer scientist hat, I'm perfectly happy with constructive math. But most mathematicians aren't really.)
But if you allow infinite sets, then the properties of them, including infinite sets of different sizes, are derived directly from basic set theory. They are as "true" as anything else in math. (I'm a formalist---we're just playing a game that has a specific set of rules. :-))
"Sedenion neural networks provide a means of efficient and compact expression in machine learning applications and were used in solving multiple time series forecasting problems."
"Metacognitive Sedenion-Valued Neural Network and its Learning Algorithm" (https://ieeexplore.ieee.org/document/9160921/).
> While with differently-sized infinite sets, the point is that they aren't (to the best of my knowledge) either 1) directly useful or 2) particularly interrelated with anything else.
The distinction between countable and uncountable is very practically useful. You can always encode the elements of a countable set such that every element is represented in the computer by a finite-length bit string, but you can never design an encoding that does this for every member of an uncountable set. No exceptions.
In addition, this notion of distinguishing countability from uncountability can be applied to show that the set of Turing machines is countable (up to isomorphism). This immediately informs one of the limits of computation, and why, e.g. trying to design a computer or program that performs exact arithmetic operations on real numbers in general, or say finding the limit of a series in real numbers in general, is as achievable as squaring the circle.
And make no mistake, the computational modeling that today permeates every aspect of society would look very different if exact operations on real numbers were computable and efficient. (And for the general programmer, there'd be no need to worry about the imprecision of floats, especially with financial stuff.)
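To make the countable-encoding claim above concrete: the Calkin-Wilf sequence walks through every positive rational exactly once, so each rational gets a finite index, and that index written in binary is a finite bit string naming it. No analogous enumeration can exist for the reals. A sketch in Python (the function names are my own):

```python
from fractions import Fraction

def calkin_wilf():
    """Enumerate every positive rational exactly once (Calkin-Wilf sequence)."""
    q = Fraction(1, 1)
    while True:
        yield q
        # Successor formula: next = 1 / (2*floor(q) - q + 1)
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)

def index_of(r, limit=10**6):
    """Return the finite index at which the rational r appears.

    That index, written in binary, is a finite bit string encoding r.
    """
    for i, q in enumerate(calkin_wilf()):
        if q == r:
            return i
        if i > limit:
            raise ValueError("gave up searching")
```

So `index_of(Fraction(3, 4))` terminates with a finite answer for every positive rational you hand it; the whole point of uncountability is that no function like this can exist for arbitrary reals.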
> While with infinite sets, the point is that they aren't (to the best of my knowledge) either 1) directly useful or 2) particularly interrelated with anything else.
Formal treatments of calculus are based around infinitesimals. Infinitesimals are the same set of ideas about infinite sets, but applied to the spaces between arbitrarily close points.
Specifically formal treatments of calculus work because between any two points there are an infinite number of points and the size of that infinity is the same size as the entire set of real numbers.
At least this is my undergrad level of comprehension. I'm certain there are more exotic treatments, but this is the one that you have to do all the proofs for.
You're absolutely right. I don't know what I was thinking when I wrote this last night.
I meant to say that formal treatments are based around being able to find arbitrarily small neighborhoods around a point, and that this only works when you have something the size of the reals, for which you can find a real-sized space between any two points.
It's this infinite self-similarity at any arbitrarily small scale that is at the heart of calculus.
The problem here is that you're basing your conception of what is "good," "interesting," and "useful" based off of what you might discuss in an undergraduate class in Real or Complex Analysis.
The notion of cardinality and how we can have different "sizes of infinity" is an enormously important and interesting idea that comes up all the time in mathematics.
How can you say that infinities of different sizes are not useful? All of analysis is built on the fact that R is the unique complete ordered field, and therefore uncountable.
And if you want every infinite set to have size continuum, then N (and so Z and Q) break horribly.
The field of constructive analysis would beg to differ with you.
An amazing amount of real analysis can be done with absolutely no reference to the excluded middle, completeness axiom, or accepting the existence of uncountable sets.
Note: although the phrase “almost all” sounds like an intuitive English-sentence idea, it actually has a very precise mathematical meaning: it means the complement has Lebesgue measure zero.
It sounds like maybe you take issue with this phrase. I kind of think about it probabilistically: if you start uniformly sampling decimal digits 0-9 at random d1,d2,d3,… then the corresponding number
0.d1d2d3…
(with the digits going on forever) will be irrational with probability 1.
I think the problem is that infinity doesn't exist in real life. We don't have access to unbounded quantities, so any discussion of them is necessarily going to be more abstract than the notions we're used to dealing with.
The results may be unintuitive, but they follow from very intuitive ways of doing counting. Two piles of rocks are the same size if you can line them up so each rock from one pile matches with a rock from the other pile. But once you combine that with mathematical induction, you start getting into the realm of objects that don't exist in real life and it starts looking weird.
I think there's an easy way to draw an analogy between the statements that "almost all" real numbers are irrational and "almost all" values of 1/x are defined. If you were to pick a random real number from the domain of 1/x, the chance of picking one for which 1/x is undefined is zero. Similarly, if you pick a random real number, the chance of picking a rational one is zero.
> We don't have access to unbounded quantities, so any discussion of them is necessarily going to be more abstract than the notions we're used to dealing with.
I've always thought of infinite as very concrete unbounded processes, rather than quantities (but then, I'm a computer scientist, not a mathematician).
You cannot count the size of an infinite set because you can never stop counting; the process goes on and on. So the way to compare infinities is to map things between them.
If you can map all elements from the first infinity to the second, but not the other way around, the second is larger than the first. It makes sense to define this process as the way to compare sizes, since actually counting all their items is ruled out.
"Almost all" elements of an infinite set could be defined similarly: if for each item without a particular property you can create a limitless variety of different items that have it.
> If you can map all elements from the first infinity to the second, but not the other way around, the second is larger than the first.
This seems simple enough but it's still very much counter intuitive when comparing say all the natural numbers with only the even numbers. An intuitive mapping would lead you to think there are more naturals than evens, but math says these two infinites are of the same size.
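The bijection behind that counterintuitive claim is just n ↦ 2n, which sends every natural to a distinct even number and hits every even number. A trivial Python sketch (the function names are mine):

```python
def to_even(n):
    """Map the natural number n to the even number 2n."""
    return 2 * n

def to_natural(m):
    """Inverse map: the even number m goes back to m // 2."""
    return m // 2

# The two maps undo each other, so neither set has an element
# "left over" -- which is exactly what "same cardinality" means.
sample = [to_even(n) for n in range(1, 6)]  # [2, 4, 6, 8, 10]
```

The "intuitive mapping" (each even number to itself) is an injection that isn't a surjection, which is what makes the subset feel smaller; cardinality only asks whether *some* perfect pairing exists, and here one does.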
> When we make statements such as the size of the set of all natural numbers 1, 2, 3... is the same as the size of the set of all natural even numbers 2, 4, 6..., despite the former containing the latter but not vice-versa... it seems the word "size" -- and associated terminology "larger than", "smaller than", etc. -- is a particularly unhelpful set of words to have chosen for this.
It seems to me that when you're counting things, you wouldn't care what specifically the things you're counting are; while in your example it does matter, for determining the subset relation. Any way of counting where it mattered would be kind of weird.
> it seems like an unwarranted leap to go from this formal comparison of cardinality of infinite sets, to the intuitive English-sentence idea that "almost all" real numbers are irrational
But the article uses "almost all" in the formal sense? Which, by the way, also has pretty intuitive meaning, in my opinion.
> But the article uses "almost all" in the formal sense? Which, by the way, also has pretty intuitive meaning, in my opinion.
By formal sense, do you mean everywhere but a finite-measure set? Zero-measure?
I'll use finite-measure because it allows for intuitive constructions like "almost all real numbers are outside the closed unit interval 0 <= x <= 1".
But, there are still some constructions that a layperson might expect to hold, like: "almost all real numbers have fractional part < 1e-100", or "almost all positive numbers are of the form x.y with 0.y < 1/x" (thanks, harmonic series).
I think that without formal training, we're especially bad at reasoning about dense sets such as the set of rational numbers, compared to, say, the reals.
> ... intuitive constructions like "almost all real numbers are outside the closed unit interval 0 <= x <= 1".
But that would be wrong, wouldn't it? I can produce all real numbers by pairing a finite number of real numbers (in this case 3) outside the set with each real number inside the set.
For each real x, in 0 <= x <= 1, we also have:
1/x (covers all real x, 1 <= x < +infinity)
-x (covers all real x, -1 <= x <= 0)
-1/x (covers all real x, -infinity < x <= -1)
The cardinality of all those reals outside of 0 <= x <= 1 is therefore 3x the cardinality of those inside 0 <= x <= 1, in this construction. But for infinite cardinalities the 3 can be discarded.
So there are exactly as many real numbers in 0 <= x <= 1 as outside it.
I don't disagree that these sets are the same cardinality. But cardinality isn't the only way to describe the "size" of a set.
I suppose the typical measure-theoretic definition of "almost all" / "almost everywhere" insists on "everywhere but a zero-measure set", and you can't define a sigma-additive measure that treats intervals of finite Lebesgue measure as such while ascribing nonzero measure to sets of infinite measure.
But even so, the Lebesgue measure of R is infinite, while the same measure of the unit interval is 1.
If I do it the "normal" way (repeating the [0, 1] interval infinitely many times), there are infinite times as many reals outside the [0, 1] interval as there are in it.
But that "infinite times" is a countable infinity - the number of integers. How does "the number of reals in [0, 1] times the number of integers" compare to "the number of reals in [0, 1]"? Are they the "same" infinity?
What if we use rationals instead of reals? We can do the same x 3 thing, right? But the number of rationals is countably infinite, and "3 times countably infinite" is the same as "countably infinite times countably infinite", isn't it?
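The answer to that last question is yes: "countably infinite times countably infinite" is still countably infinite. The Cantor pairing function makes that concrete by matching pairs of naturals one-to-one with single naturals. A small Python sketch (the function names are my own):

```python
def pair(i, j):
    """Cantor pairing function: a bijection from pairs of naturals to naturals."""
    return (i + j) * (i + j + 1) // 2 + j

def unpair(n):
    """Inverse of pair: recover (i, j) from n."""
    # Find the diagonal w = i + j with w*(w+1)/2 <= n < (w+1)*(w+2)/2.
    w = int(((8 * n + 1) ** 0.5 - 1) // 2)
    while (w + 1) * (w + 2) // 2 <= n:  # guard against float rounding
        w += 1
    j = n - w * (w + 1) // 2
    return w - j, j
```

Since N x N pairs off exactly with N, a countable union of countable sets (your "repeat the interval once per integer" picture, or 3 copies, or countably many copies of the rationals) never escapes countability. The jump to a strictly bigger infinity only happens with the reals.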
Maybe not the most formal of meanings, but my favorite is a probabilistic one: given a random element, how likely it is that it satisfies a predicate? If some elements don't, but it's still satisfied with probability 1, that's pretty clearly almost always.
EDIT: yeah you guys are right, I wouldn't worry too much about the prior not being a proper distribution, but still - this doesn't seem related to the cardinality of sets in a simple way after all!
If you are referring to a measure then 'almost all' pretty much exclusively means every except for a zero-measure set. I've never encountered another definition when dealing with measures.
But consider a different, entirely valid context of "almost all":
"Almost all natural numbers are greater than 10."
"Almost all prime numbers are odd."
If we wanted to extend this intuitively, we might want to support the statement that "almost all positive reals are greater than 10". One option of doing that is by using the nonstandard definition of "everywhere but on a finite-measure set".
Is the meaning unintuitive, or is it in fact simply that the world doesn't match your intuitions?
'cos I mean, I'm not a betting man, but "I bet mathematical facts are correct despite being unintuitive" makes my (winning, profitable) bet in December 2020 that Trump lost look like a true gamble by comparison.
> But it seems like an unwarranted leap to go from this formal comparison of cardinality of infinite sets, to the intuitive English-sentence idea that "almost all" real numbers are irrational, as if it had any analogy whatsover to a statement such as e.g. "almost all" values of 1/x are defined.
What's the objection? Almost all values of 1/x are defined. The analogy is... it's the same usage with the same meaning. That's not even an analogy.
Almost none of the numbers of the form 1/x have definitions. Definitions have finite length, so there are only countably many numbers with definitions. But there are uncountably many numbers of the form 1/x.
For "nothing to do with cardinality", the Cantor set has the same cardinality as the reals, and yet has measure zero.
For arbitrarily small size, let's start with an enumeration of the rationals. Now pick ε > 0. Let's put an open interval of size ε/2 around the first rational, ε/4 around the second, ε/8 around the third and so on. The union of those intervals is an open set of length bounded above by ε/2 + ε/4 + ε/8 + ... = ε. (Note, it is actually smaller than this because some of the intervals overlap...)
That constructs an open set, which includes every rational, of size as small as we like.
So "cardinality" and "the measure of a set" have no particular relationship, other than that the measure of a countable set is always 0.
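The geometric bound in that construction is easy to sanity-check numerically; here's a small sketch using exact rational arithmetic (the helper name is mine):

```python
from fractions import Fraction

def cover_length(eps, n):
    """Total length of the first n covering intervals:
    eps/2 + eps/4 + ... + eps/2**n."""
    return sum(eps / Fraction(2) ** k for k in range(1, n + 1))

# However many rationals we cover, the total interval length stays below eps,
# so the rationals fit inside an open set of arbitrarily small measure --
# despite being infinite and dense in the reals.
```

(And because overlapping intervals get double-counted here, the measure of the union is even smaller than this sum.)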
A statement is true of 'almost all' x in the set X if, when sampling from X, the probability that the statement will be true of the value you sample is exactly 1.
If you think there's a conflict between "arbitrarily small size" and "nothing to do with cardinality", there isn't; this is a different kind of size.
(OK, there is a relationship to cardinality, but two sets of the same cardinality can be different sizes by this metric, and two sets of the same size can have different cardinalities.)
Because yes, almost all values of 1/x are defined -- for every real value of x except for one, the number zero.
While saying "almost all" real numbers are irrational... is saying that real values are all irrational, except for... the entire infinitely large set of rational numbers.
The idea of "almost all" and "except for an infinite number of items" is not even close to the same commonsense meaning, not even remotely.
That's my whole point -- that meanings are being mixed up, that this way of classifying infinite sets shouldn't be conceptualized as "size" at all because the analogies break down, well, instantly.
> While saying "almost all" real numbers are irrational... is saying that real values are all irrational, except for... the entire infinitely large set of rational numbers.
You're claiming that the precise mathematical definitions of these terms are ridiculous, and then just repeating those mathematical definitions of the terms to support your claim.
> That's my whole point -- that meanings are being mixed up, that this way of classifying infinite sets shouldn't be conceptualized as "size" at all because the analogies break down, well, instantly.
The whole point is that when you're comparing infinite sets, common-sense terms like "size," "almost all," etc. don't make any sense at all, so mathematicians create precise definitions for the terms. Those definitions are then used among all mathematicians, and they do make sense.
The fact that these definitions don't precisely match the definitions used by non-mathematicians is precisely the goal, because those definitions make no sense whatsoever when referring to infinite sets. It's okay to complain about using jargon in discussions not related to mathematics, although that's not what is happening here (and it's difficult to imagine a conversation about infinite sets that is not also about mathematics). This complaint is comparable to complaining that the definitions of "sharp" and "flat" in music theory don't at all match the definitions of "sharp" and "flat" used in conversations not about music.
I suppose another way to approach this is to ask what you would propose the term "almost all" to mean when referring to an infinite set? For that matter, what do you propose that the term "almost all" means to a layperson referring to small finite sets? What percentage do you need in order to use "almost all"?
Mathematics is a language unto itself, and some concepts are more difficult to map to natural language than others. Even if you had a much broader English vocabulary, with unique words assigned to every currently known mathematical abstraction, the intuitive sense in which we speak and read and understand will clash with the behavior and features of mathematical symbols.
Natural language maps onto an embodied experience designed by evolution. Math transcends what our brains are built to expect and predict. Infinities, exponentials, quantum entanglement, and an infinite array of other very real things simply exist outside what we are capable of understanding intuitively, without some special conjunction of neural wiring, training, or happenstance.
The nature of infinities might seem like navel gazing, but it helps us understand the features of different types of numbers, and those are the building blocks of proofs. Knowing to which infinity class a number belongs might help inform the optimization of real world engineering of chips or data storage, or drug design algorithms, and so on. It could eventually be part of figuring out practical quantum computers, solving p vs np, or maybe just a piece of a better factorization algorithm that improves cryptography.
I could also mathematically say that "almost all" integers are not powers of 58279. I think that would fit the commonsense meaning, even though there are infinite counterexamples.
> The work of Cantor and beyond regarding infinite sets is generally taught as mathematical "fact", as fundamental as 2+2=4, but unlike 2+2=4, are there any practical constraints or applications around it?
Yes. There exist libraries that allow us to do arithmetic with arbitrarily large integers with exact precision in a computer, but none that allow us to do arithmetic with arbitrary real numbers. This is because you cannot ever find an encoding that assigns each element of an uncountable set a finite-length string, but you always can find one such for countable sets, no exceptions.
So in practical day-to-day, the programmer (and computational modeller) has to settle for all the quirks of imprecision when dealing with floats, and this imprecision makes a lot of algorithms more complex and limited than the mathematical computation they are trying to model.
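A quick illustration of that asymmetry, using Python's built-in arbitrary-precision integers versus its fixed-width floats:

```python
# Arbitrary-precision integers: exact, because the integers are a countable
# set and each one can be encoded as a finite string of digits.
big = 10**40 + 1
assert big - 10**40 == 1  # no precision lost, no matter how large

# Floats: one fixed 64-bit encoding standing in for the reals, so almost all
# reals are unrepresentable and every operation rounds to the nearest
# representable value.
assert 0.1 + 0.2 != 0.3
assert abs((0.1 + 0.2) - 0.3) < 1e-15  # close, but not exact
```

The integer case can be made exact because a finite encoding exists; the real case can't, no matter how many bits you spend.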
> that allow us to do arithmetic with arbitrary real numbers
Of course there are. You can symbolically encode and manipulate real numbers with exact precision. Sure, it might become unwieldy and is fundamentally constrained by the physical limits of your computer, but the same is true of large-integer computation.
People don't do it usually because it's useless not because it's impossible. Bounded precision is fine most of the time.
That doesn’t let you work with arbitrary real numbers, because the vast majority of real numbers cannot be symbolically encoded, even on a Turing machine with infinite memory.
This seems like just an observation about the less formal nature of colloquial language. We also say things like "70% of people do X", when in actuality the underlying study "extrapolates" from a smaller population via statistical analysis.
All that these statements are attempting to do is summarize a more complex/nuanced statement in terms that a lay person can relate to. It could also be that people are deliberately overloading the meaning of the words "size" etc, which is one well-known mechanism that causes mutations in languages over time.
>We also say things like "70% of people do X", when in actuality the underlying study "extrapolates" from a smaller population via statistical analysis.
Isn't that an approximation because we will never really know exactly how many people do X?
>All that these statements are attempting to do is summarize a more complex/nuanced statement in terms that a lay person can relate to. It could also be that people are deliberately overloading the meaning of the words "size" etc, which is one well-known mechanism that causes mutations in languages over time.
Are they overloading the meaning of "size" or offloading it? Because, again, nobody really knows what the real size or cardinality of anything is, so they offload it to some other future term or meaning, just as you said: "one well-known mechanism that causes mutations in languages over time."
I'm not a mathematician, but I've heard them say that it is better to use the word "unbounded" than "infinite", because infinities are tricky.
> are there any practical constraints or applications around it?
To speak to this question: Suppose you were going to build a calculator program on your computer. You might start with integers and have addition, multiplication, and subtraction. All perfectly fine. Even if the numbers are very big, you can still represent them by using multiple memory slots.
Then you add division, and since you don’t want rounding errors in your calculator, you add rational numbers. Can rational numbers still be represented by a computer? Yes: because they are a countable infinity, you can represent them just as easily as you can represent integers.
You would like your calculator to be as complete as possible so you keep adding functions like roots, exp, log, sin, et cetera. But no matter how many functions you add, you’ll never be able to represent every real number. This is useful to know so that nobody ever tries to build a computer that does this.
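Python's standard `fractions` module is a working example of the countable cases above: integers and rationals get exact finite encodings, while sqrt(2) never can. A short sketch:

```python
from fractions import Fraction

# Rationals are countable, so each one has a finite encoding:
# a pair of arbitrary-precision integers. Arithmetic stays exact.
third = Fraction(1, 3)
assert third + third + third == 1                 # no rounding error
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)  # unlike floats

# sqrt(2) has no finite rational encoding; the best a calculator can
# ever hold is some rational approximation of it.
approx = Fraction(665857, 470832)  # a close rational approximation to sqrt(2)
assert approx ** 2 != 2            # ...but never exactly sqrt(2)
```

So the calculator can stay exact right up until you add that square-root button, at which point you're forced into approximations at some chosen precision.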
> just kinda bugs me that the way the formal cardinality of infinite sets is communicated is as the commonsense notion of "size".
It's not in any place doing maths sensibly.
It's just that both the USA and the UK have a terribly unrigorous way of teaching maths even at university level.
If you take a look at the French Wikipedia article on cardinality, you will see that the word "size" is never used, and the closest thing you will find to it is that cardinality can intuitively be seen as the "number" of elements in a set, with actual quotes around it. The article then compares multiple very rigorous definitions. I expect a translation of it would be unreadable by most Americans, however. The average French reader will have been exposed to significantly more formal mathematics.
Using the word "size" when talking about infinities is just sloppy.
Actually the proofs depend on philosophical assumptions that both can and have been questioned. The conclusions likewise.
If you go down that path, then "uncountable" can mean something closer to, "a self-referential tangle is involved" than it does to "more". For example you can't enumerate the reals. But there is a countable list that DOES include every possible real - you just can't always figure out whether things on that list are reals!
> But there is a countable list that DOES include every possible real - you just can't always figure out whether things on that list are reals!
A (hopefully) helpful way to look at it: Cantor's diagonalization argument doesn't work if every function from natural numbers to reals, that includes all reals, is partial. Which, per the halting problem[0], is exactly the behaviour you get if your correspondence function involves doing arbitrary computations decoded from the input natural.
0: The class of propositions "Turing machine #N halts.", in addition to true and false propositions, also contains infinitely many propositions that are neither true nor false[1], so compacting out the non-halting naturals doesn't help.
1: aka infinitely many counterexamples to the axiom of excluded middle
One of the standard constructions of the reals is via sequences of rationals. A sequence of rationals that looks like it should be converging is called a Cauchy sequence. (I'm avoiding the technical definitions, but they are easy to find.) Two Cauchy sequences that look like they are converging to the same thing, are equivalent. And reals are defined as equivalence classes of Cauchy sequences.
This translates pretty well to a constructive approach. For example we can build our mathematics out of things expressible in a programming language. We can define a Cauchy sequence as a function that can be proven by our favorite axiom system to produce a sequence of rationals converging at a specified rate. Again, two functions are equivalent if they can be proven to produce sequences converging to the same thing. We can certainly enumerate all possible programs. But we cannot, thanks to the Halting problem, write a program that is able to select out which possible programs represent reals. Nor can we reliably identify which pairs of programs are equivalent.
So in this construction there is really no actual set of reals that can be identified. Nor can we tell whether a real has been listed already. But there is a countable list that has all possible things that might possibly represent a real. Which will include each real many times.
Does that clarify what I meant by "self-referential tangle"?
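Here's a toy sketch of that "a real is a program" idea: sqrt(2) represented as a function that, given n, returns a rational provably within 1/2^n of it. (The function name and setup are my own illustration, not a standard framework.)

```python
from fractions import Fraction

def sqrt2(n):
    """A 'constructive real': return a rational within 1/2**n of sqrt(2).

    The real number is identified with this program, which converges at a
    guaranteed rate -- not with a completed infinite object.
    """
    x = Fraction(3, 2)
    # Newton's iteration, kept exact with rationals. Since x stays above
    # sqrt(2) and x + sqrt(2) > 2, |x*x - 2| <= 1/2**n guarantees
    # |x - sqrt(2)| <= 1/2**n.
    while abs(x * x - 2) > Fraction(1, 2**n):
        x = (x + 2 / x) / 2
    return x
```

The tangle you describe is that while any one such program is perfectly concrete, no program can decide, for an arbitrary candidate program, whether it really does converge like this, nor whether two candidates name the same real.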
From within Formalism, which for all intents and purposes won, your characterization is correct. We have the standard reals. And we've constructed a proper subset of the reals.
From within Constructivism, the "standard reals" is a piece of sophistry. It is ridiculous to claim the existence inconceivable infinite swarms of non-existent things whose only claim to "existence" is the sheer multitude of numbers that can never be named or constructed. And therefore the "reals" that I described are a sensible thing to call reals, all of whom have an existence that can be established on reasonable grounds.
From within either philosophy, the other doesn't make much sense. But, in fact, both philosophies are internally consistent, and no logical argument can ever establish one over the other. (In fact, Formalism won because it is more convenient. And for no other reason.)
> you may have to put aside some preconceptions to get the point.
"Preconceptions" is a rather unhelpful way to describe the actual situation here, which is you using a different definition for real numbers, that is not even remotely equivalent, without making that clear from the outset.
You said:
> But there is a countable list that DOES include every possible real - you just can't always figure out whether things on that list are reals!
Which is pretty clearly incorrect using the usual definition of real numbers.
The fact that you don't like how real numbers is usually defined is not sufficient justification for you to start confusing a discussion by mixing in your alternative definition without making an explicit distinction.
You've defined a set that can certainly be the subject of interesting investigation, even in a context where the usual definitions about real numbers and uncountable infinities are still accepted. Choosing to inject a naming conflict is counterproductive and suggests you're more concerned with making smug claims about being able to do things mainstream mathematicians consider impossible, rather than having a productive discussion about how to construct most of familiar mathematics without allowing uncountable infinities.
The name "real numbers" predates the first formal definitions.
The philosophical debate that I pointed to predates the general acceptance of the standardization of the modern definition of the reals.
The standard definitions literally make no sense within constructivism. You talk about constructing familiar mathematics, but you are using non-constructions that depend on questionable and questioned notions of absolute truth.
What I described is as close to standard mathematics as you can come within a constructivist framework. The tradition of calling such constructions "the real numbers" may be new to you, but is actually over a century old.
Here, terms like "almost all" come to us from measure theory (and its application to probability theory), which tries to define a general notion of the size or volume of sets.
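The standard covering argument makes that precise: "almost all reals are irrational" is the statement that the rationals have Lebesgue measure zero, and countability is exactly what licenses it. A sketch:

```latex
% Enumerate the rationals as q_1, q_2, q_3, \dots (possible precisely
% because they are countable) and cover q_n with an open interval of
% length \varepsilon / 2^n. The total length of the cover is at most
\mu(\mathbb{Q}) \;\le\; \sum_{n=1}^{\infty} \frac{\varepsilon}{2^n} \;=\; \varepsilon ,
% and since \varepsilon > 0 was arbitrary, \mu(\mathbb{Q}) = 0:
% the rationals occupy none of the length of the line.
```

No such cover exists for the irrationals, which is the asymmetry the Wikipedia sentence is compressing.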
It's because they kinda follow the same rules as finitely sized sets. Say you have two sets: A and B. A has five elements and B has seven. That means that for each element in A there is a unique corresponding element in B, but the reverse is not true: B has elements that do not map to elements in A. OK, now say A is the set of integers, and B is the set of points on the x-axis. Same deal applies, you can map every integer to a point on the x-axis and still have points on the x-axis that do not map to integers. So we've formalized infinite sets in a way that preserves our intuitive notion of what it means to have more or less of something, without having to count or measure the sets which would be bloody impossible! Yes, it's a game but so is all of math!
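The evens-vs-naturals case in particular is nothing more mysterious than exhibiting such a pairing. A toy sketch (function names are just illustrative):

```python
def to_even(n):
    # Pair the natural n with the even natural 2n.
    return 2 * n

def from_even(m):
    # Every even natural comes from exactly one n, so the pairing is
    # a bijection: that is all "same cardinality" formally asserts.
    assert m % 2 == 0
    return m // 2

# Round-tripping any finite prefix never collides and never misses:
naturals = list(range(1, 11))
evens = [to_even(n) for n in naturals]
assert evens == [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
assert [from_even(m) for m in evens] == naturals
```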
As for "almost all", in statistics we have this notion of "almost surely", which is formally defined as "with probability 1". It turns out that a probability of 1 doesn't guarantee that something will happen, because 1 less an infinitesimal is still 1! So roll a spherical die that yields a real number between 0 and 1; if you roll a 0, you lose. The probability that you will win is still 1, but you might still roll that goose egg! So we say that you will "almost surely" win at this game.
If you think of rational and real numbers in terms of our two sets with different cardinalities, not only is there a real number for every rational number (because the rationals are a subset of the reals), but you can map an infinite number of real numbers to each rational number! Take your pencil and put a dot on a random point on the x-axis; you might hit a point whose coordinate is a rational number, but your chances of doing so are infinitesimal! Therefore, in a formal probabilistic sense, you will "almost surely" choose a point with an irrational coordinate. Therefore, we say that "almost all" the real numbers are irrational.
> Take your pencil and put a dot on a random point on the x-axis; you might hit a point whose coordinate is a rational number, but your chances of doing so are infinitesimal!
But see, that's precisely the kind of intuition I'm arguing against -- the kind of "analogy" that seems indefensible to me.
After all, I can select any pixel from an infinitely zoomable number line on my computer and it will always be rational, every time. Or any measurement you take of a pencil on paper will always be bounded by two rational numbers measured by counting off ticks on a ruler, and unknown within that. One could argue it's impossible to even define what it means to point with a pencil to an irrational number on a straight line. What does it even mean to select a value randomly from multiple infinite sets? And if you define that in some particular way, how do you justify that a pencil could ever do that on paper?
I understand perfectly everything you describe about mapping -- I'm familiar with the math. It just seems misguided and potentially dangerous to me to draw any practical comparisons to it, such as your pencil-and-paper one, because they seem to break down instantly.
> The work of Cantor and beyond regarding infinite sets is generally taught as mathematical "fact", as fundamental as 2+2=4, but unlike 2+2=4 is there any practical constraints or applications around it? It seems more like an almost arbitrary puzzle than "fact".
The field of numerical analysis depends on formal treatments of calculus. Formal treatments of calculus don't work in sets smaller than the reals.
Dedekind cuts (which are necessary for epsilon-delta proofs) or alternatively the Bolzano-Weierstrass theorem (if you use the sequential-convergence definition) imply multiple sizes of infinity, so the logical arrow points in the other direction, roughly: "real analysis implies assigning different sizes to sets", not "assigning different sizes to sets implies real analysis".
It's been a while but iirc there are other constructions (such as p-adic numbers) which create multiple sizes of infinities out of the rationals which aren't real analysis.
The Surreal Numbers are a particularly intuitive sort that create different sizes of infinities. All the ordinals, really. They're isomorphic (in NBG) to a maximal-class hyperreal field of nonstandard analysis.
It should be noted that “almost all” is a probability term, meaning that (under a uniform probability measure on the 0-1 interval) the probability that a randomly chosen number is rational is 0. The probability that a randomly chosen number is irrational is 1.
Totally agreed, as with 2+2=4. It's an interesting topic in philosophy that we often gloss over because the naturals are exceptionally predictive of so many things in the real world.
> cardinals aren't necessarily commonsense
Maybe. The two notions being applied are (1) that if you just rename everything in your set you haven't changed its size, and (2) that if all renaming attempts necessarily leave some elements out then the set containing those elements must be bigger. Any "size" capturing those two ideas is equivalent to a renaming of the cardinals.
> naturals vs even naturals
Any topological sort of the infinite subsets of the naturals preserving the set-inclusion partial order is going to have some oddities. In particular, you'll have a lot of incomparable sets (like the evens and odds, or like all the vertex-deleted N-{x} sets) which are artificially bigger/smaller than each other or artificially the same size, despite the order superficially looking like one based on set inclusion.
> other more commonsense alternatives to cardinals
Even just looking at infinite subsets of the naturals, any ordering you choose must necessarily have the property that there exist infinitely many sizes which each either do not have a next bigger element or do not have a next smaller element. That's a decidedly weird property that doesn't manifest for finite sizes, and it's entirely unavoidable. I'm potentially okay with "size" not being used as the name, but as a counterpoint the motivation is to extend the notion of "size" for finite sets as best as possible.
> cardinals aren't useful
They're often used to prove things that you do use directly. Sometimes those proofs can get convoluted, but a surprisingly effective technique in many domains is just arguing that two sets have different sizes so they can't be the same (or can't map to each other in the desired way or whatever). Cardinalities agree with our intuition about size on finite sets well enough to directly extend to those sorts of proofs. As one example, the math leading up to the fixed point theorems proving the optimal solution to a GAN yields the desired probability distribution is usually done (and originally done iirc) via infinite counting arguments.
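The archetype of such a size argument is Cantor's diagonal construction itself. A toy sketch on finite prefixes (the function name is mine):

```python
def diagonal_escape(rows):
    # Given the first n rows of a purported complete listing of 0/1
    # sequences, build a sequence that differs from row n at column n,
    # so it can appear nowhere on the list.
    return [1 - rows[n][n] for n in range(len(rows))]

rows = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]
d = diagonal_escape(rows)
assert d == [1, 0, 1, 1]
# d disagrees with every listed row somewhere, so no listing is complete:
assert all(d[n] != rows[n][n] for n in range(4))
```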
I'll be the first to admit that my day-to-day as a programmer doesn't often do much with infinite cardinalities, but given that the mathematical world we've constructed for ourselves happens to often line up nicely with the real world, "mathematical games" that leverage existing intuition to expand our knowledge of that mathematical world still seem valuable.
Goodstein's theorem comes with a function on the natural numbers. Whether we can prove that the function works correctly, and indeed whether we can implement the function, depends on how big we allow infinities to be. https://en.wikipedia.org/wiki/Goodstein%27s_theorem#Sequence...
We can implement the function on a Turing machine. Whether we can prove that the function winds up being well-defined depends on which axioms we use. But if you allow transfinite induction up to ε0 (see https://en.wikipedia.org/wiki/Epsilon_numbers_(mathematics)) we can prove that the function works correctly. And this statement can be made without any reference to the size of any uncountable infinities. (Indeed the argument can even be made constructively, within mathematical systems where everything is countable.)
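For concreteness, a minimal Python sketch of that computable function, following the textbook recipe (write the current term in hereditary base-b notation, bump b to b+1, subtract one); the names `bump` and `goodstein` are mine:

```python
def bump(n, b1, b2):
    # Rewrite n from hereditary base-b1 notation into base b2:
    # each digit d at position k becomes d * b2 ** bump(k, b1, b2),
    # with the exponents themselves rewritten recursively.
    if n == 0:
        return 0
    total, k = 0, 0
    while n > 0:
        n, d = divmod(n, b1)
        total += d * b2 ** bump(k, b1, b2)
        k += 1
    return total

def goodstein(m, steps):
    # First `steps` terms of the Goodstein sequence starting at m:
    # bump the base by one, subtract one, repeat until hitting 0.
    seq, base = [m], 2
    for _ in range(steps - 1):
        if m == 0:
            break
        m = bump(m, base, base + 1) - 1
        base += 1
        seq.append(m)
    return seq

# Starting at 3 the sequence dies quickly; starting at 4 it takes on
# the order of 10**121210694 steps to reach 0, yet it provably does.
assert goodstein(3, 6) == [3, 3, 3, 2, 1, 0]
assert goodstein(4, 3) == [4, 26, 41]
```

Running the function is easy; proving that `goodstein(m, steps)` eventually reaches 0 for every m is where the transfinite induction comes in.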
Having ε₀ as proof-theoretic ordinal isn't sufficient to prove Goodstein's theorem; the entire point is that PA has ε₀ as its proof-theoretic ordinal and yet cannot prove the theorem.
The better point to make in the context of the parent comment is that ω is the first of many transfinite numbers; our ability to talk about multiple numbers "above infinity" is intimately related to the set theory which underlies Cantor's theorems. Infinities aren't just about some sort of mathematical game, but directly influence what we can define and describe.
The smallest epsilon number ε0 appears in many induction proofs, because for many purposes, transfinite induction is only required up to ε0 (as in Gentzen's consistency proof and the proof of Goodstein's theorem).
As for the rest of your comment, I'm able to talk classical mathematics but my sympathies are firmly Constructivist. So yes, I really do see most discussion of infinities as part of an explicitly meaningless mathematical game known as Formalism.
You really don't want to try a Wikipedia slap-fight with me. From my original link, at the very top of the page, in the first paragraph:
> In mathematical logic, Goodstein's theorem is a statement about the natural numbers, proved by Reuben Goodstein in 1944, which states that every Goodstein sequence eventually terminates at 0. Kirby and Paris[1] showed that it is unprovable in Peano arithmetic (but it can be proven in stronger systems, such as second-order arithmetic).
Despite PA having such a big proof-theoretic ordinal, PA cannot prove Goodstein's theorem. We need SOL.
Also, as one constructivist to another: Nobody cares la~ Hopefully you know the difference between PA, which describes NNOs, and HOL, which is ambient in each topos. Just because some topoi have NNO (just because HOL can host PA) and topoi recognize Goodstein's theorem (because Goodstein's provable in HOL) doesn't imply that all NNOs can witness Goodstein.
Indeed, double-check your understanding with the following quirk: In the topos Diff for synthetic differential geometry, the natural numbers are decidable and countable, but the real numbers are not decidable (and in fact prove LEM false!) and uncountable. Due to smoothness requirements, the real numbers are fundamentally different from the natural numbers in Diff. These are two different objects, two different infinities, with two different topologies.