
Technically, 0^0 is an indeterminate form and has no specific solution. Accurate but unhelpful.

Practically, 0^0 highlights the issue that most of us don't have a good conceptual model for what exponents really do. How would you explain to a 10-year-old why 3^0 = 1, beyond "it's necessary to make the algebra of powers work out"?

I use an "expand-o-tron" analogy

http://betterexplained.com/articles/understanding-exponents-...

to wrap my head around what exponents are really doing: some amount of growth (base) for some amount of time (power). This gives you a "multiplier effect". So, 3^0 means "3x growth for 0 seconds" which, being 0 seconds, changes nothing -- the multiplier is 1. "0x growth for 0 seconds" is also 1, since it was never applied. "0x growth for .00001 seconds" is 0, since a minuscule amount of obliteration still obliterates you.
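For what it's worth, the expand-o-tron framing fits in a few lines of Python. This is my own toy, not from the article; `multiplier` is a made-up name. (Python's `**` operator itself happens to use the 0**0 == 1 convention, matching the "never applied" intuition.)

```python
# Toy "expand-o-tron": a growth rate applied for an amount of time
# yields a net multiplier.
def multiplier(growth, time):
    return growth ** time

print(multiplier(3, 2))        # 3x growth for 2 units of time -> 9
print(multiplier(3, 0))        # 3x growth for no time: nothing changes -> 1
print(multiplier(0, 0))        # 0x growth never applied -> 1
print(multiplier(0, 0.00001))  # a sliver of obliteration still obliterates -> 0.0
```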

This can even be extended to understand, intuitively, why i^i is a real number (http://betterexplained.com/articles/intuitive-understanding-...).




I successfully managed to explain 3^0 to 10-year-olds (as a recovering high school math teacher) as:

  3^2 = 9
  3^1 = 3  (divide 9 by 3)
  3^0 = 1  (divide 3 by 3)
  3^-1 = 1/3  (divide 1 by 3)
  etc
This pattern logically suggests that n^0 = 1 for all real numbers n.

Unfortunately this doesn't really handle 0^0, but fortunately 10-year-olds are rarely that difficult.
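A quick sketch of this divide-down pattern (my own toy, using Fraction for exact arithmetic). It also shows why base 0 is out of reach: stepping the exponent down would require dividing by zero.

```python
from fractions import Fraction

# Stepping the exponent down by one divides the value by the base.
base = 3
value = Fraction(base) ** 2  # start at 3^2 = 9
for exponent in [2, 1, 0, -1]:
    print(f"{base}^{exponent} = {value}")
    value /= base
# 3^2 = 9, 3^1 = 3, 3^0 = 1, 3^-1 = 1/3
```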


This explanation works pretty well on adults: Just include the multiplicative identity (1) in the expansion.

  3^3 = 3*3*3*1 = 27
  3^2 = 3*3*1   = 9
  3^1 = 3*1     = 3
  3^0 = 1       = 1
and likewise:

  0^3 = 0*0*0*1 = 0
  0^2 = 0*0*1   = 0
  0^1 = 0*1     = 0
  0^0 = 1       = 1
I haven't yet tried this on an actual 10-year-old, though.
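This start-from-the-identity algorithm is easy to sketch (my own toy, not a rigorous definition of exponentiation; it only handles non-negative integer exponents):

```python
# Start at the multiplicative identity and multiply by the base once per
# unit of exponent. No division appears, so 0^0 comes out 1.
def pow_from_identity(base, exponent):
    result = 1
    for _ in range(exponent):
        result *= base
    return result

print(pow_from_identity(3, 3))  # 3*3*3*1 = 27
print(pow_from_identity(3, 0))  # 1
print(pow_from_identity(0, 3))  # 0*0*0*1 = 0
print(pow_from_identity(0, 0))  # 1
```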


Also good, but the last line doesn't work (in my opinion): to get from 0*1 to 1 you need to undo the *0, i.e. divide by 0, which is undefined ... Zero: confusing 10-year-olds for centuries!


True. If you approach it from a standpoint of extrapolation from known values, you have a division by zero one way or the other.

What I'd intended was that the value of 1 was reached by applying the same algorithm that was applied to arrive at the other values: start with 1, multiply by the base once per instance of the exponent. No division involved.

It's still incorrect if we want to be strict, of course. That algorithm is not quite the definition of exponentiation, because that algorithm can't really be extended to work outside rational exponents. Exponentiation is defined across complex numbers (ignoring 0^0 for the moment). I think this is acceptable because I'm only shooting for an explanation, which doesn't need to be strict.


That's what the parent poster meant with "it's necessary to make the algebra of powers work out".


Thanks, I was gonna say the same thing. :)


What is "indeterminate form"? What does it mean for expression to "have a specific solution"?

You see, 0^0 = 1, and it's obvious to a mathematician. The only problem is that the function f: [0, \infty) x R -> R, f(x, y) = x^y is discontinuous at (0, 0), and that's what causes problems -- for instance, this is the source of the whole "indeterminate form" notion. If a function f is continuous at (a, b), then for every two sequences a_n, b_n such that lim a_n = a and lim b_n = b, we have lim f(a_n, b_n) = f(a, b). That's why lim (a_n)^(b_n) = a^b if (a, b) != (0, 0), and this is a "determinate form". But if (a, b) = (0, 0), then no matter how we define 0^0, it does not follow that lim (a_n)^(b_n) = a^b = 0^0, because in this case lim (a_n)^(b_n) can be any nonnegative value, and so mathematicians used to call it an "indeterminate form" (the term is not common today, though). So, since this problem is unsolvable in a consistent (continuous) way, we define 0^0 = 1, to be consistent with exponentiation rules, at least.
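A numeric illustration of that discontinuity (my own sketch): in both rows below a_n -> 0 and b_n -> 0, but the limit of a_n^b_n depends entirely on which pair of sequences you choose.

```python
import math

# Two paths into (0, 0) with different limits of a_n^b_n.
for n in [10, 100, 500]:
    path1 = (1 / n) ** (1 / n)       # a_n = 1/n,    b_n = 1/n:  -> 1
    path2 = math.exp(-n) ** (1 / n)  # a_n = e^(-n), b_n = 1/n:  = 1/e for every n
    print(f"n={n}: (1/n)^(1/n) = {path1:.6f}, (e^-n)^(1/n) = {path2:.6f}")
```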

I've never seen a need for an "intuitive" explanation of exponentiation -- the usual definition is as intuitive as one can get. The thing is, most people do not know _why_ expressions like pi^e are supposed to make sense -- they just take exponentiation as given. Only afterwards do they need to make up some explanation of why the "exponentiation rules" are the way they are, and what exponentiation is about. Hell, people don't even know what real numbers are! How are they supposed to make sense of exponentiation with an exponent other than a natural number?


Mathematicians don't argue about what an expression "really is" (or at least, real mathematics doesn't involve this). They define functions and use axioms to prove theorems about them.

"No really". Mathematics just isn't concerned with this stuff. Sometimes infinity it defined as single point making the real number compact, sometimes a "positive infinity" and a "negative infinity" are defined. Sometimes you add points to a given function to make it more tractable and sometimes you don't. But none of this "means" anything. The real number line can be embedded in a number of topological spaces. At least two division rings and various things (the complex numbers are most common). The way you extend a given function (say e^x) is going to vary depending on what space you're looking at as well as what topic you're interested in.

Math works with systems of definitions and gets theorems out of them. If you want to know what something "really is", consult philosophy or something.


This.

Math is a tool (and sometimes abused for pure pleasure, 200 years later applied to make hard crypto work). If your definition doesn't make sense for the application, fix your definition and get over it.

Another example I've recently often bitched about in discussions is modern measure theory and its application to probability calculations. People just don't get the concept of an event that is theoretically possible but has probability 0, i.e. can be ignored. But without Lebesgue integration, the L_p function spaces are not complete, and an awful lot of stuff stops working properly. Among it, essentially all of modern physics.

The sane approach is to get over the "this doesn't make intuitive sense" bickering and just use the definitions to derive useful results. And after a few years of playing around with stuff and applying the un-intuitive definition, it becomes intuitive ;-)


Interesting that you say that. I've skimmed but have been meaning to properly read Nelson's Radically Elementary Probability Theory: http://www.math.princeton.edu/~nelson/books/rept.pdf and http://www.stat.umn.edu/geyer/nsa/. These do away with that problem altogether, as well as with infinite constructions (replaced by hyperfinite ones), by replacing measure theory with nonstandard analysis. The gain is, at least, increased intuitiveness.


Are you sure you're replying to the right post? I ask because nothing I see in yours reads as a reply to mine -- it actually repeats my point.


joe's reply clarified things somewhat - at least for me. Are replies always supposed to be rebuttals?


Are replies always supposed to be rebuttals?

It turns out to be so. If a reply is not a rebuttal, it is usually preceded with something like "To clarify, ..." or "I wanted to add that ...". I just got confused without it.


I felt the need to state what I thought was the case. Maybe it was originally a rebuttal, but if you don't disagree, feel free to take it as a clarification.


Anthropomorphizing math as having a "concern" is also fuzzy thinking.

>If you want to know what something "really is", consult philosophy or something.

You mean like the "foundations of mathematics"? http://en.wikipedia.org/wiki/Foundations_of_mathematics


What is "indeterminate form"?

http://en.wikipedia.org/wiki/Indeterminate_form

You see, 0^0 = 1, and it's obvious to a mathematician . . . we define 0^0 = 1, to be consistent with exponentiation rules

Well, you're going to be inconsistent with them no matter how you define it, since, as you point out, x^y should be zero if you approach (0,0) along the x=0 axis, and it should be one if you approach along the y=0 axis.

0^0 is simply an expression that doesn't make sense. There isn't an answer, and there certainly isn't something we could agree to define it as. It is gibberish, nothing more, nothing less. One cannot assume just because there are mathematical symbols on paper that they make sense.


By "exponentiation rules" I mean algebraic equalities, like a^x * a^y = a^(x+y). Most of them work no matter if you define 0^0 = 1 or 0, but some of them are cleaner with 0^0 = 1. It's also consistent with cardinal and ordinal exponentiation (look it up). "Approaching along x axis" is not algebraic notion, it's analytic one.

0^0 makes no less sense than, say, -e^(i pi). They're both 1 because we define them like this. If you think that -e^(i pi) makes more sense than 0^0, please explain to me why.

Also, mathematicians agree in this, seriously. Go and ask one.


> Also, mathematicians agree in this, seriously. Go and ask one.

Mathematician here; we do not. See http://math.stackexchange.com/questions/11150/zero-to-zero-p....

More precisely, as Arturo Magidin points out at http://math.stackexchange.com/questions/11150/zero-to-zero-p..., if we view exponentiation in the 'discrete setting', then $0^0$ must be $1$; whereas, if we view it in the continuous setting, there is simply no good answer—unlike $e^{i\pi}$, which also lives in the continuous setting, but has a perfectly good, unambiguous answer. (lotharbot gives a nice explanation below of the ways that this is consistent with existing mathematics; but it can also be derived from the definition of the exponential function, with no further arbitrary conventions needed.)


I agree with you and lotharbot that if you view things from a continuous perspective, 0^0 is indeterminate.

Thinking back on my math education, part of the difference in viewpoint may be the first time I rigorously met the continuous-domain exponential.

This was in real analysis. Exponentiation is defined first for positive integer exponents, and then for rational exponents. All elementary. Then it's extended to real-valued exponents by taking the limits of rational numbers, and appealing to continuity.

I just looked, this is exercise 6 in chapter 1 of baby Rudin.

So, because the notion of limit and continuity is embedded in this definition of the exponential function, it's natural to "approach" (groan) 0^0 as a special case, because the conditions of this definition (continuity) don't hold.
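That construction is easy to sketch numerically (my own toy; `rational_pow` is a made-up helper that evaluates a rational exponent as an integer power followed by an integer root):

```python
from fractions import Fraction
import math

# 2^pi as a limit of 2^q over rationals q -> pi.
def rational_pow(base, q):
    # base^(p/q) evaluated as (base^p)^(1/q), with integer p and q
    return (base ** q.numerator) ** (1.0 / q.denominator)

for d in [10, 100, 10000]:
    q = Fraction(math.pi).limit_denominator(d)
    print(f"2^{q} = {rational_pow(2, q)}")

print(f"2^pi = {2 ** math.pi}")
```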


But my point is: if 0^0 = 1 is _the_ answer in the discrete setting, and it's _an_ answer in the continuous setting, why don't we just agree that 0^0 = 1 and stop creating a confusing situation where sometimes it's defined and sometimes it's not? 0^0 = 1 does not make calculus theorems any more complicated to state or prove in modern language. It was a problem in the 19th century, when mathematicians did not have a solid foundation with concepts like limit or continuity, but that's over now.

Mathematician here; we do not.

It seems I was a little too bold with my claim. All the mathematicians I know (and I'm a mathematician as well) agree with 0^0 = 1. It's a folklore-specific thing, I guess.


I'm a mathematician as well

As am I, by training if not by profession. As is lotharbot. You're in a thread full of mathematicians. :)

Which is what I would expect on this site, actually. I'm always timid making technical claims here unless I'm sure I'm correct; it seems to be a place frequented by arbitrarily large fish.

To answer your question, if 0^0 = 1 is _the_ answer in the discrete setting, and it's _an_ answer in the continuous setting, why don't we just agree that 0^0 = 1 and stop creating a confusing situation where sometimes it's defined and sometimes it's not?

I'm not persuaded it is always the answer. I think the fact that it is an indeterminate form in limits is a forceful enough demonstration of that. It all depends on context. If I came across a 0^0 in, say, an engineering context, my first instinct would be to check whether the formula was defined in that case, not to just assume that 1 would work.

I mean, it's like 1/0. If you're working in R, that's simply illegal. If you're working in R*, it's the infinite point. If you're taking a limit, it means "unbounded". If you're working in my favorite field, the hyperreals, it could be any number of flavors of infinity depending on the flavor of zero it was.

It would be foolhardy to try to define the symbol; without a context to supply some sort of sense, it is nonsense. And that is how I feel about 0^0 as well.


Please, treat exponentiation just like every other function out there. I don't get the whole limit argument at all. Given any function f: R x R -> R, if it happens that a_n -> a and b_n -> b, but lim f(a_n, b_n) != f(a, b), people just say that f is not continuous at (a, b), and the case is over. However, if f happens to be the exponentiation function, people instead argue that f should not be defined at (a, b), forgetting that the theorem which lets you take the limit of the arguments instead of the limit of the function's values works only under the assumption that f is continuous at the point in question. Instead of noting that there's no contradiction because the assumptions are not satisfied, people just run away from it, declaring 0^0 undefined.

From this point of view, the whole notion of "indeterminate form" makes just as little sense as distinguishing some arbitrary class of functions and calling them "elementary". Why are some points of discontinuity of some functions more special than other points of discontinuity of other functions? Why is sin more elementary than gamma? Historical heritage of confusion, I guess.


Consider this related case: if you evaluate a limit and you get 0/0, you recognize that you need to do more work to find the actual limit. It could be 1, -1, 0, infinite, etc. depending on how you reached it (say, sin(x)/x versus sin(x)/x^2). The issue is not the continuity of x/x; the issue is whether setting a convention for 0/0 would give you the right value for a limit. Since it doesn't always, we call it "indeterminate".

Similarly, if you're evaluating a limit and you get 0^0 you need to do more work. You can't just stop and say "oh, that's 1". It depends on what function you used to get there -- x^x will give you a different answer from ( e^(-1/x) )^x. Again, it has nothing to do with the continuity of exponentiation. The issue is whether the convention of 0^0=1 is correct in the specific part of mathematics you're working in.
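A quick numeric check of those two examples (my own sketch; x is kept in a range where the inner exponential doesn't underflow to zero in floating point):

```python
import math

# Both expressions have the 0^0 shape as x -> 0+, yet settle differently.
for x in [0.5, 0.1, 0.01]:
    f1 = x ** x                  # x^x -> 1
    f2 = math.exp(-1 / x) ** x   # (e^(-1/x))^x = 1/e for every x > 0
    print(f"x={x}: x^x = {f1:.6f}, (e^(-1/x))^x = {f2:.6f}")
```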

The same argument can be made if you're working in the hyperreals, or if you're working with field axioms -- the convention 0^0=1 doesn't work in that context.

Please, by all means, use the convention 0^0=1 when it's appropriate. But understand that it's not always appropriate. Not every mathematician works in the particular subschool that you do; not every mathematician is going to find your convention appropriate.


Consider this related case: if you evaluate a limit and you get 0/0

What do you mean by "getting 0/0" in the process of evaluating limits?

The issue is not the continuity of x/x; the issue is whether setting a convention for 0/0 would give you the right value for a limit.

Please, tell me - what is the relation between lim f(a_n) and f(lim a_n) ?


> "why don't we just agree that 0^0 = 1"

Because sometimes it's better not to. Sometimes it's inconsistent with our definitions.

Just like sometimes we agree that you can't divide by zero, and sometimes we agree that you can. Sometimes infinity is an actual value (say, in the extended reals), and sometimes it's just a symbol for "unbounded". Sometimes we agree that you can't take the square root of a negative number, and sometimes you can. Sometimes we use the axiom of choice, and sometimes we don't (and you can have an awful lot of fun either way!)

Mathematics is contextual. How various operations behave depends on which axioms and conventions are being used.


> Because sometimes it's better not to. Sometimes it's inconsistent with our definitions.

I'd love to see even one example of 0^0=1 being inconsistent with a definition. The closest I've ever seen is that it bothers people that for reasons of their own had their hearts set on (x,y) -> x^y having no discontinuities...


Fair question.

Perhaps it's more precise to say "Because sometimes it's better not to. Sometimes there is no canonical choice that follows from our definitions, and it doesn't help to assign an arbitrary value that doesn't help solve any related problems."

What is "x" equal to? In general, I mean, not in the context of any equation like "x+1=2". You could say "x=7 in the study of free variables over integers when no other constraints are given", and that is completely consistent with the rest of mathematics, and yet would not be particularly useful and introduces an ugly (philosophical weasel word, yes) asymmetry in the theory (I'd say it introduces a gauge invariance (https://secure.wikimedia.org/wikipedia/en/wiki/Gauge_theory), but I'm really not qualified to discuss that in a rigorous way.)

Someone like Scott Aaronson could put this claim on more solid footing, but I would state that, intuitively, "assigning a value to an indeterminate form leads to a more complex definition of a mathematical system" in some formal complexity-theory sense.


I haven't seen any reason why -e^(i pi)=1 could be considered incorrect. It's consistent with the Taylor Series expansion of e^x. It's consistent with the view of complex exponentiation as rotation. I don't know of any particular problems that arise from taking -e^(i pi)=1.

This is not the case with 0^0=1, which is inconsistent with many limits. That's why 0^0=1 is an agreed-upon convention sometimes. http://en.wikipedia.org/wiki/Exponentiation#Zero_to_the_zero... has a fairly nice summary of the issues involved in defining it.


This is not the case with 0^0=1, which is inconsistent with many limits.

So what? It's only a problem if you want the exponentiation function to be continuous, so you escape the problem by leaving it undefined. You could place similarly unfounded requirements on complex exponentiation to make it seem incorrect. For instance, real exponentiation always gives a positive value for a positive base, while complex exponentiation does not, so e^(i pi) = -1 is wrong. I agree that this is a ridiculous requirement, but leaving 0^0 undefined because the math is not as we want it to be (e.g. exponentiation is not continuous) looks just as ridiculous and silly to me.

On the other hand, putting 0^0 = 1 makes it consistent with many combinatorial formulas, and is also consistent with cardinal exponentiation, where nobody objects to 0^0 = 1 when you look at 0 as a cardinal number.
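The cardinal-exponentiation reading can be checked by brute-force counting (my own toy; `count_functions` is a made-up name): n^m is the number of functions from an m-element set into an n-element set, and there is exactly one function from the empty set to the empty set.

```python
from itertools import product

# Each function picks one of n outputs for each of m inputs.
def count_functions(m, n):
    return sum(1 for _ in product(range(n), repeat=m))

print(count_functions(3, 2))  # 2^3 = 8
print(count_functions(0, 5))  # 5^0 = 1
print(count_functions(3, 0))  # 0^3 = 0 (no outputs to pick)
print(count_functions(0, 0))  # 0^0 = 1 (exactly one function: the empty one)
```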


It has nothing to do with wanting exponentiation to be continuous. It's simply a recognition that limits of the form 0^0 are indeterminate, which means a convention that 0^0=1 is not appropriate in the context of evaluating limits. This isn't an argument that it "seems" incorrect, like your bizarre argument about real vs complex exponentiation; it's an argument that it IS incorrect in that context. If you're evaluating limits, you have to treat 0^0 as indeterminate, not as 1.

Even the original article noted that we don't choose the 0^0 convention because it's "correct", but because it's "nice" -- which is why we define it that way in the contexts where it makes sense to define it that way. If you're working with combinatorics, 0^0=1. If you're working with cardinal exponentiation, 0^0=1. If you're taking limits, or working in the hyperreals, or in certain other contexts, the convention doesn't apply. In some circumstances, 0^0 isn't even a valid statement -- like if you're working directly with the field axioms of R.

Recognize what context you're working in, and what assumptions or conventions apply in that context. That's just good mathematics.


It's simply a recognition that limits of the form 0^0 are indeterminate, which means a convention that 0^0=1 is not appropriate in the context of evaluating limits.

But the whole point of distinguishing some "forms" as "indeterminate" is to work around the discontinuity of elementary functions! My very first sentence in this thread asked what exactly an "indeterminate form" is. I'm asking this question because it is not a formal notion, and you will not find any formal definition of it. Its existence is rooted in the fact that, for no reason other than tradition (and convenience), we use special notation for some functions. Instead of +: R x R -> R, +(2, 3) = 5, we write 2 + 3 = 5. The same goes for ^: [0, \infty) x R -> R.

The only reason we have all those fancy limit-evaluating laws is that these functions are continuous most of the time. For instance, + is continuous everywhere, so lim +(a_n, b_n) = +(lim a_n, lim b_n), if both lim a_n and lim b_n make sense. Similarly, /: R x (R - {0}) -> R is also continuous everywhere, so lim /(a_n, b_n) = /(lim a_n, lim b_n), if the right-hand expression makes sense. If it does not make sense, for instance when both lim a_n and lim b_n are equal to zero, we cannot approach the problem in such a simple way.

Now, some people would call /(0, 0) an "indeterminate form", which makes no more sense to me than calling f(0, 0) an indeterminate form, where f(x, y) = log_(1/x)(y) -- while f is continuous everywhere it is defined, you cannot extend its domain to contain (0, 0) and have it stay continuous, just like you cannot with the / function.

As I have repeated many times, the whole affair exists because ^ seems more familiar than the beta function (we have special notation for it, for instance), so people want it to behave nicely -- for instance, to conform to some arbitrary limit-evaluating laws -- missing the whole underlying concept of continuity.


The whole point of distinguishing some forms as "indeterminate" is to work around the fact that you're trying to conduct operations on the real numbers that are not defined under the field axioms of the real numbers (or the axioms of the extended reals [-inf,inf]). That's where its essence is rooted; that's why this whole affair exists -- the fact that 0/0, 0^0, 0xinf, inf-inf, etc. are not well defined by our axioms.

This actually relates to all three examples I've presented where the 0^0=1 convention fails. It should be treated as an indeterminate form in limits because it's not well-defined by the axioms of the real numbers; it's also not well-defined by the axioms of the hyperreals, but division of infinitesimals is well-defined in the hyperreals, which gives us an alternate method of computing limits that avoids the "indeterminate form" entirely.

Let me reiterate: 0^0 is not defined under the field axioms of the real numbers. The choice to define it as 1 is a convention which makes certain math easier, in certain areas of mathematics. It is by no means a universal convention; it is by no means the one and only correct definition of 0^0. You continue to argue for the convention, but miss the larger point that it is a convention which is chosen for convenience, and which is not always appropriate.


The exponential function is not defined by the axioms of the real numbers, as opposed to addition and multiplication, so this point is irrelevant -- you can define it any way you want. There's no inconvenience in defining 0^0 = 1, apart from some people's misunderstanding of the concept of limits. Defining 0^0 = 1 is the universal convention -- people either do it like this or do not define 0^0 at all, which is what I'm fighting against.

The whole point of distinguishing some forms as "indeterminate" is to work around the fact that you're trying to conduct operations on the real numbers that are not defined

I am not. Are you? Let me reiterate: the whole concept of "indeterminate forms" (which, I repeat, is not formal at all) stems from misunderstanding the process of taking limits.


> "people either do it like this, or do not define 0^0 at all, which I'm fighting against."

I think it's silly to fight against it. There are circumstances in which leaving it undefined is good, and in which trying to define it as 1 would either lead to misunderstandings (in the case of beginners doing limits, a case you are too quick to dismiss) or be actually incorrect (an equivalent problem in the hyperreals could violate the transfer principle).

It's a broad convention, but it is not universal, and it shouldn't be.


> x^y should be zero if you approach (0,0) along the x=0 axis, and it should be one if you approach along the y=0 axis.

Technically, 0^y is 0 only if you approach it from the right: y>0. To the left of 0 it is indeterminate or infinite, depending on how you look at it. x^0, however, makes sense for all x and is always 1.


Yes. Trying to define f(0, 0) as 0 or 1 won't make f continuous, since approaching along the lines x = 0 and y = 0 yields different limits (which is precisely how a limit fails to exist).


>most of us don't have a good conceptual model for what exponents really do

Instead of matching math to real-world objects (1 = one banana, 2 = two bananas, 1+2 = 3 bananas, etc.) and building up to exponentiation, multiplication, etc., thereby introducing all sorts of paradoxes, group theory dodges all that and treats the whole thing as a very consistent rule-based system. Things fall into place quickly once the rules are laid out explicitly.

Consider a finite abelian group with only 3 elements a, b, c. Given a+b=c and a+c=a, what's b+b? Hmmm... okay, if a plus c is a, then c is acting like zero. So b+c must be b. Since addition is commutative (abelian group), b+a must be a+b, which you said was c. So now we know b+a=c and b+c=b, so b+b had better be a!

Students are easily convinced because you've laid out the rules very explicitly. In fact, they'll try to convince you that b plus b had better be a, because that's the only way to make the Cayley table work out! (http://en.wikipedia.org/wiki/Cayley_table)
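The forced table can be checked mechanically (my own sketch, encoding the group's addition as a dict; c acts as zero):

```python
from itertools import product

# The full Cayley table forced by a+b=c and a+c=a.
add = {
    ('a', 'a'): 'b', ('a', 'b'): 'c', ('a', 'c'): 'a',
    ('b', 'a'): 'c', ('b', 'b'): 'a', ('b', 'c'): 'b',
    ('c', 'a'): 'a', ('c', 'b'): 'b', ('c', 'c'): 'c',
}
elems = ['a', 'b', 'c']

# the given facts, and the deduced one
assert add[('a', 'b')] == 'c' and add[('a', 'c')] == 'a'
assert add[('b', 'b')] == 'a'

# commutativity and associativity hold across the whole table
assert all(add[(x, y)] == add[(y, x)] for x, y in product(elems, repeat=2))
assert all(add[(add[(x, y)], z)] == add[(x, add[(y, z)])]
           for x, y, z in product(elems, repeat=3))
print("table is consistent; b+b =", add[('b', 'b')])
```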

There are several books that argue that the teaching of Abstract Algebra must precede Calculus for this very reason. With Calculus, the mapping of math to real-world objects leads to all sorts of messy realities. With group theory, you dodge that mess by simply stating rules upfront.


Group theory is formed by abstracting out the observed properties of number systems. If you want to show that some collection is a group, you will have to do the computations to show that they follow the rules, in which case, it helps to have an understanding of the mechanics of the computation.


> it helps to have an understanding of the mechanics

My claim is the exact opposite. I claim you don't need to understand the mechanics (just blindly abide by the rules of the group, or abelian group, or finite simple group, or whatever), which is why the approach is better. If you show a monkey that red means stop and green means go, and reinforce these rules by rewarding it with a banana, eventually the monkey will stop when it sees red. Not because it understands the mechanics of traffic management; simply because it is abiding by the rules. Similarly, large portions of math can be approached either by the definitional route (i.e. rules: state axioms, then derive the propositions and theorems that logically follow) or by trying to understand the actual mechanics by mapping everything to real-world phenomena (x = distance, dx/dt = velocity, d/dt(dx/dt) = acceleration, etc.), which is problematic because the mapping breaks down due to the nature of physical reality (friction and so on).

How would one explain, say, Hilbert's 7th problem via the actual mechanics?

If a is algebraic (other than 0 or 1) and b is an irrational algebraic number, show that a^b is transcendental.

What does that even mean when you map them to the real world? Instead, the solution is to build upon theorems that logically follow from the axioms you start out with. Problem: http://en.wikipedia.org/wiki/Hilbert%27s_seventh_problem Solution: http://terrytao.wordpress.com/2011/08/21/hilberts-seventh-pr...


You are probably right that such an approach is better at teaching students to be able to crank out solutions to problems, but I think it also reinforces the attitude a lot of students have that math is just a bunch of arbitrary rules that don't mean anything, and is therefore a waste of time. For students that aren't destined to be math majors, the most important aspect of math is being able to map real-world concepts to abstract rules. Being able to mechanically manipulate those rules is far secondary, especially with easy access to computers.


You're right, apart from the fact that real exponentiation does not really belong to group theory -- or even abstract algebra, for that matter.


The notion of a^n definitely belongs to group theory, whether you call that n an exponent or an annihilator or a period is a matter of some contention. See: http://mathoverflow.net/questions/44393/notation-exponent-of... http://mathoverflow.net/questions/32116/exponent-of-a-group


Only if n is an integer. Real exponentiation is something completely different, because it involves all aspects of real numbers - addition, multiplication, order and continuity, which are all interconnected, and the language of group theory is too weak to describe it. For instance, while 2^pi makes perfect sense in the realm of real numbers, it makes none in Z_3.


Hmm.

(Warning: ASCII math is confusing and ambiguous to read. Sorry.)

Exponentiation of group "multiplication" does not immediately seem amenable to the reals, sure. But real exponentiation does form a group, as shown here:

Define x_g(r) = the function that raises a non-zero real number r to the exponent x (in the sense of some reasonable definition of exponentiation of continuous functions). Define X = the set of x_g functions corresponding to all non-zero reals x.

Define x_g y_g as composition: y_g(x_g(r)) = (r^x)^y = r^(xy). Then we have 1_g x_g = (r^1)^x = r^x = (r^x)^1 = x_g 1_g -> identity

y_g (1/y)_g = (r^y)^(1/y) = r^1 = 1_g -> inverse

(x_g y_g) z_g = ((r^x)^y)^z = (r^(xy))^z = r^((xy)z) = r^(x(yz)) = (r^x)^(yz) = x_g (y_g z_g) -> associativity

That makes a group.

Now, I explicitly left out zero: 0 as a base (0^x), and 0 as an exponent (0_g sends everything to 1, so it has no inverse). Can we fit either back in?

Not particularly cleanly, as thoroughly discussed in this thread.
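A numeric spot-check (my own sketch): composing power maps multiplies their exponents, the exponent 1 gives the identity map, and 1/x undoes x, mirroring the multiplicative group of non-zero reals.

```python
import math

# The map r -> r^x, for non-zero exponent x.
def power_map(x):
    return lambda r: r ** x

r = 2.5
x, y = 3.0, -0.4

# composition corresponds to multiplication of exponents
assert math.isclose(power_map(y)(power_map(x)(r)), r ** (x * y))
# the exponent 1 gives the identity map
assert math.isclose(power_map(1.0)(r), r)
# the exponent 1/x undoes the exponent x
assert math.isclose(power_map(1 / x)(power_map(x)(r)), r)
print("group axioms check out numerically at r =", r)
```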


The group you described does not inherit any interesting structure from exponentiation -- indeed, one can easily see that it is isomorphic to the multiplicative group of reals. You could similarly construct a group isomorphic to the additive group of reals. This is an example of the fact that real exponentiation connects different aspects of the real numbers, and that abstract-algebra language alone is not enough to express properties of the reals. You need to somehow relate the algebraic structure of the reals to a topological one, which stems from the order imposed on the reals and its continuity.


You write: "With Calculus, the mapping of math to real-world objects leads to all sorts of messy realities. With group theory, you dodge that mess by simply stating rules upfront."

(Prologue: I encountered group theory first by drawing pictures of pegboards and strings to illustrate permutations (before I knew the word!), not by reading the rules up front.)

You are describing a schism between pure/formal and applied mathematics (and between formalism and intuition, to some extent). It's completely cool for you to have an interest in pure math completely separated from applied math, real-world physics, programming, etc. It is also cool for someone to pursue applied math, physics, etc. without any pure math, but that is sad because a lot of beautiful symmetry and cross-disciplinary value would be lost. (Goodbye, encryption!)

I personally strive to connect pure and applied mathematics. After getting burned (in an emotional/psychological way) by chasing pure math study beyond my ability to intuit and apply it, I now commit myself to learning theory and application in tandem. (I'll certainly appreciate the fact that pure theorists such as your ideal have gone several steps ahead and I can study their results without trying to discover them from scratch.)

In fact, my most recent flight of fancy / big dream is to write math/CS tutorials that provide such a tight integration of theory and application, abstract and concrete, general and specific. And I want to use modern web tools (hyperlink, animation, multi-dimensional page layout) to do so.

I'd love to talk to anyone interested in working with me on that :-)


Hi, I'm really interested in developing tutorials / explanations that help merge intuition and rigor, theory and application. You can reach me at kalid.azad@gmail.com.


> Technically, 0^0 is an indeterminate form and has no specific solution.

Precisely, as this is the true mathematician's answer: "it depends where 0^0 comes from".

As an f(x,y): RxR->R function, come from the top of the R² plane and 0^0 is 0, but come from the right side and it's 1. Limits and extension by continuity give us this easily enough for fh: x->x^0 and fv: y->0^y.

Writing this I asked myself, what if we came from some funky other path, like the diagonal, or a curve?

h: R->RxR, x->(x, 0) defines "coming from the right", and foh = fh

v: R->RxR, y->(0, y) defines "coming from the top", and fov = fv

d: R->RxR, x->(x, x) defines coming along the diagonal, where things could get interesting.

s: R->RxR, t->(e^(at)sin(t), e^(at)cos(t)) defines coming along a log spiral whose tangent at t=0 is vertical, so fos looks like fun around t=0.

Now what happens if we build a path function p: RxR->RxR, (t, z)->? that endlessly approaches v when z->0? the log spiral with z=1/a as a parameter is a possible one. With such a p function, what does lim fop(x) when x->0 (which is a function of z) look like when subsequently z->0?
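The path-dependence above is easy to see numerically. A quick Python sketch (names are my own, just for illustration), sampling f(x, y) = x^y along the horizontal, vertical, and diagonal paths as t -> 0:

```python
# f(x, y) = x^y, sampled along paths approaching (0, 0).
def f(x, y):
    return x ** y

ts = [0.1, 0.01, 0.001]

horizontal = [f(t, 0.0) for t in ts]  # h(t) = (t, 0): t^0 -> 1
vertical   = [f(0.0, t) for t in ts]  # v(t) = (0, t): 0^t -> 0
diagonal   = [f(t, t)   for t in ts]  # d(t) = (t, t): t^t = e^(t ln t) -> 1

print(horizontal)  # [1.0, 1.0, 1.0]
print(vertical)    # [0.0, 0.0, 0.0]
print(diagonal)    # ~[0.794, 0.955, 0.993], tending to 1
```

So even the diagonal agrees with the horizontal limit of 1; only paths hugging the vertical axis (like the spiral for suitable parameters) pull the limit elsewhere.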

Damn. It was supposed to be a two-line comment.


> How would you explain to a 10-year old why 3^0 = 1 beyond "it's necessary to make the algebra of powers work out".

Actually, that's exactly the reason 3^0=1: it was the definition that preserved the most identities. Agreed that this explanation doesn't really help intuition.


You add a bunch of stuff and you get a total. You add nothing and 0 is the total because it's the identity and starting point for addition. You multiply a bunch of stuff and you get a product. You multiply nothing and you get 1 because that's the identity for multiplication.

(That's the same as saying "the rules of algebra work out" but there's maybe something intuitive about multiplying nothing and getting back the thing that doesn't change the result of multiplication?)
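In code terms (Python shown purely as an illustration), the empty sum and the empty product are exactly these identities:

```python
import math

# Summing nothing gives the additive identity;
# multiplying nothing gives the multiplicative identity.
print(sum([]))        # 0
print(math.prod([]))  # 1
```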


Yeah -- that may have been the original motivation, but repeating it as an "explanation" reinforces the notion that math is a bunch of rules (vs. models you can construct and manipulate in your head).

0 probably started as a placeholder symbol for "naught", i.e. nothing to write, and the first scribes were taught "Just write a circle when you have nothing to report".

But, with greater understanding of numbers 0 evolved into its own entity and we saw numbers on a "line", a powerful mental model (why not 2d numbers? N-dimensional numbers? etc.)

Re-teaching that 3^0 = 1 "because the math is convenient" doesn't help us build a mental model of what exponents could be (I know you don't agree with this, just stating it again because the lack of intuitive explanations for math is a major pet peeve of mine).


the lack of intuitive explanations for math is a major pet peeve of mine

I'm a very visual thinker, and that is one reason I enjoy the new Art of Problem Solving textbook Prealgebra by Richard Rusczyk, David Patrick, and Ravi Boppana--

https://www.artofproblemsolving.com/Store/viewitem.php?item=...

it is full of interesting visual "explanations" that substitute for proofs, in a book intended for a young audience.

That said, I finally realized that I was limiting my mathematical development by insisting that every mathematical idea must appeal to my visual intuition. Some mathematical ideas are proven even if they don't appeal to visual intuition. In the words attributed to John von Neumann, "in mathematics you don't understand things. You just get used to them."

http://en.wikiquote.org/wiki/John_von_Neumann

That point of view makes a lot of sense to many of the best mathematicians.

One more example of really interesting visual explanations of mathematical concepts is Visual Complex Analysis

http://usf.usfca.edu/vca/

by Tristan Needham. The book is delightful, and well reviewed, but it is not the sole path toward getting used to complex analysis.


Thanks for the pointer! I found Needham's book awesome, I've barely made a dent in it but love the visualizations.

I don't think visualization is the only intuitive method -- you can have a general "sense", not sure how to put it more specifically -- I have a "sense" about growth of e without a specific diagram.

Agreed that not every concept can be understood... yet. There's a quote I love to rail on, in reference to Euler's formula:

"It is absolutely paradoxical; we cannot understand it, and we don't know what it means, but we have proved it, and therefore we know it must be the truth." (Benjamin Peirce, 19th-century Mathematician)

Really? Yes, it may be baffling at first, but we can _never_ understand it? Only if that's our attitude :).


What is true in mathematics is whatever leads to no logical contradictions.


There are cases where neither a claim nor its negation leads to contradiction, but both of them being true at once obviously does.


Thus, mathematics unexpectedly turns into a Choose Your Own Adventure novel!


It's an amusing way to put it, but yes, it's true. An example of such a situation is the continuum hypothesis. Both it and its negation have been proven not to lead to contradiction, so while most mathematicians ignore it, effectively treating it as true, many set and model theorists play with things like Martin's axiom, which makes sense only if the continuum hypothesis is false.


I just think of it like

X to the 0.5 is square root, X to the 1/3 is cube root, X to the 0.25 is fourth root, etc

Therefore X^0 is what you get if you're taking the 'infiniteth' root

= 1
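A quick numerical check (Python, just as a sketch) shows the nth root of a positive number heading toward 1 as n grows, which is the "infiniteth root" intuition:

```python
# As n grows, x ** (1 / n) -- the nth root -- approaches 1 for any x > 0.
x = 7.0
for n in (2, 3, 4, 100, 1000):
    print(n, x ** (1 / n))
# the printed values shrink toward 1 as n increases
```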


I've always thought the best explanation was

7^2 = 7 * 7 = 49

7^2/7^2 = (7 * 7)/(7 * 7) = 49/49 = 7^(2-2) = 7^0 = 1

But 0^0 was never intuitive to me.
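For what it's worth, many programming languages simply pick a value; Python, for one, defines 0**0 as 1 (the convention that keeps power series and combinatorics tidy). A quick check of both that and the quotient identity above:

```python
import math

# The quotient identity 7^2 / 7^2 = 7^(2-2), and Python's 0**0 convention.
assert (7 ** 2) / (7 ** 2) == 7 ** (2 - 2)
print(0 ** 0)          # 1
print(math.pow(0, 0))  # 1.0
```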



