This is akin to how a society comes to understand something and why genius ideas sometimes take so long to become accepted. I believe "context" is the underlying principle here.
Arthur Schopenhauer said, "All truth passes through three stages: First, it is ridiculed; Second, it is violently opposed; Third, it is accepted as self-evident."
If you present a truth to someone who doesn't have sufficient context for what you are saying, it may seem outrageous and ridiculous to them, because the gap between their understanding and the insight you are presenting is too great.
They would have to build up their understanding of the context around it until it expands to a point where they find a connection to what they already know. Then they can start to relate to it, and eventually they may see it as self-evident.
Jeff Jonas has a great metaphor for explaining context in terms of puzzle pieces and how it relates to big data (see this short TechCrunchTV segment - http://www.techcrunch.tv/watch/s4ZnZyMTrtWTaKSxWF2WEPPXkBtMj...).
This is how Richard Feynman approached problem solving -- he wanted to connect new ideas to what he already understood and understand the context of everything around them:
"It's not quite true that Feynman could not accept an idea until he had torn it apart. Rather, the idea could not yet be part of his way of thinking and looking at the world. Before an idea could contribute to that worldview, Feynman wanted to turn over the idea, to see why it was true, from any angle that he could find...In other words, he wanted to connect a new idea to what he already understood and thereby extend his understanding" (http://www.freakonomics.com/2011/04/08/how-richard-feynman-t...).
Once you surround a new concept with enough puzzle pieces, it attaches to what you already know and then eventually it becomes obvious.
> I remember that in 7th grade when I tried to teach myself programming for the first time, I didn’t realize that you were supposed to reuse variables. I would just create a new variable for every single value I needed to store. Boy does that seem stupid looking back.
What? Making new variables is better. The compiler will generally figure out the scope of each variable and reuse the space it takes.
This tends to suggest that at least for him, the functional/immutable meaning of variables was more natural.
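Roughly the contrast being described, as a quick Python sketch (the function and field names are made up for illustration): the first version reuses one variable for several unrelated values, while the second gives each value its own name -- closer to that single-assignment, functional style -- and the language implementation manages the storage either way.

    # Reusing one variable for several unrelated values
    def describe_order_reused(items, tax_rate):
        x = sum(item["price"] for item in items)   # x is the subtotal
        x = x * (1 + tax_rate)                      # now x is the total
        x = "Total due: $" + format(x, ".2f")       # now x is a message
        return x

    # A fresh name for each value, in the single-assignment style
    def describe_order_fresh(items, tax_rate):
        subtotal = sum(item["price"] for item in items)
        total = subtotal * (1 + tax_rate)
        message = "Total due: $" + format(total, ".2f")
        return message

    print(describe_order_fresh([{"price": 10.0}, {"price": 5.0}], 0.08))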
Wow, that is indeed impressive. I admire the determination it must have taken to get to 38000 lines of code for a three-letter game. Was this text-based or graphical? (I can't remember -- did GW-BASIC even support anything beyond basic CGA video modes?)
Edit: Wait, was that BASIC line numbers reaching to 38000, or 38000 actual individual lines?
Erm sorry, it was a 38k source file of text. I was actually hard-coding everything because I didn't understand it. The whole screen would get printed out (without any sort of looping) each time you guessed a letter.
GW-BASIC did let you do basic graphics; I eventually got around to emulating a side scroller at one point. Obviously it was just the foreground, but it was quite a lot of fun.
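For what it's worth, here is roughly what the looped screen redraw would have looked like -- a sketch in Python rather than GW-BASIC, with a made-up word, just to show what replaces the hard-coded wall of PRINT statements:

    SECRET = "cat"   # made-up three-letter word for the example

    def draw_screen(secret, guessed):
        # Redraw the whole display from the current state with a loop,
        # instead of hard-coding a separate screenful per guess state.
        revealed = " ".join(c if c in guessed else "_" for c in secret)
        print("Word:   ", revealed)
        print("Guessed:", " ".join(sorted(guessed)) or "(none)")

    guessed = set()
    while not all(c in guessed for c in SECRET):
        draw_screen(SECRET, guessed)
        letter = input("Guess a letter: ").strip().lower()
        if letter:
            guessed.add(letter[0])
    draw_screen(SECRET, guessed)
    print("You got it!")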
Ha! Something being obvious to you does not mean you truly understand it.
At the start of my PhD my argument was obvious to me, but it's taken many years of hard work to turn that implicit understanding into an explicit, proper understanding (and I'm not fully there yet).
The more I've learned and studied various areas, the more I've come to realize that true mastery is to really, really understand the 'basics' of a given field. If you can really understand those foundational aspects of a field, all else can be derived almost through intuition. Unfortunately it takes a few years to realize that you don't in fact know the basics, and many more years to really 'get' them ;)
I think that hints at what I would consider a more useful criterion for true understanding: being able to apply something in new ways. Any programmer can learn to implement a linked list by being beaten over the head with it in a data structures class until it seems obvious. Someone who truly understands it will be able to pull out concepts like pointers from it and reuse them when confronted with another problem, like making a hash table that can handle collisions.
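As a rough sketch of that kind of transfer (Python, with names of my own choosing): the node-plus-next-pointer idea from a linked list is exactly what lets a hash table chain colliding keys inside a bucket.

    class Node:
        """A linked-list node: a key/value pair plus a pointer to the next node."""
        def __init__(self, key, value, next_node=None):
            self.key = key
            self.value = value
            self.next = next_node

    class ChainedHashTable:
        """Hash table that resolves collisions by chaining nodes in each bucket."""
        def __init__(self, num_buckets=8):
            self.buckets = [None] * num_buckets

        def put(self, key, value):
            index = hash(key) % len(self.buckets)
            node = self.buckets[index]
            while node is not None:        # walk the chain, as in any list traversal
                if node.key == key:
                    node.value = value     # key already present: overwrite
                    return
                node = node.next
            # Not found: prepend a new node to this bucket's chain
            self.buckets[index] = Node(key, value, self.buckets[index])

        def get(self, key):
            node = self.buckets[hash(key) % len(self.buckets)]
            while node is not None:
                if node.key == key:
                    return node.value
                node = node.next
            raise KeyError(key)

    table = ChainedHashTable()
    table.put("apple", 1)
    table.put("orange", 2)
    print(table.get("orange"))   # -> 2

The put and get loops are the same walk-the-chain traversal you would write for a plain linked list, just reused in a new setting.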
If there really were no magic, we'd have real AI by now.
That's not at all clear. It could well turn out that more processing power than is easily available is necessary to run the experiments that would give us the answers to how intelligence works.
By virtue of P probably not equaling NP, I think the "more processing power" argument is ridiculous. To think up a new math theorem using exhaustive search of the space is simply unthinkable, no matter how much computing power you have. Some currently unknown form of creativity must be applied to reduce the search space considerably.
I think it possible that we will not understand intelligence until we can run and tweak intelligent software. Fortunately, we don't have to do anything like an exhaustive search of the space because we have a working example of intelligence that we can copy without having to understand how it works. Doing this will require very large amounts of computing power, but not computronium.
> Fortunately, we don't have to do anything like an exhaustive search of the space because we have a working example of intelligence that we can copy
My point was just that anything besides an exhaustive search of the space is applying some sort of "creativity magic."
Also FWIW, I believe that in order to simulate a brain on a computer we will essentially need to know how it works, which we are nowhere near. That is my opinion as an AI researcher, but there are certainly others who know more than me and disagree.
Good article. I've had very similar experiences looking over my old code from several months ago. Even basic concepts like procedural abstraction are things I've only learned about and put into practice recently; in the past I may have used them, but I wasn't consciously aware of it and didn't do it consistently. And yet I was able to write a lot of awesome working software (that was hard to read and maintain).
Sometimes I think I would have been better off with a little theory at first, but then I wonder if I would have even developed the passion without the immediate joy of creation.
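For anyone unfamiliar with the term, by procedural abstraction I just mean pulling a repeated step out into a named procedure. A minimal, made-up Python sketch of the before and after:

    # Before: the same dollar-formatting logic repeated inline
    def print_receipt_before(prices):
        subtotal = sum(prices)
        print("Subtotal: $" + format(subtotal, ".2f"))
        print("Total:    $" + format(subtotal * 1.08, ".2f"))

    # After: the repeated step becomes one named procedure
    def dollars(amount):
        return "$" + format(amount, ".2f")

    def print_receipt_after(prices):
        subtotal = sum(prices)
        print("Subtotal:", dollars(subtotal))
        print("Total:   ", dollars(subtotal * 1.08))

    print_receipt_after([1.50, 2.25, 3.00])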
Interesting, although there's plenty that seems obvious yet, upon deeper inspection, you realize you don't understand it at all. Maybe that's true understanding?
Quick example: you hold a ball and let it go. "Obviously" it drops, but think about it -- why did it drop? I'm pretty sure even the most advanced string theorists couldn't definitively explain why it went towards the ground.
"true knowledge exists in knowing that you know nothing" - socrates
Here's one that relates to your Socrates reference -- "true knowledge exists in knowing that you know nothing".
It's a Charlie Rose segment where Jim Collins is discussing his book "How the Mighty Fall" (http://www.charlierose.com/view/interview/10565). He talks about the five stages of decline in any great enterprise. Stage 1 is hubris -- thinking you know it all.
I don't think you can get more obvious in this day and age than adding "with computers" or "on the internet" to mundane, centuries-old ideas like "showing people things similar to what they are shopping for".
To be honest, I think your argument supports the abolition of patents more than you think it does. EVERY idea was built on the back of previous ideas. "Novelty" is only a matter of degree, and subjective degree at that.
Sigh... It seems the author doesn't yet have the breadth of experience to know there are many different levels of understanding appropriate to many different fields, tasks, and kinds of mastery.
Agreed. I may never "truly understand" how a programming language works, but there is a moment of clarity when you can say you really do understand why they did it that way. There are, of course, multiple levels to that: you may not know "how" they did it, but you understand the "why" for yourself, beyond just repeating what others have said about why they chose that approach.