I'll never forget when I was interviewing a candidate for a junior C# programming position and the conversation went as follows:
me: Can you tell me what a delegate is?
candidate: <textbook-perfect definition, absolutely flawless>
me: Great! Can you tell me when you would ever want to use one?
candidate: <silence for about 15 seconds> I don't know.
It was so bizarre, and I still don't fully understand how you can memorize the absolutely perfect theoretical definition while having no idea, literally not one, of what you're actually learning.
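For the record, the answer I was fishing for is something like "callbacks and event handlers": a delegate is basically a type-safe function pointer. Here's a minimal sketch of the idea, using C++'s std::function as a stand-in for a C# delegate (my illustration, not the candidate's answer):

#include <cstdio>
#include <functional>
#include <vector>

// The classic use for a delegate: letting a caller plug behavior into code
// that doesn't know (or care) what that behavior is. std::function stands in
// for a C# delegate type here.
void forEachEven(const std::vector<int>& xs,
                 const std::function<void(int)>& callback) {
    for (int x : xs)
        if (x % 2 == 0)
            callback(x); // invoke whatever the caller handed us
}

int main() {
    std::vector<int> xs = {1, 2, 3, 4, 5, 6};
    // The "delegate instance" is any callable with the right signature.
    forEachEven(xs, [](int x) { std::printf("even: %d\n", x); });
    return 0;
}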
Personally, I simply cannot _recall_ anything unless I understand it at least superficially. I may be able to _remember_ if prompted, but my memory is simply organized around my own definitions and understanding.
Maybe we engineering types are hardwired this way. I suspect, however, that non-engineering types end up wired differently thanks to poor parenting, weak schools, and fundamentally broken reward-and-punishment systems.
I simply cannot _recall_ anything unless I understand it at least superficially
Me too. That's why the wheels came off my math skills when I got to differential equations. Everything up to that point I'd been able to form a picture of in my head. For example, calculus is just Newtonian mechanics, so that's got a clear analog in the real world.
But Diff Eq was just too abstract. I was eventually able to pass it with the help of a couple of friends pumping me full of formulas and procedures, but I never really understood it, and so I never could really apply it on my own.
I took a linear algebra class and the instructors explained nothing. Sure, they'll prepare you for their tests, but otherwise the material was way too abstract. Not until I watched a lecture by Gilbert Strang on iTunes U did I finally understand what an eigenvalue/eigenvector is and what the hell the matrices represent.
So mcherm, I'm pretty sure you don't need any enlightenment on this, but I'll try to peer-teach it for whoever it may help.
So let's say we have a square matrix A (each column of which is a vector). If we multiply this matrix A by an arbitrary unit vector x (considering just a direction in the given dimensions), we get some resulting vector Ax. The eigenvectors are the special choices of x where Ax lines up in direction with x itself, and the eigenvalues are the corresponding scalars you must multiply the eigenvector by to match the resulting Ax. An eigenvector does not have to be a unit vector; since any multiple of an eigenvector is still a valid eigenvector, you probably just want the unit vector, or at least the lowest whole-number reduction. Rescaling an eigenvector does not change its eigenvalue, though.
In equation form: Ax = λx
where A is the matrix, x is a possible eigenvector, and λ (lambda) is the corresponding eigenvalue.
And a matrix is simply a different form of a set of equations, one that preserves their relationships while letting you manipulate the original data easily with matrix operations. A matrix can also just represent a set of column vectors (each of which is a direction in space with a magnitude).
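If a worked number helps: here's a tiny check of Ax = λx on a 2x2 matrix (my own toy example; note this only makes sense for square matrices):

#include <cmath>
#include <cstdio>

// Eigenvalues of a 2x2 matrix [[a, b], [c, d]] via the characteristic
// polynomial: lambda^2 - (a + d)*lambda + (a*d - b*c) = 0.
int main() {
    double a = 2, b = 1, c = 1, d = 2;          // A = [[2, 1], [1, 2]]
    double tr = a + d, det = a * d - b * c;
    double disc = std::sqrt(tr * tr - 4 * det); // real for symmetric A
    double l1 = (tr + disc) / 2;                // 3
    double l2 = (tr - disc) / 2;                // 1
    std::printf("eigenvalues: %g and %g\n", l1, l2);
    // Check Ax = lambda*x for x = (1, 1): A*(1,1) = (3, 3) = 3 * (1, 1).
    std::printf("A*(1,1) = (%g, %g)\n", a + b, c + d);
    return 0;
}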
I was in that boat for a while, being both of a similar mind and in a similar situation. I got over it by the simple expedient of being absolutely pulled through the wringer by a diabolical prof on Abstract Vector Spaces.
(btw: I love that professor today. He was just underhanded enough to do what we all needed.)
I'm utterly convinced that pure math needs to be taught at a high level to anyone in advanced science/engineering. It hurts for a while, and then you look back and everything is clear, simple, and shining bright.
I've also heard this just keeps happening if you keep on with math.
I have had the same experience in a lot (all?) of my math classes. I never really understood integrals and differentiation until I saw them come up in a physics class. Once I saw the practical application of those things, I didn't have much of an issue in actually solving them. That's the primary reason why I hate math courses. I know there are practical applications to everything I'm learning, but I just can't see them.
When using math for real-world applications, you essentially have three steps: first translate the problem into a math problem, then manipulate the math problem as needed, then finally convert the result back into real-world terms. But in most classes, the middle step is the only part that is covered, and that abstract manipulation is exactly where most people have problems, since people are wired to deal with the world around them, not some abstract math world. Grounding the math concepts in real-world situations is difficult and complicated, which is probably why the translation steps are skipped.
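A toy example of all three steps, in rough C++ (my sketch, with made-up numbers):

#include <cstdio>

// Step 1 (translate): "how fast is a dropped ball falling after 2 seconds?"
// becomes the differential equation dv/dt = g with v(0) = 0.
// Step 2 (manipulate): integrate it, here numerically with Euler steps.
// Step 3 (translate back): v is the ball's speed in m/s at t = 2 s.
int main() {
    const double g = 9.81, dt = 0.001;
    double v = 0.0;
    for (double t = 0.0; t < 2.0; t += dt)
        v += g * dt; // one Euler step of dv/dt = g
    std::printf("speed after 2 s: ~%.2f m/s (exact: %.2f)\n", v, g * 2.0);
    return 0;
}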
Very true, they are incredibly useful. But as taught in most classrooms the application is far removed from the instruction. A differential equations class is often not much more than showing some techniques and then assigning numerous quantitative problems that use those techniques.
So taking a differential equations class by itself can make it seem abstract, with limited connection to the physical world.
I simply cannot _recall_ anything unless I understand it
I'm chiming in as well because that is exactly how my mind works. I can remember very little trivia or material I am fleetingly familiar with. However, I do extremely well when applying concepts, and later I can remember intricacies extremely well.
I also have great difficulty presenting material that I don't fully understand, yet when I understand it I am a great public speaker.
I have found that preparing to explain something makes me think about it and research it more, and understand it better. Some things that I sorta-knew how to work, and where they come from, became my own intellectual possessions and instruments, and later part of the toolbox, once I got to know them well enough to pass the workings on to someone else.
That third party needn't even exist, once you get the hang of learning new and lesser-understood stuff that way.
My learning style is exactly the same: I can barely memorize four-character strings unless I can find some sort of mnemonic pattern, but abstract concepts are quick and easy. In a surprise twist of fate, that's why engineering was utterly impenetrable to me in school, while mathematics (my major) was a breeze. I think it has to do with the method of teaching: my required intro engineering courses were rather practical, so I couldn't find enough theory to wrap my head around.
There's evidence that you're right about the correlation with upbringing. Someone did a study on elementary school children: they taught them basic one-digit multiplication and the concept of place value, then had them invent their own algorithms for long multiplication rather than rote-memorize the standard one. The methods they invented were invariably less compact and required more work than the standard method, but the students also performed well above average on standardized tests for multiplication.
I suspect that the high-performing individuals in the "softer" disciplines like literary analysis also function in the same way, just on fluffier concepts and less rigorous abstractions.
I remember the day when I figured out that multiplication is iterated addition. No one had ever bothered to explain that. We were just told to memorize tables. I was at the bus stop. "Three times three. OH! Add three together three times! Don't you get it? That's why it's called TIMES!" My fellow grade schoolers just looked at me. (I grew up in a truly podunk town!)
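In code, that bus-stop epiphany is literally just a loop (a trivial sketch):

#include <cstdio>

// Multiplication as iterated addition: a times b means "add a, b times".
int times(int a, int b) {
    int total = 0;
    for (int i = 0; i < b; ++i)
        total += a;
    return total;
}

int main() {
    std::printf("3 times 3 = %d\n", times(3, 3)); // 9
    return 0;
}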
What? Anyone who understands the evidence for evolution should accept the theory, because it's just that compelling. If there is another explanation for all of the data and experimental confirmations, then let us know; I'm sure you've got a paper in Nature awaiting you.
Some people who are otherwise reasonably bright have real trouble understanding systems. Interestingly they tend to accept claims about systems that they don't really understand that fit with their social/political associates. For example, liberals tend to accept evolution but question free-market economics because that is what their associates do, and vice versa for conservatives. Those that can understand both tend to become fairly libertarian, though remaining liberal or conservative on many specific issues.
I'm the same way. As a student, a trick I used was to explain the material to an imaginary person in plain English(+). The process of formulating the explanation in your head does a lot to cement your understanding of a subject.
(+) In my head, not, say, out loud while waiting for the bus.
For me it's the exact opposite. You might ask me what a delegate is and I might say, "Uh... I don't remember." Yet it is something that I use every day in my programming; I just don't usually think of it as being a delegate.
I was fascinated when I took my first C++ class and learned that using a pointer was called referencing and dereferencing. I had been using C++ for years on my own and had always thought of it as:
x = &y; // x equals the address of y.
x = *y; // x equals the value at the address y.
To me that made more sense than calling it "referencing" and "dereferencing".
Which seemed much more logical, but led to trouble when the internet came along and I started having thoroughly confusing conversations with other people.
Exactly the same. I also never really thought that pointers were difficult yet it seems to be a major stumbling block for many beginning C programmers. Perhaps having already done assembly language programming before I learned C was the reason.
Right, in that same class most of the rest of the students simply could not seem to understand pointers. They asked me, a fellow student, questions like "Why would you ever use pointers?" The thing was, I think my lack of "jargon" made me explain the answer better than the professor did. Hence the other students asked me, not the professor.
It seems to me that fancy terminology such as "referencing" and "dereferencing" is not only not common sense, but it also makes it harder to understand things when you are starting out. Perhaps as professional programmers it makes it easier to talk about algorithms between each other, but jargon is generally for the sake of exclusivity, not for the sake of convenience. I would even go so far as to suggest that anything that can be said with jargon can probably be said better with simple language.
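To make that concrete, here's roughly the kind of plain-language answer I mean, as code (a minimal sketch, not the original classroom explanation):

#include <cstdio>

// One plain-language answer to "why would you ever use pointers?": so a
// function can change the caller's variables instead of private copies.
void swapInts(int* a, int* b) {
    int tmp = *a; // read the value at address a
    *a = *b;      // store the value at address b into address a
    *b = tmp;
}

int main() {
    int x = 1, y = 2;
    swapInts(&x, &y); // pass the addresses, not the values
    std::printf("x=%d y=%d\n", x, y); // x=2 y=1
    return 0;
}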
What beginning programmers are learning these days is jargon, and the definition of jargon terms, but they aren't learning what the meaning of those terms is.
There is a big difference between learning the definition of something and learning its meaning.
As you say, using it early in an intro programming class may be a mistake, but it's soon useful in that you can say 'don't dereference b, it might be null' instead of something like 'don't attempt to read the data located at address b, b might be a null pointer' (see the sketch below).
Are you sure about 'referencing' though? I've never heard it in this context. It could be confusing given how C++ has references as well.
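To illustrate the compactness the jargon buys you, here's 'don't dereference b, it might be null' as code (my sketch):

#include <cstdio>

// "Don't dereference b, it might be null," as a guard clause.
void printIfSet(const int* b) {
    if (b != nullptr)            // check before dereferencing
        std::printf("%d\n", *b); // *b reads the value b points at
    else
        std::printf("(null)\n");
}

int main() {
    int v = 7;
    printIfSet(&v);      // prints 7
    printIfSet(nullptr); // prints (null)
    return 0;
}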
Sure, that's like people being able to talk long before they learn what nouns and verbs are. It's a matter of learning to use something versus learning and remembering the terminology that describes it.
Yeah, I'm not a big fan of the term dereferencing. There ought to be a better word for following a reference.
(This question is of practical interest to me -- I'm currently designing a language that has a dereferencing operator deref() and am wondering what better name to give it.)