> So much of what we call creativity and intelligence is just memory.
This matches my experience very closely. In fact, I find that programmers (and STEM people in general) tend to look down on remembering things since "you can just Google it".
However, after working with some very smart people I have a newfound respect for memory. The problem with "I'll just Google it when I need it" is that often great ideas and solutions are based on information that you remember, but you won't get any of those ideas or come up with those solutions if the building blocks aren't there. It's like trying to build a house with half the foundation missing. Remembering things allows you to build new theories and solutions to problems that you could never come up with if you didn't remember those things.
In the book Black Hole Blues by Janna Levin [0], she mentions that there was one member of the LIGO project who became the "go to" person for any technical discussions because they had memorized large parts of the infrastructure. Having memorized it, they could quickly "debug" issues and evaluate proposed changes. The implication was that asking the individual was faster than having to get together multiple people and then poll them for what they thought the answer might be.
In my own career, I've spent the last 15 years working in finance technology. Many of the systems involved in this industry are legacy (with lots of patches), complex, and distributed, with all of the attached problems that come with that. Multiple times over that period, I've seen people with excellent memories and understanding of the system pinpoint problems just by taking a couple of symptoms and deducing the root cause in a few minutes. It really can be a superpower.
The one downside of course is that you build this great mental model for a system that only ever exists at one firm. This can lead to a situation where you become more valuable to the firm over time as fewer people remain who understand the system. In turn, your opportunity cost of leaving goes up because you would have to start from zero at your next job.
I don't think that's a downside. If you have that superpower, you can build it at a different place.
The downside I find is that some people who can do this assume everyone can do this. If complexity is not difficult for you then simplifying (code, architecture, etc.) is not a priority and it becomes harder for other people to understand and maintain.
I'm the go-to guy at a small government agency for the overall network/system architecture. There are some technical & memory-related prereqs (i.e. I have a CS degree, and I'm not a dummy), but mainly I know our systems so well because I've worked actively on them for more than a decade.
It would probably take me a decade to build such a detailed mental model at another company or agency. And that's even assuming I could get hired. One consequence of working on an idiosyncratic, one-off architecture is that my mainstream tech skills have atrophied significantly. I definitely could not pass a leet-code interview at a FAANG anymore.
Don’t write your future self off so easily. Now you know first hand how deeply and broadly you can learn a system.
If life put you into a completely different system, you could build this familiarity and mental map faster. I bet you could—with careful thought—learn in 3-4 years what you learned in 10, and in your first year what you learned in your first 5.
A lot of US-based STEMmers (at least of my era, so late Gen X/early Millennial) had the experience that the rote learning presented through the public education system was tiresome and pointless, and concluded—with some justification—that _all_ rote learning is tiresome and pointless.
This is a really unfortunate outcome, as getting base facts and figures into your memory is actually hugely beneficial. There’s no other way to rapidly pick up a new spoken language, for example.
This is another good point; some things you can't get around memorizing. I also grew up in an environment where I learned to loathe rote memorization. But some things are impossible without a bunch of memorization work. I tried DJing for a bit, and it's just a fact that the better you have songs memorized, and the more of them, the better you'll be at it. The best DJs I've met have memorized at least the structure of thousands of tracks.
I find repeated use and connection to other (perhaps familiar) concepts far more valuable for memory than rote memorization. Flash cards would help me pass a test in school, but I wasn’t going to remember a year later when I needed that prerequisite knowledge for the next class.
I believe the more accurate thing to say is that many of us got so fed up with rote memorization that we devalued all types of memorization. There are good and bad methods of memorization, experiential memorization is definitely better than rote if you can create the circumstances for it. But some of us just wrote off memorization being valuable period.
Oh how I wish we could just use years like '84-'93 in lieu of obscure generational tags. I agree with your general idea: a lot of people were burnt out on rote memorization.
It's okay to memorize things - but I think if you teach the 'why' of the equations instead of just memorization of them, the memory locks in better than with 'forced memorization'.
> but I think if you teach the 'why' of the equations instead of just memorization of them, the memory locks in better than with 'forced memorization'
I have a simple but great example of that: A decade of schooling and I could never remember the formula for volume of a sphere. And then, one day when I was bored at my part-time job, I pulled out a piece of paper and used what I'd just learned in calculus to derive it. It's stuck with me since, now that all the parts of the equation have meaning.
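For anyone curious, a minimal sketch of that derivation (slicing the sphere into disks of radius \sqrt{r^2 - x^2} and integrating their areas):

    V = \int_{-r}^{r} \pi (r^2 - x^2)\, dx = \pi \left[ r^2 x - \frac{x^3}{3} \right]_{-r}^{r} = \frac{4}{3} \pi r^3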
This is the reason for taxonomies like Bloom's Taxonomy [0] and the DIKW Pyramid. [1] Deep understanding, and the ability to apply a concept, and the ability to draw deep comparisons between concepts, are not the equivalent of having instant recall of many facts.
Early in the learning process you'll have to be able to remember the basic facts of a topic, as this is necessary to develop deep insight, but once learnt, deep insight doesn't depend on perfect recall of the particulars.
It's reasonable for an exam on language theory to expect a student to know that deterministic finite automata are represented as 5-tuples, but students learn about DFAs in order to develop understanding; little is lost if they later forget that particular detail. The ability to apply the concept is what's really valuable.
It seems like the education system is not set up to test deep understanding and rather exclusively focuses on instant recall, though.
At university a lot of my friends would be able to regurgitate answers to questions, but if I tried to talk to them about the "why" behind a methodology they would basically respond with "uhh, I don't know I just do it like this".
I work in curriculum, and things like Bloom's Taxonomy are the bread and butter of how curriculum authors structure content. IMO, the problem isn't that teachers and authors aren't thinking from these angles (they have for decades) it's that in practice, it's really hard to teach and assess skills without ending up testing recall. The solutions that work well (lots of individual attention from human experts) don't scale and the things that scale (multiple choice tests) don't work well.
> solutions that work well (lots of individual attention from human experts) don't scale
I'm really interested in this. Could you please point me towards any reading about the efficacy of such individual attention and/or scaling it with tech?
There is no way to scale this with tech, that's why it doesn't scale :P
The core problem with multiple choice and other trivial tests is that a person understanding the concept will find the correct answer, but people not understanding the concept (and just memorizing, guessing, cheating, ...) will also be able to find the correct answer. You have many false positives (and depending on test quality also false negatives).
The only real solution is a combination of complex problem solving settings and individual evaluation by a teacher or similar role. You have to find out what is going on inside the person's brain, why they chose this or that answer - essentially by talking to the person.
You'd need AGI to do that with tech.
A step in the right direction would be to abandon single-number scores and introduce more differentiated numbers for different properties of learning, knowledge and so on.
Even better would be assessment of individual learning progress instead of objective results, though this is a hard sell as long as there is a labor market. Employers will want an easy metric for comparing different candidates. Objective scores produce perverse incentives: say I'm bad at math (school grade E level), but I am motivated to improve. I work my ass off trying to learn after school etc., and after 3 months I write another test. Now I get a D. Objectively still a bad result, almost guaranteed to demotivate me after months of hard work, despite the relative progress being substantial. A grade reflecting the relative progress would reward effort, which is usually what educational settings want to foster.
I appreciate your sentiment and agree that formative assessment can be a lot better. But I would argue that there must be something in between the current situation and AGI.
I've been trying to find literature about this, but am not sure where to look.
I disagree with this. You can just google syntax so I don’t bother memorizing syntax. Specific syntax is never going to be the thing I remember that puts all the pieces together. Remembering how things are done and having a good conceptual memory on the other hand is extremely valuable. You can’t google the long story your colleague told you about the incredibly hard to identify bug (well maybe if they have a blog, but you see what I’m getting at?). That stuff is worth remembering, the stuff you can’t just google. Or the vague notion of what to google to solve the problem. You need both good memory and a good sense of what’s worth remembering and what you can just look up.
+1 to your points. I think the thing to remember is the "patterns" that are novel. Remembering approaches to solving problems, ways of thinking, are the things that help with creativity. Remembering specific solutions to problems isn't as useful, unless they're things you do regularly.
I think the trick is to take a problem and "see" the higher-level patterns in the solution rather than the specifics.
But just spending an insignificant amount of time learning to remember the syntax that you use constantly is even more efficient: why waste time looking up what you already know? Spend that time on the real thing you want to do, instead.
Not remembering what you can look up is the beginner's shortcut; not looking up what you can just remember is the advanced tactic, and is what gets you to fluency in any subject.
You're assuming people here seriously resist learning something they use all the time just because they can google it, which I have a hard time believing, yet you assert it anyway.
In reality, the things you tend to google every time you need them are precisely the things you don't actually use enough to memorize. Even when you do memorize them, enough time elapses between uses that you forget.
Exactly this. Googling to me is long term memory - I can access it, but it takes a tiny bit longer to retrieve. Stuff I use more frequently I do memorize.
As with many other things, the extremes might be clearer, but the interesting parts lie in the huge grey area in between. Sure, we might not need to deliberately memorize syntax, but what about memorizing the different ways of writing loops (for lack of a better example atm)?
My take is that there is value in putting in deliberate effort to remember the key takeaways from our readings or talks we've listened to (among other things), as the original comment pointed out. The blog post is then on how to go about doing it.
Don't ask me, ask the person I responded to. It sure sounded like that was their modus operandi and their counter-argument to the article's "remember the things you are enthusiastic about learning".
It's interesting that whenever memorization is mentioned, there will always be a response like "I don't need to memorize X", when the person never said you should. Your parent did not recommend memorizing syntax.
> You can just google syntax so I don’t bother memorizing syntax. Specific syntax is never going to be the thing I remember that puts all the pieces together.
As you continue programming in a given language, you automatically memorize low-level things like syntax and idioms. That's not something that everyone needs or wants to practice explicitly with flashcards or whatever (that's not how I learn, anyway). But memorization of syntax is absolutely critical to achieve "fluency" in the programming language, just as in human language.
Imagine a monolingual English speaker learning German and scoffing at their classmates' attempts to internalize the grammar rules. You can Google or look all that stuff up in a few seconds! But having to pause and look up every conjugation or word ordering is exactly what prevents the learner from actually communicating clear ideas at a useful speed in their target language.
My biggest problem is remembering the things that are done similarly everywhere: is it .length, .len, .size or .count? Or are they methods? Or globals? Why do these things have to differ even between SQL dialects?
This is why I'm such a big fan of good code completion. I don't need it that much in my main language, but when I use other languages every once in a while it's much much quicker than googling these things.
Yes to that! Last semester I had to program in Java, and while I had very little specific knowledge of the language, the IDE (and some basic OOP understanding) basically enabled me to solve the tasks without any trouble.
Java is especially good for that, though. You just type a dot after a variable and choose what you want (out of 1000 methods).
> As you continue programming in a given language, you automatically memorize low-level things like syntax and idioms.
I thought so too. Then I ended up having to write a lot of tSQL. And even though I used them quite regularly, I always had to look up the syntax for things like MERGE or PIVOT. So I made flashcards, and it has increased my productivity enormously. Not so much because looking up the syntax took all that much time, but it always interrupted my train of thought. And before I knew the syntax by heart, I always felt a slight resistance to using these constructs because I knew I had to look them up and would probably make some small mistake somewhere, etc.
Mathematics is quite similar as well. I hated memorising formulae and theorems, as I always thought: if I can derive them, why bother? But the cognitive load of working on more complex problems while also having to think about the tools used to solve them just doesn't scale well, and it limited me until I accepted that a large part of mathematical intuition involves having these as part of my memory.
Of course, this doesn't need to (and shouldn't) happen through rote learning, but rather through working a large number of problems, so that it happens naturally.
I think most of us would agree that you don't have to remember trivia like how bubblesort is implemented, or which timezone Papua New Guinea is in, or some specific syntax in a language that you use thrice a year; but very few of us would fit your strawman of literally not having to know anything because you can just google it. The very nature of this sort of trivia, like which element is #81 on the periodic table, is that it's not really a foundation for deeper general knowledge.
I'd be surprised if I met someone who saw themself in your comment.
You make it sound like bubble sort is trivia when in reality, it's quite useful in specific real world situations. If I need the top 3 elements of a list, bubble or insertion sort are the most efficient ways to do that.
But that's something you wouldn't think about if you didn't know how bubble sort was implemented. Do you see how memorizing this can lead to creativity?
If you need the top 3 elements then neither of those are efficient. Technically insertion sort is more efficient if you stop early, but the straightforward brute force algorithm will be faster by a factor of 3.
They are probably thinking of simply going through the list and tracking the top 3. This gives you linear time but as you noted you can go faster with quickselect (edit: I originally mistakenly wrote selection sort despite thinking of quickselect), giving you linear time (edit: I originally said logarithmic time, which is obviously wrong).
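For concreteness, here's a minimal sketch of that linear-time "track the top 3" scan (plain Python; the function name and sample data are just illustrative, and heapq.nlargest is the standard-library shortcut for the same thing):

    import heapq

    def top3_scan(xs):
        # Single pass, keeping the three largest values seen so far.
        top = []  # at most 3 elements, kept sorted ascending
        for x in xs:
            if len(top) < 3:
                top.append(x)
                top.sort()
            elif x > top[0]:
                top[0] = x
                top.sort()  # re-sorting 3 elements is constant work
        return sorted(top, reverse=True)

    data = [7, 2, 9, 4, 9, 1, 5]
    assert top3_scan(data) == heapq.nlargest(3, data)  # both give [9, 9, 7]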
I suppose it is beside the point, but you shouldn't be reaching for a specific algorithm but rather using your language's library sort algorithm.
What do you mean by selection sort giving logarithmic time? I mean, any algorithm finding anything in an unsorted list has to be at least O(n) or resort to magic.
FWIW, I don't consider bubblesort as trivia -- it's literally the simplest and most obvious sorting algorithm. IMO, it's a lot harder to understand more complex algorithms unless you have at least these basics in your head.
Knowing the atomic number of all the elements on the periodic table is also very simple to memorize. The fact that you don't know them all isn't because it's hard nor complex. The same thing applies to bubblesort and every other thing you might not know off-hand without googling.
Watch Jeopardy and ask yourself if any of the questions are particularly hard or complex to memorize. Perhaps that's not why you don't know them.
I've implemented bubblesort at least 10 times in uni. I remember a few of the sorting algorithms I had to impl, but I forget which one is bubblesort. It would take me 10 seconds to go "oh right, that one" if I ever need to see it. I haven't had to in the 15 years since uni. Remembering that bubblesort is algo A vs algo B without googling it the one time I need to know is worthless. Knowing higher level concepts of sorting and perf trade-offs, on the other hand, is not.
Having a set of trivia that you consider essential is itself trivia. I could have my own arbitrary list of easily googleable essentials, like the wall socket voltage in every country in the world. No matter how handy I claim that knowledge is, it doesn't pay its rent in the grey matter it takes up when you can just google it.
But you don't know what is at the other end of the spectrum you are on. What if someone knows these algorithms and has very good recall. Perhaps it becomes easier to reason about and mutate these ideas in their heads. Come up with new things, or do algorithmic things faster.
I still think problem solving with newly gained knowledge is the way to go for retaining things longterm, but if you aided that process with something like Anki for meaningful connected information, you'd certainly be better off compared to someone who just does the former
I think of memory, and lookups in humans similar to L1/L2 cache, RAM, Network and Disk.
Knowing stuff cold makes you incredibly fast. Doing a lookup of a problem you've solved in a personal project / elsewhere you have the code for is also very quick, but not as much. Having to perform lookups is akin to having to go out to RAM. Having to ask a coworker is slower still. And the slowest is having to learn something entirely from the ground up before you can process what you are seeing.
It all makes a difference to your throughput, and as you say your creativity!
That's actually completely incorrect. I say this as someone who was blessed with eidetic memory for the first few decades of my life. I could see or read something once and it stuck. I actually trained myself to not remember things that I found emotionally unpleasant and didn't want just popping up. I literally got tired of people telling me how smart I was, because I knew I wasn't. I was just capable of recalling what other actually smart people thought about the scenario in question or one analogous enough to more or less fit. It always felt incredibly empty.
It's clear to me that the actually smart people don't need to recall someone else's clever solution, they just figure it out on their own from the observables.
> It's clear to me that the actually smart people don't need to recall someone else's clever solution, they just figure it out on their own from the observables.
You gotta be able to make connections. If you have to figure out everything from observables, you're gonna be very slow at anything that isn't truly uncharted territory compared to others.
If your memory is lacking, you won't be able to make the leap of "here's what I'm seeing right now, here's a vaguely similar pattern from the past, what does that suggest?"
That's where "just google it" breaks down too because if you don't know that the symptoms you're seeing are characteristic of a race condition, say, you won't know to google for "race condition"!
How did you learn to forget specific things? That skill seems just as interesting to me as learning to remember specific things but I've never heard of anyone doing it, probably because eidetic memory (in adults, at least) is rare.
If you don't remember something then your brain can't do pattern matching based on that information. And pattern matching is the basis of any thinking process. So, yes, Google won't help you with that.
But, you can retrieve and memorize just patterns from the information and forget the details. This is my way of thinking, I believe. Still far from optimal, because you can miss important patterns before forgetting the details.
All of the Very Successful People who I've had the privilege to work with have been disgustingly well-organized and disciplined, and had almost supernatural memories, including memory for fine detail like the specific values of relatively inconsequential numbers they saw six months ago in many cases. I can't help but wonder if these two things are related.
The greatest advantage of memory is that you can connect what you learned today to what you learned before and thus map out the structure of the world.
When I got into programming it took me about 10 years before I could search anything online, the best I could hope for was getting something out of the local library.
Even BBSes were available only to the select few who could afford bloody expensive long-distance calls.
I remember being very meticulous about my learning process and tried to build a sandcastle of knowledge. I had notebooks for every subject, I went over them every so often just to be sure I don't forget stuff etc. I was living in a 19th century time-frame and hoped to be one of those autodidacts, Renaissance men, polymaths who knew everything there was to know about the world they were living in. Romantic and vain dreams, that don't match our times.
I grew out of it and the truth is similar to Sturgeon's Law: 90% of everything is crap! So, why waste my time on trying to memorize stuff I don't really need, just to satisfy my own superiority complex and to feel like an erudite man? Now I'm using Just-In-Time approach: If I don't know or can't remember something, I will look it up.
That's only true if you memorise facts at a surface level. If you're into history, then knowledge builds context. It gives you an increasingly higher resolution understanding of what was really going on.
Some knowledge must also be tied to real world experience. It took me a while to get the hang of navigating with the sun, but now I do it subconsciously.
The same can be said about programming, cooking, gardening, woodworking etc. You can't Google it when you need it, because it's a collection of techniques you pick up over time, with practice.
JIT is what I learned in my higher education in the end, together with the rise of Google.
When learning programming, I had a book on Java; ended up reading the first two pages and doing the exercises, after that everything was a quick google away. So much Javadoc.
10-15 years on, pretty much the same. I feel like I've forgotten more than the renaissance men had learned by the time they got to my age. I think it's easy to underestimate how much you know now, compared to the romanticized 19th century fellows. And we're fairly common people, whereas those people were often in the higher circles of society (I mean a lot seemed to spend their time taking long walks to chat or write letters to their peers, doing science and philosophy and shit without having to worry about income. That may just be the romanticized view / biographer's fault though)
> I think it's easy to underestimate how much you know now, compared to the romanticized 19th century fellows
well, going back to the Renaissance men they might have known a greater percentage of available knowledge, but I don't know that anyone has figured out how much of that knowledge is wrong - obviously if someone is a polymath with a deep knowledge in alchemy, that deep knowledge should perhaps count as something of a negative.
It's a skill to notice when to simply look something up.
It's much easier to just get frustrated and blame some humans. That makes the time between an incident and simply looking something up even longer. Some people might never simply look something up as their frustrations exceed their lifetime.
A few days back I stumbled upon a video about Ramanujan [0] and the claim that 1 + 2 + 3 + 4 ... (to infinity) = -1/12. (Turns out the proof in the video is incorrect, or at least not rigorous, because you can't manipulate divergent series like that.) So that got me curious and I wanted to understand what's going on. How can 1 + 2 + ... equal -1/12 and not infinity? So I went on a small binge watching videos to get an intuition and understanding of this. This led me to the Riemann Hypothesis and Euler's identities and complex numbers etc. Now I have a list of books that I want to skim/read to get a historical context around these problems. I don't take any notes just yet, because I don't know what is important and what is not; I just jump into the stuff and hope that when I come out the other end I will at least have an understanding of why it equals -1/12. It's like panning for gold: there might be some gold nuggets. And maybe, just by pure chance, I come up with some ridiculous idea for how to disprove the Riemann Hypothesis ;P (I think I have to revisit Gödel, Escher, Bach and Gödel's Incompleteness Theorems to prove that it's not provable).
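For what it's worth, the standard resolution is that the value comes from the Riemann zeta function rather than from the divergent sum itself:

    \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} \quad (\mathrm{Re}(s) > 1), \qquad \zeta(-1) = -\frac{1}{12} \text{ by analytic continuation}

So "1 + 2 + 3 + ... = -1/12" is shorthand for the analytically continued value at s = -1, not a literal sum.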
I'd recommend you study some real math for this. Popular science articles and books are great entertainment but are distinctly different from the real deal.
You don't become a surgeon by watching Dr. House or a historian by watching the history channel. Don't expect to understand math by reading a wikipedia article. To really get it, you need to dive deep and get your hands dirty.
To add to this, people underestimate how long it might take to understand material at a proper level.
My favorite quote in Axler's Linear Algebra Done Right was that if you take less than an hour per page, you're doing something wrong. I agree with him.
Don't get me wrong, learning math is definitely worthwhile, but please don't think you can skim some books and gain profound insights without hard work.
Yeah, I think JIT only works when you have an Internet connection or at least, a smartphone. Before them, I simply had to remember more stuff, now I can look up details any time (and my own notes don't take hundreds of A4 pages that I loathe going through anymore).
I think it's messed me up a bit, tbh. All I really remember now is the conclusion/result of a previous multi-hour/day learning quest. Why or how I arrived to that conclusion, I don't know. I just know that at the time I deemed it to be the correct answer ¯\_(ツ)_/¯
> If I forget everything I read, I can’t apply my knowledge to the problem at hand. I can’t transfer it.
Which is true, but I'm not really sure your solution is the best way to solve this problem. When you learn React, you are not trying to memorize the full book; you are trying to learn to code in React, which is a really different thing. The world champion of French Scrabble doesn't speak French. [1]
To me you are over-engineering the learning process. Learning benefits a lot from forgetting; there is no point trying to memorize a full book, so it's normal to forget most of it.
There are benefits to memorizing a full book, but not for the case you are showing as an example (learning React or JS).
Thanks for your comment! I agree with you on the benefits of forgetting and that learning how to code is way more about practice than theory. I should have added that my primary use of the process is to understand concepts well rather than memorize how to write JSX.
I would not say that there is no point trying to memorize a full book. It is an inefficient use of one's time, sure. However, I believe it can have some benefits. Typically, the process of trying to memorize (assuming one does not do it in the dumbest way) leads to a deeper understanding of some subtlety of the subject.
Spaced repetition definitely works (and is much better than nothing), but I found it to be quite dull.
Recently, I've been reading "The memory code" by Lynne Kelly and found that most of the ancient cultures faced the problem of memory and used memory spaces (aka "method of loci").
I tried it out recently to memorize the timeline of evolutionary history while walking city streets. I found I could memorize 1-3 items per minute, with very high recall hours and days later. The structure between items was readily apparent, so that I could, for instance, easily estimate that there were ~650 million years between the first terrestrial eukaryotes and the first appearance of sharks.
The streets have acquired new meaning and the process was really quite enjoyable. With this tool, I actually want to learn the structure of many more things.
If people like these things, I recommend The Memory Book by Harry Lorayne (he's written a few other memory books as well). I read the book and tried its techniques when I was young.
Interestingly, I started doing spaced repetition more recently (2 years ago), and I found it more effective than the method of loci or anything else in his book. Yes, I may remember things using the method of loci days, or even months later, but usually not longer than that. Spaced repetition is a system that forces you to recall things later. Probably one could couple the two, though. Maybe if a SR system forced me to recall something I learned with the method of loci, I would claim the latter to be effective.
I really did like the "phonetic alphabet" method in his book, though. I use it occasionally to remember things for the short-medium term.
I suspect the method of loci works because our minds seem to remember routes through spaces better than lists. If you’re curious about spatial cognition, watch this intro to the subject from Barbara Tversky: https://youtu.be/gmc4wEL2aPQ.
I was worried about this, as I don't have a great knack for directions. However, I found that binding items to locations on the street actually helped me remember the layout of the streets as well.
Also, you don't need to use a whole street, you can encode a lot in just a room or two!
--
Relatedly, I know some people say that they don't have visual imagery: https://en.wikipedia.org/wiki/Aphantasia .
I'm really curious how navigating spaces works for them. Do they have a problem with directions? Could they use memory spaces, but without visual imagery?
So I am pretty high on the aphantastic scale (It sounds better when I conjugate it like that).
I don't have a car license; despite growing up driving vehicles on farms and years of trying to get my car license, I can't comfortably drive a car because I cannot hold in my head the physical size and shape of the car. I have a full motorcycle licence and no problem with a bike, but as soon as I step up to a car I just can't estimate distances or space. (This cost a lot of fence posts on farms and could be deadly on a highway.)
With navigation I'm mostly fine, because I can remember lists reasonably well. To get from my house to my sister's is left, left, right, left (straight for 30km), right, left, right, left. Do I turn the final right at the 4th or 5th street down the road? I never remember until I'm in the intersection and recognise it.
I've tried a memory palace, and it only works for me with a space I know really well and a limited range. For example the 10 rooms of my house I can use, I've lived here for 15 years. I couldn't use different parts of the rooms because I just can't visualise them. But since I can generally remember short lists pretty easily I don't find a 10 room memory palace helpful.
Lastly I, like most people I suspect, remember songs and rhyme better. Want to know the periodic table? I can sing it at you (Thanks ASAPScience).
The worst part about aphantasia is trying to explain it to someone who hasn't got it. I can recognise my favourite places instantly in a picture, but ask me to describe them and be prepared for a cubist impression. Heaven help anyone who is relying on me to complete a police identikit...
Question: how would you change your method if you didn't have 3 hours to spend every day on pure, focused learning? What if you didn't know how much time you had, or if you had time at all, and your time came only in very small bursts of free time?
When I was at the university I had all the time in the world for learning, and I was (artificially) incentivized to learn and retain a lot. Therefore my learning method was really different, rich, deep, optimized for the available time and the broad task.
Since I started working, my free time has shrunk by ~90%, and all the incentives for understanding deeply anything have gone to nearly 0. Now I can't afford having a big time slot just for learning, and I changed myself from being engineered to learn and retain theoretical knowledge in bulk to just be able to learn on the fly only the things that I need to use (this has been a positive change, actually. It was a painful change but I realized university had me optimized for something useless for practical purposes).
More and more I'm not even trying to retain in any way the small bits of knowledge, documentation, etc., that I encounter. I just read something up, I use it, and I forget it. For example, sometimes I program in programming languages that "I don't know". I just read up the syntax for the constructs I need, search stuff on stack overflow, and then forget everything.
Before being able to consider applying your method as a learning method, one has to overcome the time scheduling problem of finding 3 straight hours. Your blog has a "Why I wake up at 5 a.m." entry. I suspect that's what makes it possible for you in the first place?
Maybe, but how many people actually read the whole thing? Personally I'm almost exclusively here for the comments and rarely ever click into a link. Maybe that makes me a bad internet citizen, but I don't think it makes me unusual.
> Personally I'm almost exclusively here for the comments and rarely ever click into a link.
On HN, I often take a look at the comments first, to get a feel for whether the linked material is likely to be informative or thought-provoking. If there is an interesting discussion starting or if there is a submission that is getting lots of upvotes yet few comments (as this one was when I first saw it this evening) then that's usually a good sign.
FWIW, I've just spent well over an hour reading the article here all the way through, following some of the links to further pieces by the same author and others, and thinking about the ideas. It was probably the most interesting material I've seen all week.
Thanks for linking this. I've been reading through it and I've found it very interesting and somewhat inspiring so far. I'm especially interested in his suggestion to use Anki to memorize things in your personal/social life, past events, and hobbies. I might look into these more creative ways to use Anki as a tool for establishing long term memories of things that are actually important to me.
How do you answer your Questions at the start of the next session? Are you answering based on learning that occurred since the question was written or are you looking up the answers? If the former, do you roll Questions forwards until you are able to answer them?
Hey really enjoyed your post! I was hoping you could expand a little on exactly what sort of things you write in the second file. Are you writing down every "on topic" thought that floats through your head, or are you mostly writing down thoughts after reacting to something prompted by the thing you're learning? The example you have seems to be more the latter, i.e. stuff like "Now let's fix this spaghetti in the render method".
Is there a particular reason why you use 3 files instead of one where you (for example) prefix question lines with a Q and random thoughts with a T? It seems to me that the cost of constantly context switching between files might get in the way.
No particular reason; I have a shift+[ shortcut in Drafts that makes the switch between recent files almost instant. I’ve done both and ended up sticking to a separate file with questions.
Follow up: How do you quickly import the questions that you write in the plain text file into Anki? And how do you organize information into different decks?
I have one deck only because of mixed practice (ref: https://youtu.be/cwaWHeyK_aM). As for import, it sucks. Currently I just copy & paste them on my iPad.
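If it helps, here's a minimal sketch of how that copy & paste step could be scripted, assuming the questions live in a plain text file with alternating "Q:" / "A:" lines (a made-up format) and relying on the fact that Anki can import tab-separated text files, one note per line:

    # Hypothetical converter: plain-text Q:/A: pairs -> tab-separated file
    # that Anki's File > Import dialog accepts (front TAB back, one note per line).
    def questions_to_tsv(infile="questions.txt", outfile="anki_import.txt"):
        cards, question = [], None
        with open(infile, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if line.startswith("Q:"):
                    question = line[2:].strip()
                elif line.startswith("A:") and question:
                    cards.append((question, line[2:].strip()))
                    question = None
        with open(outfile, "w", encoding="utf-8") as f:
            for front, back in cards:
                f.write(f"{front}\t{back}\n")

    questions_to_tsv()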
I used Anki to remember anything. It worked the best. It served its purpose.
Then I read Erich Fromm's "To Have or To Be" - and it changed me. It made me question if I was just "having" all that knowledge or really "practicing" it.
It felt good to know lots of interesting facts of chemistry, physics, geography and code. But I just "had" them - never "enjoyed" them.
Will highly recommend the book. For a moment, pause and re-think, why do we need to remember everything?
If you look at most advice given on using spaced repetition well, the standard advice is "Don't learn random stuff" and usually some other advice on ensuring what you put in your cards is meaningful to you.
For me, there is a bit of effort required in creating the card. That alone makes me only put in things I think are meaningful to me. Lots of times I learn something interesting and think to myself "I should make a card out of that", but it's too inconvenient. As such, most of my cards are for things I am intentionally studying.
I'm a mapper http://wiki.c2.com/?MappersVsPackers I hold many ideas, some of them contradictory, in limbo in my brain until they match up with others, and my map of the world gets a bit larger and better connected.
I strongly suspect you are a packer... and wow, that's a lot of work.
My approach to getting a skill is to dive in, recklessly, and pound away, until I've beat the technology into submission, and it does it exactly what I want, how I want.
I strongly suspect this approach wouldn't work for you. (Unless you did it a lot, and become a mapper if that is even possible)
Lots of good ideas here and I'm probably going to experiment with some of them as I more deliberately build out my own process.
I have one major gripe that I think is worth pointing out: it may behoove you to be more selective in applying this rigorous treatment, if you aren't already.
You sort of touched on this with your ideas about sampling books before committing, realizing that most stuff is probably not worth reading. Similarly, most knowledge is not worth running through a high friction retention process.
Every unit of time you dedicate to a particular learning task has an opportunity cost. There's a roughly multiplicative factor applied to this cost per task with every additional layer in your retention process. You really want just enough process to get close to an optimal point on the tradeoff between 'exploiting' particular knowledge versus 'exploring' other things.
The above is also why I've started to conceptualize the learning process as a funnel. At the top end is inbound content in the form of books, blog posts, videos, etc. The bottom is what becomes indelible learning and enhances understanding. There are several stages in this funnel and what flows through each stage should narrow (both naturally and as a consequence of process) as you progress. With this in mind, I would argue that there should be at least one or two intermediate funnel stages between inbound content and the process you've described.
This is pretty much the method I settled on while studying as well, but I use Obsidian.md just for linking while doing my "metacognition", as the author calls it. The important thing for me is that while in my 25-minute block of study I don't go hunting for those links to my other files, though sometimes they just come to mind. To me, that's a signal that a particular note has 'stuck': I'm recalling it while writing about other things.
I have found that getting the things I want to learn about into audio form (if they're not already a podcast), be that by finding recordings (or stripping the audio from videos with newpipe or youtube-dl -x), or by using ebooks or OCRing books to text files and playing them in a TTS app, and then putting headphones on and riding my bike (being sure to still be able to hear traffic) or rambling around in a forest or on a mountain, is a very good way to make things memorable. It's a low-effort loci method, I guess. It's never as good as sitting down and purposefully making a memory palace and then doing recollection practice, but it takes much less energy.
It's also good for things you can't be bothered reading. This morning I listened through the H1B visa thread this way. (And heard some interesting bug on the HN site where the posts started to be "one million minutes ago", then "buffer overflow minutes ago"...)
Great read, I kind of do this but on just one page in a notebook. Splitting it up into 3 'areas' (for want of a better word) makes a lot of sense and makes those notes a lot more useful, thanks!
I think a few commenters are missing the point or just skimming over the post. It's not about memorising a 'book' or subject by rote, it's about being able to consume, rearrange and remember the subject in a way that's more suitable to how you remember or think about things.
It not only clears your thoughts as you study (allowing you to move through the material quicker) but aids in recollection with better organised note taking.
Cheers
This post touches many topics I learned in a MOOC about meta-learning: focus time (there's also the important diffuse time, eg shower thoughts), recall, practice, forming long term memories, resting. I highly suggest the MOOC, it's free without certification, and you can start any time:
Realizing that one forgets the majority of what they learn day-to-day is a harrowing realization. It was for me, at least.
I built this simple web app to help me keep track of what I learn, and later opened it up to the general public. I've gotten good feedback so far - interested to hear more: https://wyl.today
In college, a very long time ago, I read about a strategy for remembering things: imagine your house, and place each specific thing you're trying to remember in a specific spot in that house. I'd put formulas in a drawer.
Similarly, I've been putting meeting notes, life decisions and reading notes in RoamResearch in the hopes that it can replace my "crappy" natural memory at least a little bit. But I haven't really collected any fruits from it (a few months now).
it's a really interesting post, and I really like your approach to studying. Thank you for detailing it so clearly.
I've never used Drafts and I'm curious about how do you keep your files organised after a session. You write a sort of recap of what you've learned, that's clear. But how are the other 3 files grouped together? Only by date or timestamp? Do you ever go back to or search through your old files?
Also, another curiosity: instinctively I would keep my study material on the iPad and take notes on the laptop. What's the advantage of doing it the other way around?
mild resentment at the "you'll just memorize the tutorial and your knowledge will be shallow"-- seems mildly judgemental and contradictory of "life's too short to do things that don't interest you"
We don't always get a choice in not pursuing what we don't care about. Being aware of the fact that unless you explore a subject beyond what it says on the page, your knowledge is literally shallow knowledge is an important thing to know.
Shallow knowledge isn't a moral failing either. Most things I do for fun benefit from shallow knowledge of something else that I don't find intrinsically interesting enough to study deeply.
If you're exploring and tinkering, it's OK to go broad and skim through things quickly. But if you aim for understanding, depth is required. I don't see a contradiction here with the "life is too short" part; it's more about curiosity (exploring) than studying (exploiting).
That doesn't seem contradictory to me. There's nothing inherently wrong with shallow knowledge. Shallow knowledge is only bad if you apply it or advertise it as more.
Knowing when you need depth and when it's enough to just mimic someone/something is a career skill.
That's pretty dismissive of the point of the article, which focuses on quickly synthesizing your own interpretation in notes, setting up systems for spaced repetition, and filtering what books to even use this system on (by skimming free samples online before purchasing).
It's not about memorizing useless information, but about how to remember all the ideas you've learned so you can actually use them later. Especially if you're learning a lot all the time
I guess that depend on whether those digits were meaningful information to them or not. Because if they're "just digits", then there's nothing to remember.