
A preceding B does not imply that A causes B; it only implies that B doesn't cause A. Which doesn't really reduce the world of possible causal relationships by all that much, since there may be any number of other factors that weren't considered.

For example, it could just as easily be the case that adjustments to dysfunctional aspects of the company's internal politics improve both morale and productivity, but the morale change becomes apparent more quickly.


Correlation does not imply causation.

Little known trick to establish that A causes B is to trigger A at will and see if a B is observed.

If you trigger A 100 times and a B follows 98 of those times, then you have established causation to a degree. Similar to the correlation coefficient, there must be some causation coefficient.


You might infer it, but not triggering A should also give fewer Bs than triggering A does.

"Turning on my TV at 5 a'clock causes channel 4 to broadcast The Simpsons."


Good point. If B was just happening all the time, then a B would always happen after A. Have to make sure B doesn't happen just as often without A.

Though if B happens all the time but A also causes B, then there's really no way to establish causation.

"The Channel four studio has electrical sensors that will trip and broadcast the simpsons either at 5pm or when I turn on my TV at 5pm."


Sure. There is a subtle difference between "things that make employees happier also improve business performance" and "making employees happier improves business performance", but in both cases you generally want to do things that make employees happy if you want business success (you just might want to be a bit more selective about which happiness-boosting things).


Yeah, confounding* variable C causes happiness and profitability (A and B).

* https://en.wikipedia.org/wiki/Confounding
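
A minimal sketch of that situation (Python, with made-up numbers): C drives both A and B, so A and B come out correlated even though neither causes the other.

    import random

    n = 100_000
    a_count = b_count = ab_count = 0
    for _ in range(n):
        c = random.random() < 0.5                  # confounder: healthy internal politics
        a = random.random() < (0.8 if c else 0.2)  # high morale
        b = random.random() < (0.8 if c else 0.2)  # good profitability
        a_count += a
        b_count += b
        ab_count += a and b

    p_a, p_b, p_ab = a_count / n, b_count / n, ab_count / n
    print(f"P(A)*P(B) = {p_a * p_b:.3f}, P(A and B) = {p_ab:.3f}")
    # P(A and B) comes out around 0.34 vs. 0.25 for P(A)*P(B):
    # correlation with no causal arrow between A and B, just the shared cause C.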


Local governments, too.

In my neck of the woods, there are departments that have to make do with whole teams of low-skill employees because, while a more skilled person could do the work of at least four lower-skill workers, they would also require two lower-skill workers' worth of salary, and you just can't be paying any one person that much money because that would be Government Waste.


In America, it's probably more accurate to blame government labor unions for the preference for more lower-skilled workers over fewer higher-skilled workers.


Unions tend to prefer fewer workers to more workers for obvious bargaining-position reasons. What they don't like is losing existing jobs.

They also have more bargaining power when they represent skilled workers compared to unskilled ones.


How often do labor unions campaign to impose salary caps on their own members? The very idea seems out of character.


Unions like to set salaries, because there are more mediocre performers who stand to gain compared to free-market wages than top performers who stand to lose. They don't exactly want a cap, but they greatly desire a floor, and they accept that the floor also serves as a cap as a tolerable tradeoff.

Whenever that first part isn't true for a union, then the market pressure is for the top performers to leave the union for free-market wages, until it is.


It was my impression, not supported by much, that unions do like to set a salary for a role rather than letting that be up to the company or negotiated between the company and the individual employee. (Unions featuring superstars, like the Screen Actors Guild, are an obvious exception.)

I think it's an interesting question how a government workers' union would be likely to feel about a hypothetical restructuring that lost 75% of existing jobs while doubling or tripling wages. I tend to suspect that if the union already existed and represented those low-skilled jobs, it would be adamantly against the restructuring.

Doubling wages while not cutting jobs would obviously be fine with the union, but what would the point of that be?


Rarely, but if the union official gets to choose between more people paying dues and fewer people paying dues, which are they going to pick?

We know that unions don't oppose salary caps, but they do oppose differentials in compensation based on job performance. Seems like that amounts to the same thing? Unless you're suggesting we just pay government workers more and see if we get any marginal productivity out of them.


I can follow your hypothetical right up to the point where the union official commits career suicide by trying to piss off every single member of the union in a single stroke, and no further.


It seems that the endgame we're unwittingly asking for is WALL-E.


The hitch in that plan is that only one company can have their P/E ratio justified by being Buy-n-Large 100 years from now.


I suspect that that fact and our beliefs around it are being reflected in market prices, albeit with some heavy discounting for the uncertainty inherent in there still being so many companies out there.

So many people's first instinct is to respond by buying more shares of that company, thereby driving its price and market cap higher. That's a reaction that implies we think the general trend is toward Buy-n-Large. If we expected regression to the mean to be the driving phenomenon, then we'd be more likely to respond by selling.

(Disclaimer: I'm not suggesting that's actually how things work, just that a lot of us behave as if we think that's how it works, or should work.)


I honestly think that both of those are weak reasons. The first is a problem for other generations; on our own time scale we should focus on the problems that affect us on our own time scale. The second is just not compelling; space exploration is hardly the only endeavor that produces spinoff technology, and it's far from certain that it's the best or most productive way to do so.

There has only ever been one goal that has actually driven us to push our horizons further out into space, and I think it's the only one that really makes sense: We do it for the challenge and for the adventure.

As John F. Kennedy so famously put it, "We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard; because that goal will serve to organize and measure the best of our energies and skills."


I honestly think the first one mentioned will become a "primary" reason as soon as the first largish "came from the direction of the sun" doomsday asteroid slams into us, and makes us rethink some of our priorities.

Assuming we survive the event - and assuming it isn't "The Killer Event".

You know - something largish that takes out 3-4 major cities, and we didn't see until much too late (like the near-space flyby we just had - though it was smaller).

Then again - we are talking human society here - so even that probably wouldn't cause us to sit up and think "you know, we're kinda sitting ducks here" and do something about it collectively.

I mean, look at the number of natural disasters that happen all over the world, virtually every year, in the same spots. Do people really do anything to improve their chances next time, or do they say "it won't happen again next year"? And it doesn't, until a few years later when it does.

We're such a short-sighted species for these kinds of things. The dumb thing is, we know with absolute certainty that these events will happen, but because we don't know when, we decide for some reason to put off what we should be doing NOW. When it happens, we'll either not survive the event (and everything we have ever done will have been for naught - a footnote at best), or what remains won't have the means, perhaps ever, to rise to a similar level of technology to prevent it from happening again.


It's really hard to conceive of any disaster which would leave Earth less habitable than Mars.


Why does propagating human genetics to a different planet/solar system give human existence meaning?


> There has only ever been one goal that has actually driven us to push our horizons further out into space, and I think it's the only one that really makes sense: We do it for the challenge and for the adventure.

Individuals may think like that, but I think you're downplaying the Space Race. There was immense fear of Soviet space domination; it was seen as an existential threat. If there could be only one goal driving us to those furthest horizons, it'd have to be conflict.

From the same speech as your quote:

"For the eyes of the world now look into space, to the moon and to the planets beyond, and we have vowed that we shall not see it governed by a hostile flag of conquest, but by a banner of freedom and peace. We have vowed that we shall not see space filled with weapons of mass destruction, but with instruments of knowledge and understanding:

JFK to Congress

"If we are to win the battle that is now going on around the world between freedom and tyranny, the dramatic achievements in space which occurred in recent weeks should have made clear to us all, as did the Sputnik in 1957, the impact of this adventure on the minds of men everywhere, who are attempting to make a determination of which road they should take. . . . Now it is time to take longer strides—time for a great new American enterprise—time for this nation to take a clearly leading role in space achievement, which in many ways may hold the key to our future on Earth"


Challenge? For the individual and their patrons, sure.

The country or company? It's lured by the competition and by wanting to be the one winning the prize.

But as a society? When the inevitable "why fund space travel / fundamental physics / ..." question is asked? Side gains is where it's at.


> I can't think of a single large software company that doesn't regularly draw internet comments of the form “What do all the employees do?"

I also can't think of a single large software company that doesn't, in the long run, survive by either endlessly buying and extracting all the value out of smaller software companies, or creating a moat that makes it almost impossible for new competitors to emerge.

Having worked for a small software company that got acquired by a large one that used the former survival strategy, and therefore witnessed firsthand the subsequent doubling of headcount and simultaneous halving of overall productivity, I can offer one of what I assume are several possible answers: Paperwork. And lots of it.


I don't think large companies just "extract value" from the small ones. They prove that the small company had a great idea and make the idea successful by providing sales, marketing and support that the small company wouldn't be able to provide (if for no other reason than reduced risk and existing customer relations).


It might depend. In the case I experienced, sales, marketing and support were already in place and quite successful beforehand, and the acquiring company did an admirable job of teaching all three how to do less with more.

To take the example of support, pre-acquisition, the support team had a reputation for being one of the best and most knowledgeable in the industry. That reputation went up in smoke rather quickly when the acquiring company decided to merge support into their existing support vertical and adopt its existing policies, including things like requiring the support and the product development teams to communicate through JIRA instead of using informal channels like instant messages or (in exceptional cases) pulling a developer into the call.

The end result was that the mean time to find a resolution for the non-routine issues went from hours to days. Ironically, while management claimed this would reduce disruption for the development team, it actually increased the time that developers had to spend supporting the support team, since it made communication so much more difficult. Which, in turn, meant that it slowed down both customer support and product development by bogging them both down with paperwork.


Or killing that small company to protect one of their own products.


Scrum, in particular, is a fascinating beast. Looking at how it was in the early days, you can see that a huge motivation was to try to get the non-development bits of the process under control. For example, one of the big ideas behind the whole sprinting thing was to limit the time during which the requirements could be changed to one day in, say, every 10. The whole idea behind "as a / I want / so that" was to try and make it so that a clear motivation and context were being communicated to the development team.

And then, over time, it became clear that, for whatever reason, management just wasn't going for it. So all these different bits and bobs pivoted and morphed and re-scoped into a process by which the stone tries to squeeze more blood out of itself.

IMO, the most valuable bit of Scrum isn't user stories, or story points, or sprinting, or having a Scrum Master, or burndown charts, or anything like that. It's the idea of having a single, designated person whose job is to tell people, "no": The Product Owner.

The dirty secret is, velocity is a crap metric. It unskeptically measures total things implemented, even though we all know that only a portion - I'm guessing often less than half - of those actually needed to be implemented. Meaning that the best way to increase a product team's actual productivity isn't to increase velocity, it's to maximize the average usefulness of features being implemented. Preferably by identifying the useless and less-useful ones, and deciding not to implement them. Or, equivalently, identifying the ones that seem most likely to be useful given the currently available information, and making sure those are always the ones at the top of the to-do list.
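
A back-of-the-envelope illustration (Python, purely hypothetical numbers): raw velocity rewards the team that ships more stuff, while "useful points delivered" rewards the team whose PO said "no" to the stuff that didn't need building.

    # Hypothetical teams: (story points per sprint, fraction of those points
    # that actually needed to be implemented).
    teams = {
        "chases velocity": (50, 0.40),
        "ruthless PO": (30, 0.90),
    }

    for name, (velocity, useful_fraction) in teams.items():
        useful_points = velocity * useful_fraction
        print(f"{name}: velocity {velocity}, useful points ~{useful_points:.0f}")
    # The 50-point team delivers ~20 useful points; the 30-point team delivers ~27.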

My suspicion is that, the better your product owner is at that particular job, the less need there is for any of the fancy ceremonies. Because most of those ceremonies aren't there for the developers' benefit; they're really only there to make it easier for the PO to scope and prioritize features.


Scrum has been twisted into a way to gaslight developers by shaming them for their estimating skills when that was project management's job the entire fucking time. It's the biggest dodge going right now. A massive case of deflection and, dare I say, projection.

If the quality of your estimations has ever come up on your annual review, that's them bargaining you down by making you feel bad about yourself.

Someone in a video I watched recently pointed out that story points per week is a graph plotting time on both axes.

One of the earlier agile methodologies (FDD) had one thing figured out: the law of large numbers works just fine for long-term estimation, as long as you can identify the stories, and the range of story 'sizes' is within an order of magnitude (eg, a day vs 2 weeks). You don't have to give a shit if a story is 4 points or 7. That's a waste of everyone's time and especially energy. It's horizontal aggression condensed into a management model. We need to start refusing, as a collective, to engage. The only discussion you need to have is whether this story is less than two weeks, more than two weeks, or way more than two weeks. Those happen at a much lower frequency.
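
A sketch of that bucketing idea (Python; the bucket sizes and story mix are made up): each story only ever gets one of three coarse labels, and the aggregate still yields a usable long-range estimate because the individual errors wash out.

    import random

    # Hypothetical midpoints, in working days, for the three coarse buckets.
    BUCKET_DAYS = {"under_two_weeks": 5, "over_two_weeks": 15, "way_over": 40}

    def rough_estimate(bucketed_stories):
        """Long-range estimate: just sum the bucket midpoints."""
        return sum(BUCKET_DAYS[bucket] for bucket in bucketed_stories)

    # 60 stories with a made-up mix; nobody argued about 4 points vs. 7.
    stories = random.choices(list(BUCKET_DAYS), weights=[6, 3, 1], k=60)
    print(f"roughly {rough_estimate(stories)} person-days of work")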


There is one thing I really like about story points: Disagreement about them drives a whole lot of useful conversation, and can reveal communication problems and misunderstandings that are difficult to root out otherwise. That, in turn, should give the PO useful feedback to help with refining the requirements. Which means that story pointing meetings should, in theory, have a huge multiplier effect on productivity, where every hour spent on activities like story pointing saves many hours of effort wasted on building unnecessary or mis-scoped or miscommunicated features.

But that requires a very engaged PO who really gets what grooming is about. Also, I don't know what it is with MBA types, but it really seems like anything that can be turned into a KPI will be turned into a KPI, without anyone ever pausing to think about whether it makes any sense to do so. And that makes story points radioactive: In the absence of intense and intelligent regulatory oversight, their potential value is more-or-less negated by their potential for abuse and misuse.

Incidentally, this is what fascinates me about the Forth approach: The unflinching dedication to stripping the system down to only the things that you actually need. The problem I see is, the Forth way of doing it seems to assume you're working with an army of one. How do you scale that up to a modern product team that may comprise 10, 50, 100, even 1000 people?


The more time I spend dealing with blowback from excess complexity being imported in the form of 3rd-party libraries that offer complicated solutions to simple problems, the more I think that Chuck Moore was very, very right on that point.


I agree with you, but I wonder what his answer to stuff like GUIs would be. There’s a tremendous amount of complexity and domain knowledge in stuff like drawing fonts, and in cryptography, and so forth — and very very few of us have the time to become competent in even one of those, let alone all of them. Then consider the amount of work necessary to have a modern browser: text parsing of not one but three languages, language interpretation, more graphics, more cryptography.

It would be awesome to get back to first principles, but modern systems try to do so much that I wonder how practical it would be to reinvent them — and I wonder how practical it is to say, ‘well, don’t do that then.’


I don't know what Moore would say. Personally, I've retreated to the back end - used to be full stack, but I'm just sick to death of how overcomplicated front-end work has become.

I'm inclined to say that, e.g., the modern browser is a cautionary tale that complements the Chuck Moore approach to things: By forever piling thing on top of thing in an evolutionary way, you end up with a system that ultimately feels more and more cobbled together, and less and less like it ever had any sort of an intelligent designer. Perhaps the lesson is that it can be worthwhile to occasionally stop, take a real look at what things you really do need, aggressively discard the ones you don't, and properly re-engineer and re-build the system.

Obviously there are issues of interoperating with the rest of the world to consider there, and Moore has made a career of scrupulously avoiding such encumbrances. But a nerd can dream.


Also consider that all we know is a world that has become more global, open, and relatively peaceful post-1970s. If collaboration were to slow or decline, open-source would be harmed. And/or if Google and Facebook lose their dynamism from politics, regulation, and maturity, corporate-sponsored open-source could be shaken. Google could become like AT&T and Facebook like Ericsson or something in some way.

Once-unstoppable sectors, like aerospace (to mix comparisons), began to reverse and decline in the early 70s. No one really saw it coming. I can't think of one publicly known or credible person who called it in 1969, shortly after the moon landing, at least on record. An oversupply of engineers in the US and the West became a thing. And engineering still suffers here because of aerospace's decline. Forth began to lose steam around then, right? Forth, hardware, and Cold War (barriers) politics are inextricably linked, perhaps. And then GNU/Linux and BSD saw their high-collaboration paradigm birthed around that time. Nixon/Kissinger talks with a closed China began around then too, and now relations are breaking down with a more open China.

Look how Lua scripting came about not terribly long ago. Some parallels. Brazilian trade barriers. Now half believe Huawei is evil. The cross-hardware story may be cracking. Many believe Google is evil. Open software may be cracking. And there are rifts between the US, EU, and China on how to regulate the internet. A new Cold War may be brewing. It's a nerd's nightmare.

If anyone can tie in distributed ledger and specialized AI coder productivity tools, or something to counter this argument or round it out, that would be awesome.

EDIT: I was mistaken. Forth caught on with personal computer hobbyists in the 1980s, per Wikipedia. However, as a career or industry, slowdowns in NASA and Cold War spending seemed to take some wind out of Forth's sails. I've noted that a lot of that type of work was what paid people to write Forth. And the open-source paradigm with C/C++ and GNU/Linux was even more limiting, I believe.


“I agree with you, but I wonder what his answer to stuff like GUIs would be.”

Couldn’t say exactly, but it’d probably look something like this:

https://en.wikipedia.org/wiki/Display_PostScript

:)


As far as I recall, Display PostScript was display only - what you really want is NeWS which used PostScript for display and for building applications:

https://en.wikipedia.org/wiki/NeWS


Potayto, pohtato… it’s all Polish to me. ;)


Ehh... the reductio of this argument is writing everything in assembler (libc? giant hunk of third party code right there). I surmise that, by comparison, the blowback you encountered was relatively minor.


No, not writing everything in assembler; this isn't about high level or low level. It's about writing things yourself for what you actually need.

Because most of the complexity comes from code (esp. libraries and drivers) trying to solve a larger problem than you actually have.

That's the same reason why, when you follow that logic, you eventually write your own Forth. Not because it's fun. Not because you want to learn about Forth or compilers. But because my Forth solves my problems the way I see fit, her Forth solves her problems the way she wants, and your Forth is going to solve yours the way you want.
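
In that spirit, a minimal sketch of the idea (Python, a toy rather than real Forth): a stack machine whose entire vocabulary is the handful of words one particular problem happens to need.

    def run(program: str) -> list:
        """Evaluate a tiny postfix program using only the words *I* need."""
        stack = []
        words = {
            "+": lambda: stack.append(stack.pop() + stack.pop()),
            "*": lambda: stack.append(stack.pop() * stack.pop()),
            "dup": lambda: stack.append(stack[-1]),
            ".": lambda: print(stack.pop()),
        }
        for token in program.split():
            if token in words:
                words[token]()
            else:
                stack.append(int(token))  # everything else is a number literal
        return stack

    run("2 3 + dup * .")  # prints 25: (2 + 3) squared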


It is entirely and completely about high level vs low level.

"High level" means details abstracted away and solved so you don't have to think about them. Our CPUs understand only the most primitive of instructions; the purpose of all software is to climb the ladder of abstraction, from a multiplication routine abstracting over repeated addition, to "Alexa, set an alarm for 8 AM." To write things yourself is the very essence of descending to a lower level.

Abstraction comes at the price of loss of fidelity, yes - Alexa might not ask you to specify exactly what form your alarm will take - but the benefits are a vastly increased power/effort ratio. It's worth it, because most of the time you don't care exactly how a task is done - you just care that it IS done. And - mostly - your needs are not that special.

Frankly, sharing information on how to do things so that others can build upon them is the only reason we have technology at all. Perhaps you've read "I, Pencil"? With a lifetime of effort and study, you would struggle to create a single pencil drawing from "scratch". Chuck Moore's supposedly astonishing productivity notwithstanding, I notice that all of the software I actually use is a heavily layered tower of abstraction (and, curiously, none of it is written by Chuck Moore). It appears that by and large the choice is between layered, multi-author code - and no code at all.

https://fee.org/resources/i-pencil/


> Chuck Moore's supposedly astonishing productivity notwithstanding, I notice that all of the software I actually use is a heavily layered tower of abstraction (and, curiously, none of it is written by Chuck Moore)

Perhaps you never saw the images from the Philae space probe? Because that's an RTX2010 that powers it, one of Chuck Moore's designs.

Maybe you don't use Moore's software directly, but you never know when it has been used for you [1].

[1] https://wiki.forth-ev.de/doku.php/events:ef2018:forth-in-tha...


There's a significant practical difference between importing the complexity at build time versus as part of the running application. Building on top of a compiler is not the same thing as importing external code.


Software today is developed by teams, not individuals. Systems custom-fit to an individual programmer are next to useless. You need libraries of common code in order to collaborate effectively without duplicating effort.

See also: Emacs, the ultimate customizer's editor, easily shapeable to your particular needs -- and currently losing badly to Visual Studio Code which is only readily customized with configuration options and third-party packages. When you need to pair or mob, having a common toolset and vocabulary beats having a special-snowflake environment.


At least we can get rid of the bloated web apps once everyone begins to do so... but would it be possible if libraries were written in a way that makes it easy to integrate just a portion into existing code?


The problem there is that most "libraries" are actually frameworks.

The difference I'm drawing being, libraries just provide a mess of utility functions. Theoretically, even if your compiler won't strip the library stuff you don't need, you'd be able to take just the bits you need by copy/pasting a relatively small volume of code. And dropping the library would be a small change that just requires finding replacements for the functions and classes you were using.

Frameworks tend to involve some Grand Unifying Abstraction that you need to inherit from, and that gets imposed on your own code. Things tend to be so tangled together at a conceptual level that it's not really possible to use them in an a la carte manner. Migrating off of a framework tends to require more-or-less a rewrite of all the code that interacts with it.

To take some Web examples: jQuery's more on the library side of things. D3 is more of a framework. React is very much a framework.
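
A small sketch of the distinction (Python, with hypothetical names): the library is a plain function you call and could swap out locally, while the framework hands you a base class whose shape your own code has to take on.

    # Library style: a plain utility function. Dropping it means replacing one call.
    def slugify(title: str) -> str:
        return "-".join(title.lower().split())

    post_url = f"/posts/{slugify('Hello World')}"

    # Framework style: your code inherits the framework's Grand Unifying Abstraction.
    # (View is a stand-in here, not any real framework's class.) Migrating off it
    # means rewriting everything that was shaped around it.
    class View:
        def render(self) -> str:
            raise NotImplementedError

        def dispatch(self) -> str:
            return f"<html>{self.render()}</html>"

    class PostView(View):
        def render(self) -> str:
            return "Hello World"

    print(post_url, PostView().dispatch())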


Wow, that got me thinking. What if specialized AI code recommenders could sniff out solutions? Get away from libraries with objects or structs with methods that mutate. As more people realize composing functions (Forth has the concept of composing words, correct?) with fewer side effects is a good thing, I wonder if it's possible. There is some amount of my workflow where I'm looking at StackOverflow, my git project history or others', examples even on blogs (at least when I was new), or my little code snippet journal for stuff already solved. Automate getting idiomatic solutions from a StackOverflow or GitHub commits of sorts, or something. I know we are nowhere near, but FB's Aroma and others have the first-gen AI recommenders in the pipeline that, at a high level, do this. That way we are just dealing with code snippets. I've only read Forth code and introductions to it, but it seems all about composition. However, this is hard to conceive of with today's coding forums and repos, because most are gluing mutating library APIs (turtles all the way down) together. So a code recommender paradigm of this sort is chicken vs egg.


The problem here is that pharmaceutical companies' marketing departments have a deep conflict of interest. They're incentivized to encourage doctors to use the drug that makes their company the most profit, not the drug that is best for the patient.

And there is plenty of research demonstrating that this is exactly what they do, and that doctors are indeed swayed by it, because, as you say, they don't have the time to keep up with it all on their own.

And yes, there is a fundamental difference to consider here: My doctor has a fiduciary responsibility to do what's best for my health, to the best of their ability, and drug marketing compromises that. By contrast, nobody has any fiduciary responsibilities related to which brand of toilet paper I use.


Maybe an alternative to drug marketing would be to have an independent national or international group (e.g. NIH or CDC) inform doctors about new drugs based on their prior prescriptions, and to present the preregistered clinical trials at medical conferences just like any other research result to reach the rest.


At the same time, when a breakthrough happens like with Gilead’s hep C drug, a single governing body is far less likely to educate and inform doctors that a major disease can basically be cured, even if all the evidence points to this being the case. Think about how long it took the food pyramid to be changed despite all the evidence that you should be eating six loaves of bread each day.


A drug that good tends to sell itself.

There will be those that don’t change their practice ever once they graduate (seen a few).

Things get more complex when you have dozens of treatments for a condition with no clear winner.


I appreciate how you said “tends to sell itself”.

Before I got into the industry I thought “what the hell does marketing do?”. Then once I saw what they do, I realized: no, most drugs don’t sell themselves; there is too much inertia behind doing the same old thing.


The key word was "good": good drugs tend to sell themselves.

If you develop yet another blood pressure drug or cholesterol drug and want GPs to prescribe it, you're going to have an uphill battle.

A blockbuster drug for HepC (where treatments before weren't particularly effective and had ugly side effects) will have a much easier time becoming well known amongst the specialists that treat the disease.


Do you think there could be a substantial time lag in that case, though? Sure, some movies become hits with no marketing, but DiCaprio, Brad Pitt, and Margot Robbie have been everywhere marketing a movie with stellar reviews and a top director. Maybe that is a poor analogy. I’m not in the pharma industry, but to me, I’d hold my doctor accountable for prescribing me something I saw in a TV commercial, or for not doing the legwork to understand why it could be a beneficial drug.


Plenty of resources exist. But they’re boring/not flashy.

The government isn’t going to get away with hiring entire teams based on looks and then have a marketing line item for free fancy lunches for people that are definitely not starving.

Choosing Wisely is a broad approach that is somewhat like what you described.


My counter to your comment, and my own personal experience, is that doctors are fully aware of the manufacturers' conflict of interest. They know that the drug companies will present everything in the best possible light.

Other things that exist to counter this bias are competitors, who will provide a different perspective, and, most importantly, the FDA, which regulates all pharmaceutical promotion for accuracy and will quite swiftly drop the hammer on a company that bends the rules.[1]

[1]https://www.fda.gov/drugs/warning-letters-and-notice-violati...


Yeah, but this starts to break down when the pharmaceutical representatives can give laundered incentives (fancy "educational" dinners, "educational" yacht parties, etc.) to prescribe their product. In the US, pharma reps can see the prescription amounts of doctors to verify that they are actually prescribing their product (last I heard from a pharma CRM company in 2016). This seems deeply and fundamentally unethical.

Then you bundle all that up with the various studies that show that doctors (as with all professions) do a poor job with continuing education, so they are further inclined to take the recommendation from the pharma reps (which can be seen in the roots of the opioid crisis). I don't think we're in a very good place from a regulatory standpoint.


The practice of greasing the skids with lavish events ended a while back. Drug companies can't even give free pens to their doctors now.[1]

[1]http://phrma-docs.phrma.org/sites/default/files/pdf/phrma_ma...


Not to take a position in this debate as a whole, but I just want to interject that there is research suggesting that you actually are more likely to be affected if you are aware of the other person's conflict of interest. (Can't find a link right now...)

If memory serves me right, the theory is that it makes you overestimate your ability to stay objective and unaffected, so you're effectively lowering your guard, or something along those lines.


You should read "the honest truth about dishonesty" by dan ariely where scientific studies of conflicts of interest found folks fall for this stuff unconsciously and repeatably.


>They're incentivized to encourage doctors to use the drug that makes their company the most profit, not the drug that is best for the patient.

The job of the marketing department isn't "best drug for patient". That's the doctor's duty. The marketing department is there to bring awareness of the product to both doctor and patient. There is no conflict of interest at all.


I honestly wouldn't bother for programming, per se.

It's great for learning facts, so, if you wanted to, I suppose you could use it for something like memorizing a pile of library functions. But I'm having a hard time seeing that as an efficient thing to do in the age of Google and Stack Overflow and editors with auto-suggestion.

Or if you're a low-level programmer, and want to memorize a new architecture's assembly language, maybe.

Even on the language side: SRS is great for shoehorning a basic vocabulary into your head when you're first getting started with a language. It kind of sucks for learning anything but the most basic grammar, and for getting a feel for how to actually express yourself in a language, or comprehend the language as it would be spoken by a competent speaker. Comprehensible input is still the go-to method for that side of language learning. And once you get to the intermediate or high intermediate stage, SRS isn't even all that great for learning the vocabulary anymore, because it's hard to really internalize the nuances of more advanced vocabulary that way. You'll go farther faster by just watching a lot of TV and reading a lot of books, where you get to encounter the words living in their natural habitat instead of dead and dried and pinned to a notecard.


Wow, that's a lot of flashcard time.

According to Anki, I'm averaging 10 minutes a day on my language vocabulary deck. It does happen in a series of short, 1-2 minute bursts when I'm sitting on the toilet or whatever.

To that end, another useful thing I've discovered: Reviewing flashcards (and spaced repetition generally) is not a good way to learn. It's a good way to reinforce your memory of things you've already learned. The real learning happens when you make the flashcards. The time spent focused on each concept so that you can decide how to break it into a series of smaller factoids that are right-sized for a flashcard, and coming up with whatever mnemonic devices you'll be using to help you remember the concept, is where the learning happens, because that whole process involves a fair amount of turning it over in your head.

Meaning you really shouldn't ever use a pre-made deck. It may not seem like it at first, but it really is normally more efficient to make your own from scratch.


> Meaning you really shouldn't ever use a pre-made deck.

Agreed. The process of making your own deck is too important.

Not to mention, I've never found a deck that was more than 25% applicable to me. Even "advanced" Spanish decks had a bunch of chaff and beginner-content to wade through. And you start realizing you should've just invested the time making your own starting one year ago.

