But without any way of correcting for this silent majority, or any sort of predictable systematic tendency, so what? We don't know everything? There's a lot of things we don't know, and on most topics we hear from only a small fraction of people, both expert and otherwise. Why is this worth pointing out?
(The point of knowing about things like publication biases in science is that they are systematic: once you know about publication bias, you know that estimates are on net, higher than they should be, and this is something you can apply to evaluating science that you read.)
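The direction of that bias can be checked with a toy simulation (a minimal sketch; the effect size, sample size, and significance cutoff here are hypothetical, not from any real study): if only statistically significant results get "published", the average published estimate overshoots the true effect.

```python
import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.2   # the real underlying effect (hypothetical)
N = 30              # per-study sample size
SIGMA = 1.0         # per-observation noise

published = []
for _ in range(10_000):
    sample = [random.gauss(TRUE_EFFECT, SIGMA) for _ in range(N)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    # crude significance filter: "publish" only if |t| > 2 (roughly p < 0.05)
    if abs(mean / se) > 2:
        published.append(mean)

print("true effect:        ", TRUE_EFFECT)
print("mean published est.:", round(statistics.fmean(published), 3))
# The published mean comes out well above the true effect, because the
# filter mostly keeps the studies that happened to overshoot.
```

With these numbers the per-study power is low, so the significance filter selects almost exclusively for lucky overestimates: knowing the filter exists tells you which way to adjust, which is exactly the "predictable systematic tendency" being asked about.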
I think his target audience here is the {PHP,Java,C++,etc}-sucks crowd. Don't believe you know the global consensus on any technology just because the vocal minority (places like HN) seem to have a consensus. He even closes the article with the following:
> Your time may be better spent getting in there and trying things rather than reading about what other people think.
> He even closes the article with the following:
>
>> Your time may be better spent getting in there and trying things rather than reading about what other people think.
But collectively, those people have spent a lot more time on the technologies, in many more situations, than I have used them or am likely to use them. How am I better off ignoring them and throwing away data? How does listening to them make me worse off?
When I was looking at statistics languages, I didn't spend a year trying Stata, a year trying SAS, a year trying Julia, a year trying Matlab, a year trying pandas+Python, a year trying R; I just looked at what people were using and blogging about, and their opinions of them, and picked R. How would I have been better off ignoring all of the community discussions and picking one on my own? What systematic tendency causes the discussions to be literally worse than random noise?
The passage you quote doesn't say "worse than random noise", it says "worse than trying it out". I'm not sure that's true but it is a substantially weaker claim.
>But without any way of correcting for this silent majority, or any sort of predictable systematic tendency, so what? We don't know everything? There's a lot of things we don't know, and on most topics we hear from only a small fraction of people, both expert and otherwise. Why is this worth pointing out?
Because it tells you that the more vocal people are usually full of BS, and you should take notice of what the silent experts and doers-of-stuff practice. Who said there's no way of "correcting for this silent majority"?
> Because it tells you that the more vocal people are usually full of BS
How's that? All the more vocal people tell you is that they're... more vocal. Where is the evidence that the more vocal people are correlated with systematically being wrong in a predictable and correctable way?
> you should take notice of what the silent experts and doers-of-stuff practice.
And how do you do that when they are silent?
> Who said there's no way of "correcting for this silent majority"?
>All the more vocal people tell you is that they're... more vocal. Where is the evidence that the more vocal people are correlated with systematically being wrong in a predictable and correctable way?
I'm sorry, if you want a verifiable proof with papers and references I don't have one for you. A lot of things in life don't have such proofs.
But my empirical experience has been that tons of people with no idea what they are doing are ranting on blogs and tutorials and such, whereas the hardcore programmers I know are silently getting far more impressive shit done.
Put it this way: how many people on the iOS, Android, or Windows Phone teams blog, versus those merely working on the respective platform?
>And how do you do that when they are silent?
Watch them work, work with them, or fucking talk to them. That they don't write blog posts like superstar primadonnas, and that they don't give talks at conferences and rant on HN, doesn't mean that they are impossible to find or unable to speak when found.
> I'm sorry, if you want a verifiable proof with papers and references I don't have one for you. A lot of things in life don't have such proofs. But my empirical experience has been that tons of people with no idea what they are doing are ranting on blogs and tutorials and such, whereas the hardcore programmers I know are silently getting far more impressive shit done.
It's a common argument - "Lots of people use this but don't talk about it! It still matters!". Common Lispers sometimes say that (but we talk a lot. ;-) )
On the other hand, you can't make decisions without data - without discernible activity, it's reasonable to assume a community is dead.
But the non-expert bias online is systematic. Time spent participating online is time not spent honing your craft, and so there should be a strong inverse correlation between posting frequency and expertise.
There are real, pragmatic ways to correct for this bias and get more useful information out of online forums, too. For example:
1.) Look for people who have a solid, independently-verifiable track record who are now just starting ventures that need publicity. For example, Marc Andreessen had an absolutely awesome weblog in the ~1 year prior to founding Andreessen Horowitz, but now his comments are largely limited to snarky one-liners and occasionally insightful one-paragraphers. Why? There's no incentive for him to spread his knowledge around the general public; his firm already has enough of a reputation to draw the top potential founders. The entrepreneurs he funds get his advice, but everybody else has to make do with occasional soundbites.
2.) Look for people who post only brief, offhand comments, but then follow up on those comments and do the research yourself. Many "silent experts" may have time in between compile breaks to toss off a throwaway comment or correction, but they don't have time to write a long missive. However, if you follow up yourself and do a bit of Googling, you can take their clues and learn what they were talking about. This is how I found out about Haskell, it's how I found out about writing scalable event-driven servers, and it's how I found out about writing multi-language systems where a scripting language is embedded inside a larger program.
3.) Look for people who can see & acknowledge both sides of an issue. Practical experience teaches you about trade-offs, it teaches you about alternatives, and it teaches you that there are often multiple solutions and oftentimes you need to give up some desirable properties to get others. Blog posts by Internet Fanboyz teach you that there is One True Way Of Doing Everything that will solve all your problems, because that is the only way they've ever encountered.
4.) Similarly, look for people who stay out of flamewars. Folks with real jobs who care about their craft don't have time for that shit, because becoming an expert takes a lot of time. So the folks who do have time for that shit are generally either folks without jobs or folks who blow off their jobs to score points on the Internet.
5.) And perhaps most effectively - work directly with an expert. Start contributing to open-source and understand why the maintainers make the choices they do. Take a job at a well-respected company. Work with the gruff neckbeard at your employer. When experienced programmers have to clean up the messes you make, they have a very strong vested interest in not letting you make any messes.
"Time spent participating online is time not spent honing your craft, and so there should be a strong inverse correlation between posting frequency and expertise."
Anecdotally, this is absolutely wrong. I can think of almost no one who participates online who isn't better than most people who don't participate online.
Maybe it's an issue of averages, and at the edges this is true - that the average "superstar" spends less time online, but that the average "OK" person does spend time online.
But I can certainly say, the silent majority of programmers, the ones who don't take part in anything except just focusing on their work, are almost always worse. I've seen this time and time and time again.
I think your perspective may be warped by two things: not having worked with excellent engineers in person, and only considering relatively famous devs in your set of people who "participate online" - e.g. you're excluding the denizens of the last 80% of pages on StackOverflow.
One of the most capable engineers I've ever worked with is a guy called Yooichi Tagawa. The guy has an incredible appetite for complexity, as well as for spooling up on old codebases and new technologies. But you'll find very little by him online, both because he's Japanese and doesn't use English often, and also because he's squirrelled away inside Embarcadero, working on the Delphi compiler as he's been doing for the past 15 years or so.
It's certainly possible. And there are definitely a great many devs who don't participate online but are brilliant, don't get me wrong.
But I still think the average person who does participate online is better than the average person who doesn't.
In retrospect, what bothered me was the implication that participating online is detrimental, when I think it's just the opposite, if for no other reason than that people who do participate online tend to care more about programming than people who don't, and are therefore better.
Also, availability cuts both ways. Don't think about the amazing devs you've worked with, think of the averages. Trust me, I've worked with people of wildly, wildly varying ability, I know what's out there. Both on the amazing side, and on the "can't code worth a damn" side.
The problem is that you're missing a huge swath of deep (as in, learned over decades of practical work) knowledge that exists within the community of engineers -- not just programmers, but all types of engineers -- in the 40+ age range, who have been working, learning, and applying in industry since before the social/online boom, and may well be uncomfortable participating in the same way the current generation tends to. Generally speaking, and I know this is an unfair generalization, that generation tended to approach problems with "let's reduce this to its core pieces and see what we know, then experiment until we understand it well enough to solve the macro-problem." In other words, the scientific method. My experience with folks who've entered the market in the past 5-10 years is that, when they first receive a problem, their inclination is to search the web for possible solutions and then hack other people's code to address the easy 80% of the issue, while often completely ignoring, or simply misunderstanding, the harder 20%. This kind of behavior is one of the reasons company culture is critical, as is employing senior, experienced engineers and technical managers who focus at least partly on engendering critical thinking and problem-solving skills in their younger teammates.
Point of reference: I manage a team of about 100 programmers, analysts and DBAs in India, Mexico, Brazil, Scotland, and the US.
I think I'd probably agree that the average person who participates online is more skilled than the average person who doesn't. Participating online exposes you to other perspectives and lets you test your experiences against others, after all.
And I wouldn't say that participating online is detrimental. However, I stand by my contention that participating online is not the most effective use of your time if you want to become a better programmer. Programming is. Participating online is a more effective use of time than, say, watching TV (which a fair number of average devs would do), but what you learn from grappling with a real problem and trying to solve it with software far exceeds the information density available in blogs and Hacker News comments.
I can think of such people. In fact, i don't think any of the really excellent people i've worked with blog, or spend much time in forums, or use Twitter for technical stuff. Some are involved with local meetups. The rest are just getting on with things.
On the other hand, i routinely come across blog posts or HN comments by people who clearly have much less expertise than them.
> Time spent participating online is time not spent honing your craft, and so there should be a strong inverse correlation between posting frequency and expertise.
I disagree. Many people (including myself) have observed that qui docet discit - he who teaches, learns - and there is little better way to teach yourself than to discuss and debate and work with other people. This undercuts your claimed strong inverse correlation, and the rest of your suggestions simply become ways of finding additional information, not corrections for any systematic bias in the available experts.
> there is little better way to teach yourself than to discuss and debate and work with other people
Not the OP. But I know a decent number of people of the sort he mentions. These people do actually discuss and debate stuff with other people, but often in person or in rather specialized forums. They don't have a solid presence online and almost never show up in general forums.