Fascinating use of "Big Data" to cut through the bullshit. Wonder if it will change anything. I suspect the "tough" interview plays well into a company's PR.
>I suspect the "tough" interview plays well into a company's PR.
from: "The trick Max Levchin used to hire the best engineers at PayPal"
Levchin realized the best engineers wanted to be challenged both in their jobs and in the interview process. “We cultivated a very public culture of being incredibly hard to get in. Even though it was actually very hard to get good people to even interview, we made a point of broadcasting that it's incredibly hard to even so much as get into the door at PayPal. You have to be IQ of 190 to begin with, and then you have to be an amazing coder, and then five other requirements. The really, really smart people looked at it and said, "That's a challenge. I'm going to go interview there just to prove to these suckers that I'm better." Of course, by end of the conversation, I'm like, "Maybe you want to come get a job here because you're pretty amazing.”
Looking for people with high IQs and the right background is basically a waste of time. There are plenty of ways to define IQ, but 160 is around 1 in 30,000, and there are only something like ~200 people at that level graduating high school each year. If 5 percent of them study programming, you're looking at, say, 10 new genius programmers every year. And plenty of them avoid SV for reasons as simple as the limited dating pool.
The simple truth is large companies end up with a few geniuses randomly, but they're rare enough not to be worth optimizing for. What companies really want are people willing to work ridiculously hard for little reason, and that's what 'hard' interviews are optimized for. If you're willing to put up with hours of BS on the off chance we will hire you, great, let's just see how you like 60h+ weeks.
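Rough version of that arithmetic in Python (every input below is a ballpark assumption, not a real figure):

    import math

    grads_per_year = 3.5e6   # assumed US high-school graduates per year
    # probability of 4 SD above the mean on a normal curve (mean 100, SD 15): ~1 in 30,000
    rarity_160 = 0.5 * math.erfc((160 - 100) / 15 / math.sqrt(2))
    study_programming = 0.05 # assumed fraction who go into programming

    per_year = grads_per_year * rarity_160
    print(f"~{per_year:.0f} graduates per year at 160+")                  # on the order of 100
    print(f"~{per_year * study_programming:.0f} of them go into programming")  # single digits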
My dad was part of a company in Boston that only had employees with an IQ of 140 and up. I asked him how it went and he laughed and said it naturally fell into ruins. Key point: there's a lot more to employees than just quantifiable numbers like GPA, IQ and such.
To add: genius isn't even a predictor of success. Granted, intelligence is a useful tool, but it's only one of many things that govern success. Sometimes "genius" can be an impediment to success.
Humans are hive minded despite our best attempts to assert otherwise. A modern computer or a modern bridge can't be built from scratch by one person. A genius can certainly help raise the bar, but a cohesive unit capable of achieving the end result is going to be a better optimization strategy.
IQ by definition fits the bell curve. A given normalized IQ test is only good for a given range and time period, and when incorrectly interpreted outside of that range it will produce an excess of people with high IQs. Also, a test that's accurate to +/- 10 IQ points is going to bump more people from 150 to 160 than drop people from 160 to 150, simply because there are more people at 150 than at 160. Not to mention the tendency for people to pick the highest score vs the average.
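A quick simulation makes that asymmetry concrete (assumed numbers: true scores N(100, 15), test error with SD ~10, one test per person):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000_000
    true_iq = rng.normal(100, 15, n)           # assumed true distribution
    measured = true_iq + rng.normal(0, 10, n)  # assumed measurement error

    truly_over_160 = (true_iq >= 160).sum()
    scored_over_160 = (measured >= 160).sum()
    # Far more people get bumped up past 160 than fall below it, because there are
    # many more true-150s than true-160s; the measured count comes out roughly 10x larger.
    print(truly_over_160, scored_over_160)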
Yes, you can define a concept as a normal distribution centered at 100 with a standard deviation of 15. But if you claim that this concept can be measured with IQ tests, and then the observed distribution of IQ scores doesn't match the predicted curve - it's the curve that's wrong, not the tests.
The studies that the "fat tail" results were from all used the Stanford Binet L-M test, which has a ceiling of around 230.
Sit back and think about this for a second: if you define IQ in some other fashion, you can't really have multiple IQ tests, just the score on one specific test. As to having problems with a specific test, just because a specific laser range finder has issues does not mean we need to redefine the inch.
PS: There is no way they had enough data to support an IQ range up to 230. Do you have any idea how many people you need to sample to have 50 people with an IQ between 205 and 220?
Do you have evidence for that? I've worked on cognitive ability testing as it related to workplace performance, and have never seen anything that deviated dramatically from a normal distribution, especially at the high end.
Google [iq fat tails] and there're a bunch of articles on it (and one comment I wrote here about 4 years ago). The original data source for most of the articles is Terman's 1921 study of high-IQ people; they've plotted out the observed frequency of Terman's data against a normal distribution and found that it deviated markedly after about 3-4 SD.
Some additional Googling seems to have found some other independent studies:
I'm curious what sort of population your work draws from. The results above showed that IQ follows a normal distribution until about 140; other papers I've read indicate that IQ correlates with life outcomes until an IQ of about 140, and then appears completely uncorrelated. If you're studying workplace performance, I wouldn't be surprised if a good fraction of high-IQ people simply aren't in the workplace. (See eg. Christopher Langan.)
The citation I was thinking of was from Daniel Goleman's Working with Emotional Intelligence, where one of the findings presented was that a moderately high IQ (usually in the 120-130 range) is often a prerequisite for entering a demanding profession like doctor, lawyer, or computer programmer, but continued success in the field depends more upon emotional skills like confidence, perseverance, resilience, social skills, and leadership.
With a bit of Googling, I've found some other support for this, including the Terman study:
"Our conclusion is that for subjects brought up under present-day educational regimes, excess in IQ above 140 or 150 adds little to one's achievement in the early adult years." - Louis Terman, 39th Yearbook of the National Society for the Study of Education Part I, pp. 83-84
> but continued success in the field depends more upon emotional skills like confidence, perseverance, resilience, social skills, and leadership.
Eh. That simply sounds like the correlation weakens a bit, but is far from the claims people make like 'IQ is irrelevant'.
> "Our conclusion is that for subjects brought up under present-day educational regimes, excess in IQ above 140 or 150 adds little to one's achievement in the early adult years." - Louis Terman, 39th Yearbook of the National Society for the Study of Education Part I, pp. 83-84
Terman may have thought so, but with the full dataset this is clearly not so. Check out http://www.iza.org/conference_files/CoNoCoSk2011/gensowski_m... 'The Effects of Education, Personality, and IQ on Earnings of High-Ability Men', Gensowski et al 2011; IQ never stops mattering, even if personality factors start to matter more.
(Always funny how people can look at a study which goes something like 'X correlates .4 and Y correlates .3, but in the top 1% by X, the correlations are .3 and .4 respectively' and go 'X doesn't matter!' Says something about what they want to believe about X, I think.)
Just remember that the distribution of the test results depends on the tool itself as well as testing conditions. So I would say that such anomalies are rather the problem with the measurement than characteristic of the population.
I have to agree with this at some level. I have seen startups trying to pull this type of crap. I haven't figured out why, it could be because they are trying to create "elite culture" that just doesn't fit or they are trying to seem more special so you will put up with BS and 60h+ weeks.
Dunning Kruger does not mean that unskilled people think they do better than skilled people. On average, the more competent people at any skill will rate their abilities higher than the less competent people. See this graph[0] from the original study[1] for a better explanation.
I meant it the other way round: That some smart people won't have that "I show 'em" attitude and won't even try because they think they are too bad anyways.
Earlier in the interview he talks about preferring to avoid false positives even at the cost of false negatives, so this practice does fit into their philosophy.
You see yourself as a "super-amazingly competent genius engineer" yet lack self esteem?
What would stop you from applying at a company with a hiring PR strategy as described by Levchin? They aren't looking for confidence in all fields of life, just the technical job related part...
What's the point of hiring a super-amazingly competent genius engineer if he's going to sit in a corner and think he's wrong all the time? If you don't offer your genius to the team, you might as well not have it.
And no, parceling out work for you to go and excel at on your own is likely not a good use of your skill.
I'm sure loads of amazing projects throughout history have been pulled off with great assistance from amazingly competent people who were hesitant about their abilities. It'd be interesting to find out if more teams are successful with arrogant or humble people. Perhaps you're just more likely to hear about the successes of arrogant people thanks to their loud trumpeting about themselves.
What a pile of bullying bullshit. If someone is skilled, they are skilled completely apart from whether they are constantly advertising themselves and putting other people down as worse than them.
What's the point of hiring a super-amazingly competent genius engineer if he's going to sit in a corner and think he's wrong all the time?
Think of it this way: someone who is concerned with not doing the wrong thing is going to be more sure that what they do come up with is the right thing, that they aren't reinventing the wheel badly.
Yes, but someone who is paralyzed by fear of doing the wrong thing is never going to invent the wheel, forget about whether or not it has actually been invented yet.
imo such nice people do well by depending on nice friends who can introduce them to competent, well meaning teams (I know a few here and do my best to play matchmaker)
Btw I noticed you like C# (from your profile) so I automatically like you haha :)
That seems to be a good thing. They get together after lots of rejections and start a sleepy little company that goes on to impress the world. Either that or they stay in academia. (e.g. Niklaus Wirth)
This story is a great example of what happens if you mix up employer branding with recruitment process. Attracting talent is one thing, while defining what "talent" actually means for your company (in terms of competencies required for the job) and how to measure it - now that's a completely different story. In an extreme case you can end up with ultra-smart, over-qualified people who will be disappointed by the boring, uninspiring every-day tasks they're assigned to. The goal of recruitment is not to get "the best people", but to get the people who will consistently deliver business results.
I've seen this misconception twice on the thread now, so I'm calling it out: IQ is not normally distributed. It has fat tails: the extreme ends of the distribution are much more common than a normal distribution would suggest. Some quick Googling indicated that Terman's data on high-IQ people (1921) showed IQs at 4 SDs (160ish) are about 15x more common than a Gaussian would predict, while at 5+ SDs (175-200ish) they can be up to 1000x more common than a Gaussian.
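For comparison, here is what a pure Gaussian (mean 100, SD 15) predicts at those cutoffs, as a quick sketch:

    import math

    def one_in(iq):
        # Gaussian upper-tail frequency for a score, mean 100, SD 15
        z = (iq - 100) / 15
        p = 0.5 * math.erfc(z / math.sqrt(2))
        return 1 / p

    print(f"IQ 160 (4 SD): about 1 in {one_in(160):,.0f}")   # ~1 in 31,600
    print(f"IQ 175 (5 SD): about 1 in {one_in(175):,.0f}")   # ~1 in 3.5 million
    # A 15x excess at 4 SD would mean roughly 1 in 2,100;
    # a 1000x excess at 5 SD, roughly 1 in 3,500.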
IQ is actually normally distributed by definition. Actual intelligence, if such a thing exists, may not be. The reason for Terman's finding is that there aren't any IQ tests that are valid for people with extremely high IQ.
Then you get into "What's the definition of IQ?" You could argue that IQ is a theoretical construct that's defined to average 100 with a standard deviation of 15 - but then, if you can't measure it with any tests, and when you do try to measure it the tests come up with different numbers, what's the point?
In my physics courses, my professors were always very careful to stress that "If the theory says one thing and the data says another, it's the theory that needs to change." (Well, unless it's a student lab report that measures the speed of light as different from the commonly accepted value. ;-))
I agree with what you're saying, my point is just that all of the current IQ tests (including the ones Terman used) were not designed to be measures. That is, someone with an IQ of 200 isn't twice as smart as someone with an IQ of 100, which is what the term 'measurement' implies. (C.f. the book Measurement In Psychology, which was recommended by tokenadult a while ago.) Rather, they are designed to compare people relative to one another. In other words, regardless of whether or not there is some underlying thing called IQ, no one (to the best of my knowledge) has ever tried to measure it.
Big data is a misnomer. Complex data is a better description. Having a terabyte of simple data with 2 columns is really not that difficult to analyze and won't give you much information. Whereas having a few hundred MB of data with complex relationships and many dimensions can yield tons of information and is far more difficult to analyze.
Difficulty in "big data" should be about its horizontal breadth (covering many aspects of a system) rather than its vertical depth (covering one aspect of a system in great resolution).
The devil is in the details. Big Data is really a massive cluster of VMs running maxed out Excel spreadsheets, and instrumented to restart automatically and restore from redundant backup, a la RAID, when the Excel process crashes one of the Windows VMs.
Current limit is 1,000,000 rows per worksheet. However there is a tool called PowerPivot which lets you get around that limit and do analysis on larger data sets.
He's saying his file has way more rows than that, so maybe they upped the limit in the more recent versions of excel? (I think he also wrote a bunch of VBA and hooked into some external systems too)
True, the point is that many people writing these stories cannot really tell (or care about) the difference. "Big data" is a sexy term, so they go with it regardless of whether it's actually relevant.
Most of the people here do, so these comments are really preaching in the wrong place...
Probably not, but the year that I joined they had processed a million resumes. So they probably have some level of data (ranging from phone screen only to on-site interview) on anywhere from 8 to 12 million engineering candidates. For the folks who have come on site there might be 5-8k words of text in their file; for phone screens, probably less than 1k, depending on whether they include a code sample or not. Most of the folks they processed at the time didn't get to on-site interviews, so it probably skews to the lower end.
It's "not" big data in the sense that it needs a cluster to process, but it is a pretty large sample set of the current population of engineers who might want to work there.
They've reported receiving 1million applications per year. If even a fraction of those get interviewed (with 1-5 interviews per candidate) that's a good chunk of data. Correlate that with regular performance reviews of 30k employees... I'd say that's a small Big Data problem.
He's not talking about 30k rows, he's talking about 30k people. It could easily be big data if you monitor & document their every working moment, but they probably aren't doing that so you're probably right.
1 million applications received. Say 10% of those go into some sort of evaluation process = 100k assessments/year.
Say 10% of those go through an interview panel of (on average 3 interviews) = 30k assessments/year
For 30k employees with (say on average) 2 assessments per year = 60k assessments/year.
So 1 million CVs per year on which to do some sort of evaluation, and 200k individual assessments per year. Over the past five years that's roughly 6 million data points.
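The same arithmetic as a quick sanity check (the percentages are the assumptions above):

    cvs_per_year = 1_000_000
    evaluated = cvs_per_year * 0.10            # 100k screening assessments
    interviewed = evaluated * 0.10 * 3         # 30k candidates x 3 panel interviews
    employee_reviews = 30_000 * 2              # 60k performance reviews
    per_year = evaluated + interviewed + employee_reviews
    print(per_year)                            # ~190k assessments/year
    print(5 * (cvs_per_year + per_year))       # ~6 million data points over 5 years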
Since there's no hard-and-fast rule on this, that's why I called it small Big Data.
Definitely not. And it's a good example of how useful POD ("plain old data") can be. They ask 6 team members 18 questions about what they think of their boss and give those 108 datapoints to her and it's tremendously valuable.
I find the use of the term "Big Data" there bullshit. Even for the largest company like Walmart with 2 million employees - having some data about every one is hardly "big". Collect a whole deluge of data about each and you hardly fill a USB drive.
I realize that reporters like to throw buzzwords into anything to cater to the "simpler" readers. But come on, this is outright silly.
I don't know, I'm starting to change my opinion of this. I used to think 'big data' meant anything that didn't fit easily into a RDBMS. At least petabytes.
But, more recently, in conversations with non-programmers, I see that 'big data' to them means 'broad data' - it means trying to track everything possible and make sense of it. The average business user is really excited to be able to cross-relate disparate types of data - in an effort to make things better. 'Big data' enables breaking down silos and cross-referencing. It's about making empirical decisions based on data rather than opinion or intuition. That's really good, in my opinion.
So, 'big data' in that way is more amorphous than just the size of the data. With services and networks, the question becomes where does the data begin and end? Big data is potentially everything.
> I hate it when terms start morphing into unrelated interpretations by means of public drift.
That's how language has always worked. There are people who are still uptight about the current "misuse" of words like "awesome" and "hopeful", but those of us who grew up with different meanings in common usage mostly just shrug.
For technical terms, usually I can live with words having domain-specific meaning that differs from common usage, but "theory" is one I still can't get over. It causes too much miscommunication.
Walmart's DW was 2.5 petabytes in 2008; undoubtedly larger now. Rumor had it that they were storing every line item from every POS receipt since the early 90s, but they probably don't have all that data online. I would think it needs to contain POS data, SKU inventory and sales at every store and distribution center, tracking of vendors, orders, shipments, truck logistics, etc. Even weather reports (remember how they predicted Poptarts would sell more when hurricanes were forecast?).
eBay has a 9 petabyte DW that cuts across all of the types of data on their whole site: listings, bids, feedback, categories, clicks, etc.
Sometimes big data is actually big data, both in terms of raw size as well as complexity.
I asked these questions until a year ago. Brain teasers were never ok. They are defined as things that require a single insight and/or domain knowledge and could be communicated in a few seconds.
"Monopoly" is a perfect example of a brain teaser and anyone using that would be treated pretty harshly by the committee that reviews interview feedback. Estimate questions are not: there's no expectation of a "right answer" and the important fact is the working.
Some programming questions border on brain teasers to non programmers but that doesn't matter because you are asking programmers and again, it's the working that matters.
Finally different roles get different types of interviews. I worked in PM and there were analytical (these questions), product (design a better x) and technical (basic engineering interviews). The behavioral type was not one I encountered in PM or eng but maybe used elsewhere.
Are we thinking of the same "Monopoly" question? The one where you ask them to sketch out how they would program the game Monopoly? Because that's a lot more like an estimation question than a single-insight brain-teaser.
Not really surprising at all. A huge amount of selection has already taken place by the time you get to the interview, and most of that selection is based either on grades or on other metrics that measure similar traits. The “low-GPA” individuals who get an interview are therefore very different from the general population of “low-GPA” individuals, and one expects there to be little or no correlation between GPA and success (if anything, I would expect negative correlation, as low-GPA individuals who get interviews will have something else going on that got them there).
This is almost exactly the same effect as the fact that SATs do not predict college GPA. SAT scores are used as one of the major factors for admission to colleges; having separated the students into cohorts based (partially) on SAT, it is completely expected that SAT scores have minimal correlation with grades assigned within each cohort.
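A toy simulation of that range-restriction effect (illustrative assumptions: a single aptitude factor drives both SAT and GPA, and only the top 10% by SAT are admitted):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    ability = rng.normal(size=n)
    sat = ability + rng.normal(size=n)   # SAT = aptitude + test noise
    gpa = ability + rng.normal(size=n)   # GPA = aptitude + different noise

    # Full-population correlation is ~0.5
    print(np.corrcoef(sat, gpa)[0, 1])
    # Within the admitted cohort (top 10% by SAT) it drops to a fraction of that
    admitted = sat > np.quantile(sat, 0.9)
    print(np.corrcoef(sat[admitted], gpa[admitted])[0, 1])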
The sorting continues in college; SAT scores are most predictive for the first year, before less-capable students switch out of the hard majors. My guess is that nearly all of the engineers Google hires score very highly, and in that sense the SAT is indeed predictive of success there (since you can take it in middle/high school), but they might have determined that discriminating on the high end between differences of <1 SD (about the validity between retakes) might not tell them much.
Actually they say they're giving less weight to them (historically Google has weighted them very heavily) which is significantly different from "test scores are worthless".
You've skipped over the original NYT article, which the qz.com article being linked to here quotes from. He does say they're worthless. Here's the complete quote:
One of the things we’ve seen from all our data crunching is that G.P.A.’s are worthless as a criteria for hiring, and test scores are worthless — no correlation at all except for brand-new college grads, where there’s a slight correlation
This chimes with my understanding and experience that Google only really use test scores and GPA's right now as a filter to manage the vast number of internship/entry-level applicants they get.
Makes sense, most companies only care about university grades for first jobs (because there's very little else to distinguish candidates at that stage).
I quit my job after 5 years as a programmer for my Masters. I must say, just working on one small class project with any candidate can show you a world of difference. I think it's the experience and my personal interest that drive me towards worrying more about actually learning stuff instead of scoring grades. Getting a grade is more about identifying what a professor expects and giving it to them. There are many students who slack off in team projects. And, the worst part, the slackers spend a lot of time applying and preparing specifically for interviews, so as I see it, the ones who get the "best" jobs are generally the ones I would never hire if I was looking for candidates for my company.
On the bright side, this article will boost the self-esteem of the 100s of thousands of devs like me who didn't get a job offer. "I'm not stupid, their process was stupid!"
It shows several stories at once, not just the google brainteasers stories.
2. In general, people would rather read the real article instead of a summary of the article. When someone submits a summary of an article to a site like HN or reddit, it is usually flagged as blog-spam because we'd rather read/support the original content than a summary with questionable value.
3. For long form articles, nothing beats reading the print-preview page to get rid of all the sidebars, comments, ads. Look at the print preview page: it is not possible to be more distraction-free than that. Any other format has more distractions.
Even aside from that, the New York Times has some of the best information architecture in the business. These are the guys who did NYTProf. Their web team is awesome.
4.2: I see a vertical scroll bar in the middle of my screen on Firefox.
4.3 The black header bar which is fixed and stays on the screen all the time even though it conveys no useful information to me.
4.4: A bunch of text blurbs on the left side of the screen that convey no useful information to me.
You say you're trying to be as distraction free as possible, but that's not actually true because it isn't possible to have your business model and be as distraction free as possible. The print preview page is as distraction free as possible.
1. ... It shows several stories at once, not just the google brainteasers stories.
It shows one article initially. It will load the next one as you scroll down and approach the end. This is not counted as a Page View unless you actually continue down into it (you'll notice the URL change at that point).
2. In general, people would rather read the real article
instead of a summary of the article.
I would argue that this is a "real article". The NYT piece was 8 questions and answers. This article is based on just one of those questions - and expands on it. I'm not an editor/writer so I'll avoid going deeper, but that's my take-away.
3. For long form articles, nothing beats reading the print-preview page...
Tru dat.
Even aside from that, the New York Times has some of the best
information architecture in the business. These are the guys
who did NYTProf. Their web team is awesome.
I used to work there :)
4. Some visual issues I had with quartz:
4.1: No left/right whitespace around images.
The Featured Image (between Headlines and Text) is meant to be full-width to a max. Inline images should have left/right whitespace
4.2: I see a vertical scroll bar in the middle of my screen on Firefox.
Can you email me a screenshot (email in profile)? There are a few Firefox specific bugs we're working on this week. This may be one of them.
4.3 The black header bar which is fixed and stays on the screen
all the time even though it conveys no useful information to me.
True. Intentional. It can be expanded, which reveals the large site map. There are big pros and cons to hiding it. It's an ongoing conversation.
However we used to have it disappear altogether and people complained about that too....
4.4: A bunch of text blurbs on the left side of the screen
that convey no useful information to me.
It's a list of Headlines - that's all that is meant to be conveyed.
You say you're trying to be as distraction free as possible,
but that's not actually true because it isn't possible to
have your business model and be as distraction free as possible.
The print preview page is as distraction free as possible.
I'm confused. That doesn't make much sense to me. Yes, I am saying that we intend to be "distraction free as possible" - I'm not sure that I have to add a big asterisk * that covers "within the confines of an ad-based business model" any more than I should also add "within the confines of a browser running a web site that's not a book" - I'm not trying to be snarky, it's just hard to know what to make of what you said exactly.
Also - take a look at the ads... do we have them all over the place? Nope - we have them at the end of an Article - not in-between, not embedded, not inline. That's important.
We are not perfect, but we aspire to continuously improve. Focus is on the user and the reading experience but with recognition that we have to pay the bills for 20 or so editors and journalists across five (maybe more?) countries. (I'm not counting devs, sales, hr etc in that)
2 - On some pages (with full-screen window), some or all of the thumb of the scroll bar is hidden on my browser under the black bar.
3 - As I drag the scrollbar down the size of the page jumps so my mouse pointer is no longer on the thumb.
4 - Sometimes I use the space bar to scroll because my hands are on the keyboard, not on the mouse. It doesn't work unless I click first.
5 - I often tile windows on my machine. This window is exactly width of the left-hand half of my screen (2011 MBP). Because of the responsive design I get a ToC when I visit the home page, which is annoying. It feels like the developers said "all users will have maximised windows".
Other frustrations I've had in the past but can't remember now, perhaps it was changed since last time I tried to use it on the desktop. I just get the feeling that (and Quartz isn't alone in this) the developers tried to re-implement functionality and didn't do it well enough to be worth it.
The fixed header is a complete waste of space. Why is it necessary? It's frames all over again, it's like a web design straight out of 1998. Also, is the right-hand sidebar supposed to contain ads? I can't think of another purpose for it, but I don't feel like turning adblock off for long enough to find out...
There is no sidebar to the right. The site expands to 1400px. After that there isn't anything meaningful to place on the right side (other than for the sake of doing so - but that's just clutter).
Well for starters, there is a huge fixed banner at the top of the page, and a fixed sidebar. If you are trying to read the page in a window that is not maximised, then you really lose a lot of real estate.
I don't know about cruft, but I just don't like it.
I'm going to try to be a little bit more helpful. At first I didn't really know why I felt a bit uncomfortable on the page, but I think it comes from feeling a bit lost. My attempts to try to understand why:
1. The pictures don't have any borders, so at first I think a part of the picture is hidden outside the window. I widen the window to see the whole picture, but instead of revealing any missing part the picture just gets bigger.
2. If I continue to increase the size of the window until the picture stops scaling, I eventually see the grey area to the right, which gives me the feeling of looking behind the coulisses at a theater. I shouldn't be there and see that. Contributing to that might be the lack of shadow on the right side, which makes that border flat compared to the left side.
3. I scroll down to gauge the length of the article before I start reading, and it just continues to scroll, and I realize after a while that I'm in some other article and have scrolled through a whole bunch of them. The boundaries between the articles are very weak compared with the pictures and in particular the thick black fields with the captions.
My personal pet peeve... A huuuuuuuuge banner image pushes the article text below the fold on my 13" rMBP on default display settings in a maximized (by height and width, grr OSX) Chrome window.
Maybe I'm a bit irrational about this, I know scrolling isn't hard, but I came to read your text, dammit.
Edit:
Also, seriously... HUGE thanks for asking!
Edit 2:
Scrolling down I'm noticing that your image height is actually bigger than the height of the viewport (minus the height of your top bar). Even if you disagree with the idea that the image shouldn't push the text below the fold, I hope you agree that the image should at least fit my viewport.
If you want to make it the editor's choice, build a feature that at least gives them visibility into what they're doing. Worst case, just load an article preview in a bunch of fixed-size iframes which match the viewport size of common browsers. Better but more expensive, cobble together a browser farm (getting cheaper now thanks to modern.ie images). Either way, make it a prominent part of the editorial process.
Edit: I think the reason it bugs me is that text is the primary value your site provides. The image is ambiance. Ambiance that blocks me from the value I seek goes from tasteful to gaudy real fast. I get the desire to let editors be expressive, but if you're in a situation where you're forced to prioritize, always prioritize the thing that brought the visitor to your site in the first place. Otherwise you might not have need for the editor at all.
Just to balance out the negative comments, I actually really like Quartz.
- Scrolling down into the next article and having the article list as a sidebar is a neat way to encourage exploration. It makes the site a bit more "sticky".
- I like that ads are unobtrusive and placed at the end of the articles. And unlike nytimes.com, there's no paywall.
- I find the site's overall design crisp and relatively uncluttered compared to most news sites.
On my iPhone, Chrome hides the browser location and other buttons when you start scrolling down. Scrolling up brings them back into view.
On that article, I was completely unable to bring the location/page-controls back into view -- I was stuck on that page, trying to figure out how to escape -- until I clicked a link - then the controls showed up while it was loading the new content.
I say cruft because I'm reading on mobile. I've got a Galaxy Note 2, and I'm using Chrome. It's hardly a slouch of a machine.
First, after clicking the link, I have to watch your spinner for ~5 seconds. I'm on 65Mbps ADSL - just how much content are you sending me?
Second, the sheer weight of the JavaScript slows everything down. My browser is perfectly capable of scrolling through a page - yet because you've overloaded that, the scrolling is slow, jumpy, and the text renders poorly.
Thirdly, the Note has a huge screen, so I don't mind your static header. If I had a small screen it would piss me off.
Finally, when I switch from Portrait to Landscape, your page jumps all over the place.
Now, compare that to the Mobile NYT page. Yours looks more beautiful, but the NYT is quicker, easier to read, and doesn't get in my way.
You have great content - and an interesting product - but it needs to go on a diet and be user tested on a wider range of devices. I dread to think how it performs on low end phones.
huge fixed banner at the top of the page, and a fixed sidebar.
I see what you mean - but I don't think I'd call that "cruft", though that's my opinion. That's the Navigation Bar at the top, and the sidebar is a Queue of the Articles in the feed. These are basic page elements.
We get feedback specifically calling this out as good and relatively distraction-free (hence your comment struck me as odd)
Also - the width of the page dedicated to text/images is the same as the NYT mobile site (approx 600px)
But the navigation bar doesn't serve any purpose to me, at least not floating. It has your logo, which I care about, but not once I'm deeper into the article.
It has a search field I will never use. It has social media buttons I will never use (because if I wanted to share this I would just copy/paste the URL). It has a "more" button that I will never use also.
The nav bar is fine at the very top of your page, making it float and scroll with me is just a waste of space.
Floating UI elements should be reserved for critical functionality that's core to the site. Facebook gets a pass because the things on the floating nav bar are actually important. If you really must float it, consider floating to the side - most of us on modern laptops have an abundance of horizontal space but not a great deal of vertical space.
Some more feedback:
- The organization of content is confusing. Just scrolling casually I cannot immediately tell where one post ends and the next begins. The large images aren't a good indicator, since some of them appear to be ads. I shouldn't have to read the tail sentence of something just to determine if I'm looking at the end of a post or not.
- There is a mismatch of expectations. When I go to a link that clearly refers to a single piece of content, I do not expect to keep scrolling and go right into a completely different piece of content.
- I find the photos oversized for their purpose. In this particular piece you have generic-stock-photo-of-Google-employees, which is only superficially related to the topic at hand. It doesn't have any business being this huge. It's distracting and keeps me from the actual content I'm here for. Note in the original NYT link the image is also only tangentially related, but it does not overwhelm the text.
It has a search field I will never use. It has social media buttons I will never use (because if I wanted to share this I would just copy/paste the URL). It has a "more" button that I will never use also.
Social buttons are doubly unnecessary in the floating toolbar since you already find social buttons at the end of the article.
Are the users who liked the navigation bar and the sidebar using quartz in a different way from us? For example, are they using quartz primarily to browse rather than following a link to a specific story?
I found these page elements very annoying, and I suspect those who liked it have a different use pattern.
Mobile view can't be disabled, on Android stock browser at least. That means I can't zoom out and read the text at the much smaller size and higher info density I prefer.
I like Quartz in general but usually get tripped up by the scroll bar within a frame. My mouse is normally near the right edge of the monitor, outside the frame.
I can't stand the huge fixed header and I'm sad that I'm seeing more and more of them on websites. If I plan on spending any amount of time on a website that has one, I immediately edit the css to add display: none because otherwise it'll drive me crazy.
It's a waste of space and it makes it much much harder for me to read on my laptop. Vertical space is way too valuable to throw away like that.
Mobile NYT must be one of the very few sites that don't look absolutely horrible when used on a PC browser. I usually hate it when people post the mobile version of a page.
I always wonder how Google manages to have so many vanilla Engineers given how high their standards are. I don't think their standards are high in the "needs to be smart" sense, but in regards to all the other BS. Or they must only ask basic data structure (and puzzle) questions that any studious person can memorize the answers to. I don't think knowing the answer to these interview questions correlates with knowing how to do your job well. It only correlates with knowing the answer to these questions.
Then again, they also have some of the best Engineers. But I don't think that's a testament to a great hiring process, as that could be the result of great marketing. The "we allow you the freedom to actually do stuff and to work with the best" type marketing. From what I read/saw, the great Engineers didn't seem that happy, so maybe those marketing claims aren't really true, but perhaps things have changed since then.
It's basically like SAT prep. With a little prep work a person can greatly raise their chances at getting a great score. Google's hiring practices are well known so a little prep can get vanilla engineers over the hump.
Then again, are there vanilla engineers who would prep in this manner? Just the simple fact that someone preps at all puts them above the 'can't complete fizzbuzz person' ~100% of the time.
There are two kinds of hard though - there's the hard that's hard just because it's boring repetitive work and there's the hard that's hard because it's complex and beautiful and requires an investment in thought and background that not everyone has.
And if you're eliminating the former kind of hard without increasing the latter, then your interviews should get easier. Heck, for people in the latter group it will get easier even if you decrease the former and increase the latter, to a point.
Just because you're not lowering standards doesn't mean your interviews aren't getting easier.
Which is also a thing with exams now I think about it - just because your exam's hard doesn't mean it's worthwhile.
The "structured behavioral interview" he mentions is what I've settled on over the years too. You get a much better idea of what the applicant has really done and is passionate about than seeing if they're good at pop quizzes.
I don't understand people's problem with estimating. It's a useful skill. Perhaps it would be better if the questions actually related to technology, rather than golf balls - but the principle is the same.
For instance - "how many hard drives does Gmail need?" requires a rough guess of how many users Gmail has (if you're interviewing at Google, you should know it's 1e8-1e9). How much space each one takes (probably nowhere near a gigabyte on average - let's say 1e8 bytes). And that the current capacity of hard drives is (1e12 bytes).
Then you can say that they probably need 1e5 hard drives, link it to redundancy, availability, deduplication, backups etc. You can comment that it's feasible to build a datacenter with that many hard drives.
No one cares that the actual number is 12,722 - but you've demonstrated a broad set of knowledge about the current state of technology. Saying "dunno - a billion?" is not going to get you anywhere, and with good reason.
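The same estimate spelled out, with every input a rough guess rather than a real Gmail figure:

    # Order-of-magnitude estimate for Gmail storage; all inputs are assumptions.
    users = 1e9                 # assumed active users
    bytes_per_user = 1e8        # assumed average mailbox size (~100 MB)
    drive_capacity = 1e12       # ~1 TB drives
    replication = 3             # assumed redundancy factor

    drives = users * bytes_per_user * replication / drive_capacity
    print(f"~{drives:.0e} drives")   # ~3e+05; the point is the exponent, not the digits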
I used to think estimation questions were useful. I still think that estimation ability is something a programmer needs for exactly the reasons you state.
However, I used to ask one estimation question (How many hours have you spent coding over the course of your life) on all my interviews. Over time, I lost interest in it because almost everyone got it "right" (took an acceptable route and arrived at a reasonable estimate). The only people that got it wrong were ones I decided to reject for other reasons (this being a semi-technical but mostly ask-about-experience phone interview).
So, although I agree it's a useful skill, from my personal experience I don't think it's worth asking estimation questions.
I had a professor who used to say: a great engineer should be able to answer any question in 30 minutes, at some level of precision. Her example was deriving the equations for the dynamics of the space shuttle. You should be able to do it in 30 minutes, even if it's an extreme approximation.
Even if this idea had some merit, I think once the candidates start preparing for interviews by reading some idiot's guide to brain teasers then it loses its purpose and further hurts the candidate who is actually skilled, quick on his feet and has good instincts.
In which case this is lifted in part out of the McKinsey interview play book. A behavioural interview and a case study, with a resume review first up. Keeping the case studies consistent across a set of interviewees for the best calibration. Making them realistic problems rather than academic exercises or quizzes means multiple paths can be taken to a variety of right answers.
Laszlo and many others inside his area are ex McKinsey. (As am I)
As a developer I've never heard good things about McKinsey. Can you tell us your perspective on their value proposition and the relevance of their hiring methodology?
The best I've heard about them is that they provide political cover for executive agendas that might not play well absent validation from a third party with some academic credentials.
McKinsey don't operate above the radar, but are responsible for helping with an incredible number of major corporate decisions across the world.
Recruitment and development is McKinsey's core advantage. They attract and retain incredible talent from all sets of places.
Teams work with the top clients - the CEO and team as well as high flyers at more junior levels. Things get done extraordinarily quickly. And well. But make sure that the internal team is up to scratch, and that there is that mandate from the top.
There is no prescription for how to deliver a project, beyond the hypotheses approach.
There is an obligation to dissent, and a client-first mandate. When enforced well the client gets what they need to hear, not what they want to hear. High quality consultants don't want to work on projects justifying dumb decisions - and can choose not to.
Use McKinsey and other consultants to help with new issues, not business as usual. They are fantastic for quickly understanding and assessing important questions of strategy, for mergers and acquisitions, organisational design and so on. They help understand the context and set the new agenda, or validate the old one. Generally clients simply don't have enough internal capacity to perform this work alone.
Use them, decide what to do and start doing it - and then get rid of them. Consultants that are camping at a client are in effect wildly expensive employees.
The other nice part about being able to estimate- you can often determine feasibility with a reasonable degree of confidence, for a minimal investment in effort.
The problem is, the interviewers often judge how accurate your estimation is, and not the fact that you know (the highly flawed) Drake Equation.
These estimates are completely useless in real life, because in real life nobody guesses how many drives you need for GMail, or how many gas stations there are in LA.
Just yesterday someone asked me whether a particular disk array would be the right size or not. I didn't have any particular number in mind beforehand, yet I was able to say that the proposal was oversized by a factor of five. If someone told me they had a sweet new application for gas stations in California, I would be ready to figure out how much money each installation would need to make to cover salaries for a programmer and a DBA, even if it's just a hallway conversation. It's OK to be a bit off; it's not good to have no idea.
Your point that interviewers read the wrong signals from candidates' responses is a very good one, but it's not specific to estimation questions. It applies as well to straight-up programming questions and probably a lot more.
These estimates are completely useless in real life
Oh, but they are useful - for getting a grasp on what real life entails and what's possible.
With all the NSA scandal/hysteria going around, lots of people are approaching the issue with the presumption "gee, they can't possibly record everyone's phone calls". With a quick estimate I figured recording everyone, all the time, in CD quality, would take just 5% of the federal budget - making it doable instead of improbable or impossible, and making subsets of the scenario (i.e.: just recording phone calls) likely. For those of us who remember 10MB hard drives and 5.25" floppies, such a data scale is staggering - but it's a current reality, and a little estimating provides a reality check.
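Roughly reproducing that back-of-envelope (the inputs below are my own assumptions: ~320M people, CD-quality audio, commodity disk prices of the time):

    # Could "record everyone, all the time, in CD quality" fit in a federal-scale budget?
    people = 3.2e8
    cd_bytes_per_sec = 44_100 * 2 * 2            # 44.1 kHz, 16-bit, stereo: ~176 KB/s
    bytes_per_year = people * cd_bytes_per_sec * 86_400 * 365
    cost_per_tb = 50.0                           # assumed raw disk cost, USD/TB
    storage_cost = bytes_per_year / 1e12 * cost_per_tb
    federal_budget = 3.5e12                      # assumed annual federal budget, USD
    print(f"~{bytes_per_year / 1e21:.1f} zettabytes/year")
    print(f"~{storage_cost / federal_budget:.1%} of the federal budget, before overhead")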
Likewise grasping the concept of, or even implementing, high-res "eye in the sky" drones. Gigapixel cameras seem like a novel futuristic impractical concept ... but with a little estimating involving HD-quality cell phone cameras, you realize that a 24/7 flying 30fps gigapixel camera drone is in fact quite possible for a relatively modest sum (speaking in jurisdictional law enforcement budget terms). (Takes less than 200 cell phone cameras and a suitable multiplexer & high-bandwidth downlink BTW.)
I had an epiphany about accounting (!) when touring a billion-dollar timeshare (hotel/condo) project. Wanna make a billion dollars? Pick some large expensive project, then estimate your way down to a plan for pulling a few dollars out of a LOT of wallets by dividing, dividing, dividing away into manageable chunks people are willing to shell out a few bucks for.
Such estimates are exercises in how to mentally manage very large scale money, personnel, opportunities, and processes. Wanna make a billion dollars? Charge a buck profit per window to wash a thousand windows for each of a thousand businesses every week for 20 years. Don't laugh, there's some really rich people who made a lot of money charging a buck at a time - because they estimated their way into a profitable vision.
In my experience, the rare quality is not the ability to do these estimates, but the ability to recognize situations where such estimates might be valuable. Most educated people can come up with reasonable estimates if posed the question, but far fewer realize when the question is worth posing.
Imagine a world where car2go does not exist yet. I would expect most of my friends to be able to roughly answer the question: "how many cars do you need to start car2go in City x?", but only a handful, if any, would have the imagination/insight to realize the potential and ask that question in the first place.
I'm not sure where you've worked, but doing resource estimation for projects has been pretty important for most greenfield projects I've worked on.
It's also good for sanity testing, it's a useful skill to be able to spot that something is out by an order of magnitude as it can allow you to catch problems early on.
Perhaps these kinds of questions would be met with less "wtf" looks if they were asked in reverse, as in: "My manager ordered 10 000 000 hard drives for GMail. Do you think we'll need them?" It's much easier to judge an estimate when you see it than to come up with it (especially in an interview where the emphasis is usually on whether you're right or wrong), at least for me.
The underlying data might be different but the process is the same, you need to figure out what are the contributing factors, how they relate and establish an upper and lower bounds for the values you're assuming.
Once you have data you can make corrections to those bounds, but other than that the process is the same.
It's a skill that a lot of first time startup founders lack. They have no-idea how to estimate the market size for their startup, you need to understand the process of how to build an estimation model.
It sounds like you build estimation models by looking at the data you have and combining it together to try and figure out your goal.
The disadvantage with that approach is that you often end up missing factors (because you don't have the data to hand) and end up with a suboptimal model.
In the same way that a lot of startups end up analyzing user behaviour by page analytics rather than user analytics simply because Google gives them page analytics.
It's a good idea to know how to do both top-down and bottom-up estimation models, as best practice is to make estimations using several different models and compare the results.
In every situation I know of, it's not the accuracy of the answer that is being judged but rather the thought process and basic math and assumptions chosen to get there.
In any case it's one of those things where you tend to do well if you've prepared for it and do pretty poorly if you face it for the first time. Probably a better fit for interviewing management consultants than programmers for sure though
> These estimates are completely useless in real life, because in real life nobody guesses how many drives you need for GMail, or how many gas stations there are in LA.
Do you never do sanity checks? If you want to compute the number of drives you need for GMail, you estimate, do the computation, make sure they agree, and then have someone check your work.
You're a startup in the cheaper-fuel business. You work on the assumption that most of your users will be mostly going around a single city's metro area, which heavily impacts your UX design.
You now need to make decisions about UI (map or list? what's a good default map zoom level?) and infrastructure (how much gas station data will a single user need in a single request? how does that impact my storage?) and a whole lot of other places.
You need a nice representative metro area that fits a realistic worst-case scenario, say LA.
It's a crutch. Nobody knows how to interview. Interviewing properly is a lot of work. There are two kinds of people who can do interviews--people who have knowledge of the job and people who have time to interview--and they are so infrequently the same people. These sorts of things were appealing because they were easy, a way to not spend a lot of time on interviewing, or a way to not need a lot of knowledge about the job.
And these things are important, because job candidates are not people, they are OEM replacement parts being ordered from Pep Boys. Call up the recruiter and requisition a J6-252: Programmer, seasoned 5 years, with degree from MIT. Oh, those ones are too expensive. Guess I'll take the knock-off version, but I refuse to pay full price!
Hopefully, because it's Google saying it, everyone will cargo-cult on this bandwagon too.
From the original New York Times article that Quartz has linkspammed here: "On the hiring side, we found that brainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart."
Long before this was reported in the New York Times, this was the finding of research in industrial and organizational psychology. A valid hiring procedure is a procedure that actually finds better workers than some different procedure, not a hiring procedure that some interviewer can make up a rationale for because it seems logical to the interviewer. We have been discussing home-brew trick interview questions here on Hacker News for more than a year now.
Brain-teaser or life-of-the-mind interview questions do nothing but stroke the ego of the interviewer, without doing anything to identify job applicants who will do a good job. The FAQ on company hiring procedures at the Hacker News discussion linked here provides many more details about this.
REPLY TO UPDATE SELF: Yes, the correct term was actually "blogspam," and I appreciate (and upvoted) the grandchild reply that pointed that out. I see that now the Hacker News curators have changed the link on the story submission from pointing to Quartz to pointing to the original New York Times article, which fits the Hacker News guidelines.
(I mention this because many comments in this thread will be very confusing to newcomers if it is not made clear that the thread used to point to Quartz but now points to the New York Times.)
I find that a bit disingenuous on the part of the HN moderators.
I still maintain that the piece was original. It was based on (and expanded on) one of the eight points made within an NYT article.
To me "In Head-Hunting, Big Data May Not Be Such a Big Deal" could not describe the same article as "Google admits those infamous brainteasers were completely useless for hiring".
Even the headlines indicate two separate directions.
Please read again and compare this (relevant passage on NYTimes will be Highlighted):
Since your question is serious, here's one serious answer. The article is really two (or three, depending how you count them) pieces of blogspam spliced together with a linkbait title on top. So while it's true that it doesn't draw exclusively on the NYT piece, that doesn't make it original, because everything else is cribbed too. Isn't it?
blogspam also has another meaning, namely the
post of a blogger who creates no-value-added posts
to submit them to other sites.) It is done by
posting (usually automatically) random comments
or promoting commercial services to blogs, wikis,
guestbooks, or other publicly accessible online
discussion boards. Any web application that
accepts and displays hyperlinks submitted by
visitors may be a target.
http://en.wikipedia.org/wiki/Spam_in_blogs
So, while that doesn't address your argument, let's call a spade a spade. You may not like the post but that doesn't make it "blogspam".
(You could argue that terms evolve and that's fair enough.)
Second, the headline "Google admits those infamous brainteasers were completely useless for hiring" doesn't strike me as a typical "linkbait title".
Regardless - I don't think I'm going to be able to convince you - but I did want to understand better.
Maybe it's ironic then that the NYTimes links to our post from here...
From the original New York Times article that Quartz has linkspammed
No, this is link spam:
Link spam is defined as links between pages that are present for
reasons other than merit.[9] Link spam takes advantage of
link-based ranking algorithms, which gives websites higher
rankings the more other highly ranked websites link to it.
These techniques also aim at influencing other link-based
ranking techniques such as the HITS algorithm.
"On the hiring side, we found that brainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart."
A similar equivalent in coding interviews would be "what does obscure function/feature do?"
There are questions that are actually fun and I can sort of see them starting a conversation with the right kind of interviewer that tells both parties a lot about who they're dealing with. From the article:
> How much should you charge to wash all the windows in Seattle?
Basic economics estimating - probably not that useful and a bit dull, but hey why not. At least the problem has several angles to it that might be fun to explore.
> Design an evacuation plan for San Francisco
That's a nice one. Kind of open-ended, a lot of things to consider, a lot of ideas to be had.
> How many times a day does a clock’s hands overlap?
Why? What happens to the interview after you've counted them (possibly on a whiteboard)? It's a dead end and the question is dull.
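(For reference - a standard fact about clock hands, not something from the article - the answer falls out of the relative speed of the two hands, which you can sanity-check in a couple of lines:)

    # The minute hand moves 360 degrees/hour, the hour hand 30 degrees/hour,
    # so they close at 330 degrees/hour and overlap every 360/330 hours.
    overlap_interval_hours = 360 / 330        # ~1.09 hours between overlaps
    print(24 / overlap_interval_hours)        # 22.0 overlaps per day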
> A man pushed his car to a hotel and lost his fortune. What happened?
Now this has the potential to be great or absolutely horrible, depending on the intent behind the question and the nature of the interviewer. If it's taken as a "fill in the blanks" kind of challenge it would be a fun way to explore the candidate's imagination. But I'm guessing it's not. It's probably one of those "clever" questions that have only one "right" answer that makes no real sense except creating a few moments of uncomfortable silence.
> You are shrunk to the height of a nickel and your mass is proportionally reduced so
> as to maintain your original density. You are then thrown into an empty glass blender.
> The blades will start moving in 60 seconds. What do you do?
Again, this could be a fun physics and chemistry question and I see a couple of possible solutions that might or might not work out - might be fun exploring them. But again, it sounds more like a trick question with one standardized answer. Bad.
The problem with trick questions and standardized answers is that the nature of the question makes the candidate uneasy, and even if they eventually figure it out, nobody will have learned anything during the process. It's more like a hazing than a hiring interview.
Yeah, but why? Nobody should be surprised that familiarity with Monopoly rules is a bad predictor for job performance (and I'm not only saying that because I suck at these).
Simple: the point is to boost the interviewer's ego, nothing more. This example is the kind of mean-spirited bullshit that leads me to loathe brain-teasers.
A horrible question for many reasons. It requires not just knowledge of Monopoly, but instant recollection thereof. It says more about the questioner's limited world view (presumes everyone knows about X, where X is irrelevant to the job) than the interviewee's intelligence. It is a learnable answer: skim How Would You Move Mt. Fuji? and similar "clever question" books and you can recall the answer rather than deducing it (the latter being far more important to the job). The worst part, I think, is the automatic dismissal of any creative & applicable "wrong" answer; before seeing the "Monopoly" reference, I was imagining some despairing ex-executive cashing out his life savings, putting it in the car with a can of gasoline, pushing it down a hill into the offending hotel and watching it all go up in smoke ... but because I didn't say "Monopoly", no credit for creativity etc.
On that last point, I recall interviewing at Microsoft: Asked questions about automatic control of venetian blinds, for one question I knew I was missing some obvious simple checkbox-type answer. I told her "I know I'm missing the obvious here, so I'm just gonna pick some alternative solution and talk thru it so you can see how I think" and proceeded to elaborate on a complex yet viable & marketing-impressive implementation. Wasn't gonna let some "correct" answer stand in my way...
Oh, that's what I was supposed to get out of that question? I was probably 5 the last time I played the actual board game. I've played computer versions since then, so pushing the car wasn't even a verb that I'd use to describe the game play. The only interesting aspects of the game for me are the studies about game balance for particular properties. I never aspired to be a great monopoly player, and that shows when I'd play my friends. As an interview question, this one seems pretty lousy.
Not necessarily (and actually that would be a terrible answer). The highest rent in Monopoly with a hotel is $2000, and I would hardly call that a "fortune".
The point of the question is to establish how well you gather additional information when the initial description is unsatisfactory. Anyone who's been an engineer knows you have to do this every single day.
Monopoly was first published in 1935 and the prices haven't increased since. That makes the cost of a stay at the most expensive hotel $34,000 in today's money, which is far more than I'd ever pay to stay at a real hotel...
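(Roughly checking that figure with approximate annual-average US CPI values - my numbers, not the parent's:)

    # $2000 in 1935 dollars, scaled by the ratio of approximate CPI values
    # (about 13.7 in 1935, about 233 in 2013).
    print(2000 * 233.0 / 13.7)    # roughly 34,000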
Ignoring inflation, you can compare the price of staying at that hotel with buying the land it's built on for a relative estimation of the cost.
It seems like a hazing but perhaps it's a bias on the part of the person interviewing. Meaning, the interviewer wants to work with people very much like themselves, instead of hiring the best candidate.
I think Vaughn and Wilson had the best answer to the blender question in The Internship trailer I saw... just lie down. Those blades will just spin over your head and the motor will eventually overheat and fail. I'm sure there is something about densities they are looking for, but it doesn't strike me as a great question.
Manhole covers being round is more interesting, but not really a good indicator about how well you can code.
I've never seen any citation that Google ever used these kinds of questions. Especially the idiotic one about pushing a car to a hotel. I think it was just an urban legend and a good piece of linkbait.
There must be thousands of people on HN who interviewed at Google over the years. Did anyone ever get a question like this?
I've interviewed with Google a couple of times over the years and the interview questions are usually of the form "you have X data in such and such format, and we want to answer question Y as efficiently as possible." Then you have to puzzle out the algorithm, answer questions about its asymptotic complexity, and then whiteboard the code.
Sometimes the questions feel a little bit gimmicky in that they aren't really representative of what software developers spend most of their time doing. I understand that one guy at Google had to come up with a clever algorithm to figure out a snippet to use in the search results, but that can't be what the thousands of developers at Google are doing all day.
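To give the flavor, a made-up example of my own in that mold (not an actual Google question): given a list of integers and a target, can any two of them sum to the target? You talk through the naive O(n^2) pair scan, then the O(n) hash-set version, then whiteboard something like:

    # Hash-set approach: one pass, O(n) time and O(n) extra space,
    # versus checking every pair in O(n^2).
    def has_pair_with_sum(numbers, target):
        seen = set()
        for n in numbers:
            if target - n in seen:
                return True
            seen.add(n)
        return False

    print(has_pair_with_sum([3, 9, 12, 20], 21))   # True (9 + 12)
    print(has_pair_with_sum([3, 9, 12, 20], 40))   # False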
> "On the hiring side, we found that brainteasers are a complete waste of time"
That implies Google has some data to back it up, whether they themselves previously asked those types of questions, or they derived it from some other means. Either way, they aren't doing themselves any favors to dispel that urban legend. Most reading that will just assume they used to ask brainteasers, but no longer.
Yes, it's a rather strange objection to make... How could Google possibly use data on its employees to disprove that brainteasers work - if they weren't using brainteasers at some point?
I think brainteasers were used in PM interviews fairly early on (I want to say 2004-2007, but I'm not sure on the exact dates). I've met people that were familiar with the brainteaser-type interview question, but such questions were banned before I started in 2009.
I think these are used more for PM interviews. I interviewed there once and was asked to design an evacuation plan for Manhattan (not San Francisco). I've also heard from employees there that such questions were preferred for PMs rather than engineers.
When I interviewed at Google 5 years ago they weren't using those brainteasers.
There are many posts online about the actual, CS-y questions that you can expect in a Google interview, so I had just assumed that the mentions of brainteasers were merely urban legend.
I've interviewed at Google. Years, years ago. I didn't get the job. Similarly, no brainteasers, but something worse: they made me write syntactically correct code on a whiteboard. I have never written code without using a keyboard; turns out, I just didn't have the neural pathways for anything else. My brain kinda seized up. I specifically recall failing to recognise the Fibonacci sequence (especially horrifying given that I read mathematics at Edinburgh). Things went downhill from there.
Ever since, whenever I've interviewed someone, I ask them to demonstrate their strengths to me first.
> Similarly, no brainteasers, but something worse: they made me write syntactically correct code on a whiteboard
Interestingly, I believe Google are slowly moving over all their coding interviews from whiteboards to Chromebooks - this is what I was told by my Google recruiter when I last interviewed with them, anyway.
The whiteboard can be a bit polarising...I love whiteboarding code, but I suspect many people detest it with a passion (I used to teach CS, so it's something I picked up on the job). I do think it is rather unfair to have candidates whiteboard and demand syntactically correct code, especially when under pressure. There's room for flexibility.
"I ask them to demonstrate their strengths to me first"
Nice. That agrees with some of the other comments here. For example, about asking about past work or projects that they are proud of or that demonstrate their skills.
Another hard issue is what to do, as an interviewer, if things start to go downhill to the point where the candidate becomes flustered and you can tell they're not at their best.
It's standard (or slightly pretentious) British English; I guess the US equivalent would be "majored in math at Edinburgh" (which would be equally incomprehensible to a Brit)
It's not really pretentious – it kinda depends on what university you went to. I typically say 'studied', but my friends who went to other unis say 'read'. I would take 'read' as a pretentious term.
I think he meant "wouldn't". It's not pretentious per se, it just would be interpreted that way to an American because we wouldn't use that phrasing, therefore we can only imagine it being spoken in an upper-class English accent, pinky fully extended.
Quite so. Having been raised by the BBC World Service I actually do have a somewhat received pronunciation, albeit gently deflected by many years abroad.
The disposition of my pinky, however, shall remain a mystery.
From what I hear, the brainteasers were retired quite a while ago for engineering and technology roles, but persisted in other fields (like account management, sales, etc) for some time longer.
I was contacted by a Google recruiter a few months ago. I had no intention of changing my day job at the time, but for shits and grins I went through a couple of phone interviews. The position they were hiring for wasn't an area I have any experience in (the recruiter had made a mismatch), but I thought the questions were reasonable for somebody who works in that field and were kind of fun. They were quizzy, but could be practical. It was a management position so there weren't any coding questions, but things like basic cost estimating, that sort of thing.
I had fun and wouldn't mind it again; it didn't feel like a bunch of stupid random brain teasers like I've experienced before (how many t-shirts would it take to make a seaworthy sail? why are manholes round?), etc.
"It was a management position so there weren't any coding questions"
Interesting; when I interviewed for a management position (test manager) it was nothing but coding questions, including the infamous "reverse a string" question. ("Would you like that optimized for space or speed? In-place, or do I get a buffer? Can you tell I've heard this a zillion times before?") I can understand wanting a test manager to be more than an empty suit, but yoiks.
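For anyone who hasn't heard it a zillion times, the two variants look roughly like this (a quick sketch, not whatever canonical answer the interviewer had in mind):

    # Buffer version: trivial in Python. In-place version: two pointers
    # swapping characters in a mutable list (Python strings are immutable).
    def reverse_with_buffer(s):
        return s[::-1]

    def reverse_in_place(chars):
        i, j = 0, len(chars) - 1
        while i < j:
            chars[i], chars[j] = chars[j], chars[i]
            i, j = i + 1, j - 1
        return chars

    print(reverse_with_buffer("manager"))               # "reganam"
    print("".join(reverse_in_place(list("manager"))))   # "reganam"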
This topic/discussion reminds me of a movie I saw recently. It was called "That Guy... Who Was in That Thing". It is a documentary about working actors. Not big-time superstars like Tom Cruise, but the small-time 'character' actors.
Anyways, there was one part in the movie where they start talking about auditions. All four or five of the actors they were interviewing for the movie unanimously spoke badly about the typical audition process. Some quotes taken from memory:
"I love acting, but I hate auditioning"
"You've seen my demo reel, you've seen me when I was on Star Trek, you know I can act, then why not just give me the part? Why make me go through this tedious audition process"
"90% of acting is reacting. You can't fully demonstrate your full acting abilities when you're standing in front of a panel of producers 'acting' out a scene that consists of 5 lines of dialog"
What the actors were saying about how they hate the audition process reminded me a lot of my frustrations surrounding hiring during tech interviews. Making an engineer do puzzles like FizzBuzz is a lot like making an actor act out a 20 second scene without any time to prepare or a proper "scene partner" to act alongside of.
I wish I could link to a YouTube of the movie, but I can't find one. It's on Netflix though.
> Making an engineer do puzzles like FizzBuzz is a lot like making an actor act out a 20 second scene without any time to prepare or a proper "scene partner" to act alongside of.
FizzBuzz is self-contained tho, so maybe a better comparison would be asking for a dramatic poetry reading?
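For readers who haven't met it, FizzBuzz really is that small and self-contained, which is sort of the point of the comparison:

    # FizzBuzz: print 1..n, but multiples of 3 become "Fizz", multiples of 5
    # become "Buzz", and multiples of both become "FizzBuzz".
    for i in range(1, 16):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)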
They aren't using the brain teasers right. The idea is not to create a barrier to entry, nor is it to stress the candidate. The objective of the brain teaser is to have the candidate think slowly enough that the interviewer can observe how he approaches a problem.
It's hard, when using problems that are common, to really understand how the candidate gets to the answer. Often, he's building on pre-solved subproblems he encountered in his professional life, so the resolution process didn't even occur at the interview.
I personally don't use brain teasers, because they stress out valid candidates who do not work well under pressure. However, I think teasers, when properly used, are valid tools in an interviewer's toolbox.
Totally agree with this. The essential skill of a software engineer, regardless of position, is to be able to approach any problem, no matter how unfamiliar or intractable, and formulate a means of attacking it and verifying the solution. The right type of brainteaser can be a great way to demonstrate this, provided: A) the interviewee hasn't heard it before; B) it's meaty and not relying on some flash of insight (the manhole cover question is absolute garbage); and C) you are able to capture the thought process in sufficient detail, either through verbally talking it out or writing it down or whatever.
This has the potential to reveal a certain high level problem solving ability, the lack of which will not necessarily be revealed by more concrete "write pseudocode for X" types of interview questions. What I mean by that is that there is a continuum of skills ranging from rote copying of solutions all the way through synthesizing solutions to business problems and designing architectures to fulfill a malleable list of requirements. A mediocre engineer can inch their way up the continuum through raw pattern matching ability (which humans excel at) without ever attaining mastery of the high level abstractions that are driving the implementation detail. Such engineers can appear tremendously productive at the ground level, but it is dangerous for a technical organization to have many of them, because they tend not to see where technical debt is piling up and can often paint themselves into corners because they're not considering the bigger picture. Knowing someone has strong reasoning skills from very high level human tasks on down is a good hedge against this.
I took an I/O psychology course during school and a chunk of it dealt with interviewing and finding the best candidates (from an employer standpoint and an equity standpoint), as lots of people who took the course tend to pursue education with the idea of obtaining an HR-related certificate.
The comment about brainteasers vs. structured rubrics is sort of surprising to me, given Google's reputation for quantitative data. Speaking from a very high level, structure was really what was emphasized for interviews. It's interesting how culture can get in the way of proven 'fact,' and I love that Google is using its own (much larger) data sets to make these improvements and to validate or invalidate other research.
1. Invent a bunch of silly riddles that a non-technical reader might accept as tech interview questions.
2. Pull a major tech company out of a hat (today it's Google), and claim with no evidence that their interviews are based around silly riddles. The article will be cited for years as proof that people working at $COMPANY are weird and obtuse.
3. Wait a couple years. Ignore all evidence that $COMPANY does not use silly riddles in interviews.
4. Once traffic on the original article dies down, write another article claiming $COMPANY has "admitted" silly riddles aren't useful for interviews.
I see you didn't read the article. The basis for the article is a NYT interview with an SVP at Google, claiming that the brainteasers are not useful (among other things). Surely that is a good source? I haven't got a clue if Google actually used these kinds of questions, but the interview sure seems to suggest it.
The same list of riddle questions has been circulating for at least twenty years. Before Google existed, it was credited to Microsoft. I know they've been explicitly banned at Google for many years, and have seen no evidence that they were ever in common use at either company.
A problem I see with many of these sorts of questions is that they often require the candidate to have some supposedly common knowledge which is not required for the job itself. Cryptic word games surely are much more difficult for a non-native speaker of the language in use. Questions related to facts about cities probably require local geographic knowledge. Surely the evacuation plan for SF must consider the capacity of various bridges? Someone who has lived in northern CA for most of his life would have a much easier time thinking through the logistics of moving people off a peninsula. And, of course, there's the Monopoly question (which I had to Google).
I like estimation questions in general for many of the reasons other commenters have cited. However, I wish those using them would consider the knowledge implicitly required of a candidate.
> Years ago, we did a study to determine whether anyone at Google is particularly good at hiring. We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship.
Can we see the study?
Also note that performance on the job is a noisy measurement, because people who get to work on impactful projects (through luck or people skills) get rated higher than others. I wouldn't be surprised if interview scores were a better measurement of "true" skills.
> I wouldn't be surprised if interview scores were a better measurement of "true" skills.
Possibly, but in a sense "true" skills don't really matter. What matters to Google, ultimately, is Google's opinion of the worker. It's almost certainly skewed / flawed / distorted in some way from the individual's true skills, and that's unfortunate but mostly a fact of life.
Sounds great, although like with any retraction I doubt this will be enough to stop the spread of interview puzzles. Even I'm guilty of asking my share before I realized that the only thing that matters about the candidate is whether they can sit down and start writing code (and the quality of said code).
Google still ask puzzles: they just don't ask brainteasers.
For example, write a program to find every possible word in a given Boggle board is a puzzle, but one you're going to solve by coding...rather than "how many piano tuners are there in New York", which is a rather different matter. I've interviewed on-site with Google several times, and always found the CS puzzles to be challenging but fair.
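A rough sketch of the kind of code that question is fishing for (my own simplified take - a plain word set with length-based pruning; a serious answer would use a prefix trie):

    # Boggle sketch: depth-first search from every cell, extending the current
    # path to the 8 neighbours, collecting paths that spell a word in the set.
    def find_words(board, words):
        rows, cols = len(board), len(board[0])
        max_len = max(len(w) for w in words)
        found = set()

        def dfs(r, c, path, visited):
            path += board[r][c]
            if len(path) > max_len:        # crude pruning; a trie would do better
                return
            if path in words:
                found.add(path)
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if (dr, dc) != (0, 0) and 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                        dfs(nr, nc, path, visited | {(nr, nc)})

        for r in range(rows):
            for c in range(cols):
                dfs(r, c, "", {(r, c)})
        return found

    board = ["cat", "odg", "xyz"]
    print(find_words(board, {"cat", "dog", "cod", "toad"}))   # {'cat', 'cod'}

The shape of the solution (search plus pruning) is what the interviewer is after, not the exact code.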
> Google still ask puzzles: they just don't ask brainteasers.
Isn't the only difference between "brainteasers", "puzzles", and real engineering challenges just the usefulness of the result?
I get what you are saying though. Asking someone challenges rooted in technology seems so much more useful and natural than something involving ping pong balls and Lake Michigan.
Agreed. I think anyone can be an "A" player under the right conditions. Under different conditions, the same person can be a "D" player. I know I've had jobs where I was the wonderboy who was regarded as an A player all around. I've had other jobs where I was the black sheep "F" player who gets fired after one week of employment.
First commenter's co-founder here:
Sure, you definitely need to screen for culture fit. There are great people who would be bad fits here. That said, we want people who have succeeded in most positions they've had in the past. If there are two people who have each had 4 jobs in their career, you're way more likely to pick the better one if you favor the one who outperformed in 3 of those 4 jobs rather than 1 of the 4. When you combine that with looking closely at their experience as it fits with the role, and their fit with the culture, then you have a complete screening process.
I had a couple questions like this at a couple of interviews more than a few years back now. In both cases, I sat for a minute, and asked a few questions back, like "do you mean the city limits of Raleigh, or the metro area?", "how do you define gas station - do we include public-only, or private fueling places?", etc. Part of this was buying some time, because the question caught me off guard, but I think my questions back caught him off guard a bit too.
That interviewer told me I was the only person who asked clarifying questions before blurting out an answer or walkthrough. Another one was "take this marker and design a house on the whiteboard for me". So I took the marker and asked questions like "how many people will live here, do you want one or two stories, do you need a garage/shed/basement, etc?" And again, I was told I was the only person who'd asked questions before starting to draw.
I don't think the intention behind those brain teasers was necessarily to determine how you react to those sorts of problems, but it may have been a useful determining factor for some interviewers nonetheless.
Every time I click any link on HN that points to qz.com I get QZ without any reference to the article in question. Currently it points to "Why Tesla wants to get into the battery-swapping business that’s failing for everyone else"... in Chrome. Firefox seems to work. Terrible website.
Knowing how to quickly estimate something is useful.
I imagine that Larry Page does a few quick estimates every day. How many Loon balloons would it take to bring Internet to 90% of Africa?
But not everybody at Google has a job like Larry Page. It's gotten to be a big company full of accountants, HR people, and other jobs that don't require much thinking in unfamiliar territory.
In other words, guesstimation is a useful skill, but not for every Google employee, so it's not going to show up as useful on average.
Some of the more flippant-sounding ones could be useless, but I thought the idea of the simpler ones (how many golf balls etc.) is to get a feeling for how people's minds work and whether they can make sensible best guesses in the absence of concrete facts and make judgements based on those guesses. Weed out the ones who have no appreciation for how the volume of a golf ball relates to the size of a bus.
Good logical thinking shown here could indicate an ability to rapidly prototype systems without getting hung up on too fine detail.
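Something like this, with made-up round numbers (mine, not the article's), is presumably the kind of "sensible best guess" being looked for:

    # Back-of-envelope golf-balls-in-a-bus estimate: rough interior volume of
    # a school bus, rough golf ball volume, and a fudge factor for seats and
    # imperfect packing.
    bus_volume_m3 = 10 * 2.5 * 2.5                      # ~62.5 cubic metres
    ball_volume_m3 = (4 / 3) * 3.14 * (0.0215 ** 3)     # ~4.2e-5 cubic metres
    packing_factor = 0.5
    print(round(bus_volume_m3 * packing_factor / ball_volume_m3))  # ~750,000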
On the other hand, demonstrating evidence for being able to rapidly prototype systems without getting hung up on too fine detail also indicates an ability to rapidly prototype systems without getting hung up on too fine detail. And it does it much more directly (i.e. there is an obvious link rather than a tenuous at best one) and with much less stress, awkwardness and mind games.
Insofar as this interview speaks to the relevance of brainteasers to actual software development / engineering, it fails to provide a meaningful topic of conversation. It surprises me that nobody's pointed out that at best the conclusions are relevant to engineering "leadership" performance, rather than -- as I expected for "Google" and "head-hunting" -- coding performance. Sure, people skills and team skill are important, but if you're going to get good at selecting for leadership and ignore selecting for productivity, to the extent they're not related you're not going to be very good at creating and maintaining software. Although software isn't 100% of Google's success and coding productivity isn't 100% of software success, it's pretty important.
Why are they talking about "Big Data" rather than just "data"? I doubt the data sets they used were so large that they could not be easily analysed on a cheap laptop using normal statistical packages.
When trying to work out what best predicts job performance, the quality of your data is by far the most important thing to focus on. I would very much like to know more about the details of their internal studies. There are a lot of difficult problems in trying to use statistics to improve interview processes. One of the big problems is that you will always have a truncated sample of only those people who were selected: you would then expect the importance of certain variables, such as GPA or test scores, to be lowered because those who scored lower on such metrics will have had compensating characteristics...
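The truncation effect is easy to see in a toy simulation (entirely illustrative, not Google's data): generate a metric that genuinely predicts performance, then correlate the two only among people above a hiring cutoff on that metric.

    # Range restriction: a metric that correlates with performance in the full
    # population shows a much weaker correlation among those above a cutoff.
    import random

    random.seed(0)
    scores = [random.gauss(0, 1) for _ in range(100000)]
    perf = [0.5 * s + random.gauss(0, 1) for s in scores]

    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
        sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
        sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
        return cov / (sx * sy)

    hired = [(s, p) for s, p in zip(scores, perf) if s > 1.0]  # only top scorers
    print(corr(scores, perf))      # ~0.45 in the full population
    print(corr(*zip(*hired)))      # noticeably lower (roughly half) among hires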
Huh? Google's reputation in shreds? How so? If you're referring to NSA issue, that's the American government's reputation you're talking about. Google (and almost every other big tech company) was simply compelled to follow the law.
If you're a hiring manager who uses these things, you should know that I (try) to train/prepare my students to answer them. I do think there is some utility in watching how people approach an unconventional problem, but don't be too impressed with people that can solve them easily, compared to those who don't do well the first time they see them. I see a huge improvement in the quality of answers of most students, once students know it is a gag and once they've been shown how to estimate things. Most students are constrained by having been in a learning environment that provides them with well-defined boundaries within which to form their answers. IMO failing to perform well with these problems is not always a failing of the student as much as it is their educators.
"It’s also giving much less weight to college grade point averages and SAT scores"
In 2004 I interviewed for a Creative Maximizer position. I received a glowing review from my brother, who was a Googler. I studied all the ins and outs of AdWords back then, and the British interviewer confirmed: "You did very good on the assessment" (which was working through real ads that needed to be maximized). My opinionated experience has been that in these kinds of situations, Brits embellish less than Americans.
However, she told me that my college GPA was "a major question mark" because it was 2.99 and Google only hires people with 3.0 and above (I didn't know what I wanted to do in college). Looking back I'm glad I was never hired, but that burned me bad for a while.
The only true way to tell if an interviewee will be a good employee is actual work/product output with the right amount of responsibility. Product-focused people, not just coders looking to code lots of tricks to compete.
Contract-to-hire is one way; another is looking at what they have done previously, which is a good predictor. It is a risk for sure, but that really is the only true way in the end.
Plenty can be gained from just letting the interviewee talk and maybe looking at some of the code they have written previously while they talk about it. Whiteboard coding should not apply, as it is completely out of element for many coders.
The type of person they are can't really be detected correctly until they are in the team and delivering because everyone is selling themselves on an interview.
I think companies would be better off hiring people not based on their IQ or skill level but by hiring people who love what they do, have done side projects and achieve flow in their work. People who achieve flow in their work will work harder and are more creative than others because they enjoy the process of solving problems. So the interview process should be to identify how often the given candidate achieves flow (as defined by Mihaly Csikszentmihalyi).
>> After two or three years, your ability to perform at Google is completely unrelated to how you performed when you were in school, because the skills you required in college are very different. You’re also fundamentally a different person. You learn and grow, you think about things differently.
While the analysis is correcting some beliefs about interviewing techniques, do I sense them drawing a conclusion again not supported by data? How did they conclude the lack of correlation is "because" the skills required are different and people think differently a few years out from college?
This is great news both for Google and the candidates. Of course, as long as the behavioral indicators for the competencies are defined right - according to actual goals and tasks on the job.
So how useless exactly were they? As long as you are looking for a "right" answer not a correct one, they are a very good metric for testing problem solving skills.
I got off to a real bad footing in a job interview using a question like that once, with a guy looking for a 'right' answer; it didn't go down well when I challenged his assumption.
He asked how many plumbers worked in the city, to which I replied that you could check the industry registry for qualified plumbers, and you could probably filter them by city. There was silence, then I had the question clarified to how many 'plumbing businesses' there were, not individual plumbers.
To which I replied you could go to the company registrar's office, but it was impossible to calculate exactly, as so many plumbers work full time while also holding businesses of their own as free agents. A very unimpressed look came across the guy's face and I was told there is a very simple way to find out and asked to try again.
I sat in silence for 30 seconds or so trying to think of something that would be more thorough than the registry offices, and I think I offered a few alternatives like tax department records and the government statistics office - all things I could think of that would keep fine-grained data. But I could see the guy growing impatient with me, so I stared at him and asked him what a better metric was than what I had offered.
After a few moments I was told the correct answer was to check the phone book, any practicing plumber business would be listed.
Startled by what seemed like a completely faulty answer, I pointed out what seemed obvious to me... not every business needs to have a public listing... some deal directly as subcontractors... some could be umbrella companies for subbies... again, some are free agents... some might use unlisted cellphones... not everyone is a legal company, and not all plumbers were qualified. It was a terrible way to get a dataset you could rely on.
Anger swept across the guy's face and I was told sternly that I was wrong, the data was perfectly suitable, on to the next question... It was all downhill from there, as he didn't want to hear my answers and didn't challenge me back, just ripped through the rest.
To this day I laugh whenever I think back to that interview. It was probably the most uncomfortable interview I've ever been in.
So, is Google admitting the questions are hopeless, or are they saying that their interviewer's reactions to the answers to those questions are hopeless?
Because fixing interviews is harder than just working out what questions to ask.
I dunno about Google, but my experience was more copycat behavior by someone who didn't get the purpose of it, I think... maybe I was to blame as well, as I pushed back expecting to be challenged more... not just told I was wrong.
It ended up worse than useless for both of us involved.
Dodged a bullet; anyone can see your answers were at least as good as the phone book one. Nothing worse than a manager who relies on authority to back up their flawed decisions just to spare their own ego. Sounds like a toxic org culture.
If a candidate solves a puzzle, it tells you a bit about the candidate, but if the candidate fails to solve the puzzle, it tells you more or less nothing, and a large part of the interview is wasted.
This always seemed so overhyped to me. I did hundreds of interviews at Google and I never once asked anyone a question anything like the ones described. It was generally stuff like "oh hey, you're going to do deep work on our unix systems? What is the difference between kill and kill -15?" We also didn't care about GPA. This all seems like super old information if it was ever true at all.
I'm still a student and like to interview at a lot of places, shop around, and keep practicing my interviewing skills.
I STILL go to interviews where I am ONLY asked these kinds of questions...It's embarrassing. If you ask me these questions for a 2 hour long interview then I'm not going to work for you...it's that simple
Why so serious? Isn't hiring about maxing out the potential of a company?
Anyone can help max it out. I know for myself that having a 'clue' reduces self-esteem, which can be balanced by having the right co-workers. End result: maxing out the potential.