
It's not particularly difficult to devise a personality exam that way: in the best case, the candidate cannot game the test; in the worst case, they game it but the proctor is fully aware of the gaming, so you simply reject the test (or the candidate).

Instead of trying to get rid of gameable questions, psychometricians design the test around statistical consistency. Something as obscure as, say, which color a candidate picks in a multiple-choice question out of orange, yellow, blue, and red can be correlated with answers to other questions (not that exact item, but to a candidate the real items look equally obscure and innocuous). If a candidate answers one question with what they perceive to be the societal ideal, this will be exposed by other questions; it's not possible to game them all consistently unless you've read and thoroughly understood a manual of psychology or psychometrics.

Tests designed this way tolerate a certain amount of "gaming" by candidates before a statistical threshold is crossed, at which point the test essentially tells the proctor, "The answers are so inconsistent that the candidate wasn't being honest," and the whole test has to be thrown out (which, in the context of hiring, means rejecting the candidate).

This works because while people will answer "Would you consider yourself hard-working?" with "Always" or some other unrealistic superlative, they won't realize that other, seemingly unrelated questions are highly correlated with that quality. If you answer yes to one and no to another, the proctor can conclude with high confidence that you lied on the transparent one.
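To make that concrete, here is a minimal sketch (in Python, with invented item numbers, pairings, and cutoff) of how a cross-item consistency check could be scored. Real instruments, such as the MMPI's consistency scales, build these pairs from validation data rather than guesses:

    # Pairs of items that probe the same trait in different wording.
    # keyed=+1 means honest answers should agree; keyed=-1 means the second
    # item is reverse-worded, so honest answers should disagree.
    ITEM_PAIRS = [
        (12, 47, +1),  # "I finish what I start" / "I see tasks through"
        (12, 63, -1),  # "I finish what I start" / "I often leave work unfinished"
        (23, 88, -1),
    ]

    INCONSISTENCY_THRESHOLD = 2  # hypothetical cutoff

    def inconsistency_score(answers: dict[int, bool]) -> int:
        """Count item pairs whose answers contradict each other."""
        errors = 0
        for a, b, keyed in ITEM_PAIRS:
            if a not in answers or b not in answers:
                continue
            agree = answers[a] == answers[b]
            if (keyed == +1 and not agree) or (keyed == -1 and agree):
                errors += 1
        return errors

    def looks_gamed(answers: dict[int, bool]) -> bool:
        return inconsistency_score(answers) >= INCONSISTENCY_THRESHOLD

A candidate who claims to always finish what they start but also endorses "I often leave work unfinished" racks up errors, and past the cutoff the whole protocol is treated as invalid.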

Tests like this[1] have been around long enough to have been refined continually, though there are some valid criticisms that they are easier for non-minorities (for a variety of reasons). They are used in clinical and professional contexts, and while a quality personality test should in theory produce consistent results across repeated takings (in other words, results are relatively stable), in practice you should probably avoid giving the same individual the exact same test more than once.

Hope that helps.

[1]: http://en.wikipedia.org/wiki/Minnesota_Multiphasic_Personali...




I'm a psychologist, and I game personality tests for fun.

The MMPI is harder to game than the Myers-Briggs, but not that difficult. Additionally, it's really only optimal for clinical samples, where it is very good.

However, everyone in the field knows that these tests can be gamed; the only open question is how many people game them consistently.

You can probably estimate the proportion of social desirability exhibited in job interviews by comparing to non-job situations (such as a sample from the general population matched on all relevant covariates, whatever they are).
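Concretely (with made-up numbers, since I don't have such a dataset handy), that comparison might look like this:

    from math import sqrt
    from statistics import mean, variance

    # Hypothetical scores on a desirability-loaded scale (1-5): one group
    # answered under job stakes, the other is a matched no-stakes sample.
    applicant_scores = [4.6, 4.8, 4.5, 4.9, 4.7]
    matched_scores = [3.9, 4.1, 3.8, 4.2, 4.0]

    # Difference in means expressed as Cohen's d with a pooled SD; a large
    # positive d suggests score inflation under job stakes.
    n1, n2 = len(applicant_scores), len(matched_scores)
    pooled_var = ((n1 - 1) * variance(applicant_scores)
                  + (n2 - 1) * variance(matched_scores)) / (n1 + n2 - 2)
    d = (mean(applicant_scores) - mean(matched_scores)) / sqrt(pooled_var)
    print(f"estimated inflation under job stakes: d = {d:.2f}")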

Nonetheless, believing in personality tests as an accurate indicator of personality is as misguided as believing that Facebook represents the social graph of all its daily active users accurately, i.e., somewhat misguided.

And I am aware of lie scales; they are trivial if you actually read the questions. Protip: if a question says "always" or "never", it's probably designed to trip you up.
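To make that concrete: a lie scale is often little more than a count of endorsements of improbably virtuous absolutes. A toy version (items and cutoff invented) might look like this:

    LIE_ITEMS = {
        101: "I never lie.",
        102: "I am always polite, even to people who are rude to me.",
        103: "I have never been late for anything.",
    }
    LIE_CUTOFF = 2  # hypothetical

    def lie_scale(answers: dict[int, bool]) -> int:
        # Count how many of the "too good to be true" items were endorsed.
        return sum(1 for item in LIE_ITEMS if answers.get(item, False))

    def flagged_as_faking_good(answers: dict[int, bool]) -> bool:
        return lie_scale(answers) >= LIE_CUTOFF

Once you notice the "always"/"never" pattern, you simply decline to endorse those items and the scale stays quiet.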

I do agree that personality tests are more accurate than this thread makes them out to be, but they are certainly not as useful as your comment implies.


Wow, I have to defer to you then :)

Is there anything you've got offhand that I can read further about this? I didn't know you could actually game them.


All self-report data is inherently flawed, which is why you can't rely on one kind of data while trying to learn something about personality.


Find the scoring manuals, do loads of personality tests, rinse, repeat. It's not particularly difficult.

Despite this, I ran many surveys back when I worked in academia. You can detect some of this stuff with Guttman errors, but these are not often used, and as long as you are consistent, it's very difficult to spot.
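For the curious, counting Guttman errors for one respondent is straightforward, assuming the items form a cumulative scale ordered from easiest to hardest to endorse; this is just a sketch:

    def guttman_errors(responses: list[bool]) -> int:
        """responses[i] is the answer to the i-th item, easiest first."""
        errors = 0
        for i in range(len(responses)):
            for j in range(i + 1, len(responses)):
                # Error: a harder item endorsed while an easier one is denied.
                if responses[j] and not responses[i]:
                    errors += 1
        return errors

    # A perfectly cumulative pattern has zero errors:
    assert guttman_errors([True, True, True, False, False]) == 0
    # Endorsing a hard item while denying easier ones produces errors:
    assert guttman_errors([True, False, False, True, False]) == 2

A faker who answers consistently produces few such errors, which is exactly why this check misses them.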


> This works because while people will answer "Would you consider yourself hard-working?" with "Always" or some other unrealistic superlative, they won't realize that other, seemingly unrelated questions are highly correlated with that quality

Can you provide a deeper example? I'm honestly curious - what sort of question is innocuous enough to be answered honestly, yet useful enough to provide information? (e.g., do slackers like the color yellow or something?)


This is actually part of the so-called "bogus pipeline." Test-takers are told that the test can detect any attempt to lie, causing them to answer the questions more honestly than they would otherwise. The MMPI does detect inconsistent and unrealistically extreme answers, but it's not quite as foolproof as claimed above.

http://en.wikipedia.org/wiki/Bogus_pipeline


Here are some:

http://www.physicschick.com/pole/2012xxxx/psych.txt

Googling those will take you deeper down the rabbit hole.


Okay, that is the MMPI. That is not evidence that any test other than the MMPI cannot be gamed. The MMPI (and I disagree with it in so many ways) was researched and tested for decades, and perhaps it can detect lying. Perhaps. I don't buy that any lesser test cannot be easily gamed, absent a heck of a lot of experimental evidence for that specific test.



