
How am I supposed to use your link to reach this conclusion, despite the pointers I gave?

(also note that "ridiculous" is quite strong and disrespectful)




You can't tell me you weren't convinced by the link to a wikipedia page. Not even to an argument on the wikipedia page, but just to the whole-ass wikipedia page.

https://en.wikipedia.org/wiki/Unicorn


:-)

I shall start sharing entire domains as answers to anything. With a bit of luck, something in there will address the concern being discussed.


The link demonstrates that there is a well-reproduced phenomenon in real science whereby, e.g., test scores in various academic subjects correlate positively with each other, and that this can be explained by a common psychometric factor that is reasonable to refer to as "intelligence". The IQ or "intelligence quotient" is an attempt to quantify that which is known to exist, and it's actually one of the best understood ideas in the science of the brain.

Additional viewing that largely covers my points below: https://www.youtube.com/watch?v=jSo5v5t4OQM

You started off saying:

> There's a lot of pseudoscience around IQ too, probably starting with the very concept of IQ for measuring "intelligence" (for which we would need a strong definition anyway)

The point is that we do, in fact, have all the necessary scientific research to argue that the concept of "intelligence" exists - i.e., that we can identify a single-factor quantity that can be fairly described with a single number - and that anything calling itself IQ is definitionally a measurement of that single quantity.

In particular: problem-solving capability is a real thing, and some people very obviously have more of it than others. Also, we notably don't have data to support more than one factor anywhere near as strong as Spearman's g. (That is to say: we see correlations between academic performance in all subjects - rather than strong positive correlations within certain groups but weak or negative correlations between those groups).
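
To make that concrete, here's a toy simulation (entirely made-up numbers, just to illustrate the shape of the argument): if a single latent quantity drives performance on several tests, you get the all-positive correlation matrix Spearman observed, and one factor accounts for most of the shared variance.

    # Toy illustration, not real psychometric data: simulate test scores
    # that share one underlying factor, then inspect the correlations.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    g = rng.normal(size=n)                      # hypothetical common factor
    loadings = np.array([0.8, 0.7, 0.6, 0.75])  # how strongly each made-up "subject" loads on g
    noise = rng.normal(size=(n, 4)) * np.sqrt(1 - loadings**2)
    scores = g[:, None] * loadings + noise      # e.g. math, verbal, spatial, memory

    corr = np.corrcoef(scores, rowvar=False)
    print(np.round(corr, 2))                    # every off-diagonal entry is positive

    # The first eigenvalue dominates, i.e. one factor explains most of the shared variance
    eigvals = np.linalg.eigvalsh(corr)[::-1]
    print(np.round(eigvals / eigvals.sum(), 2))

Real data is messier, obviously, but that "one dominant factor" structure is the empirical content behind g.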

The fact that specific IQ tests might fail to actually measure intelligence, or might measure it inaccurately, is beside the point. The fact that an individual's capability to express intelligence might vary on a day-to-day basis, or for other immediate environmental reasons (stress, caffeine, ...) is also beside the point. Any correlation that any researcher might draw between measured IQ results and any other demographic measurement, mutable or immutable, is beside the point, too.

I have routinely seen people who attack the theory of intelligence engage in pseudoscience of their own. A common move is to invent strange alternate "intelligences", like "emotional intelligence" (apparently meaning some combination of empathy and social skills) and "physical intelligence" (apparently meaning some combination of dexterity and proprioception), so as to "debunk" the idea of intelligence being single-factor. That isn't even what the theory of Spearman's g asserts; we're only saying that there is a roughly measurable quantity that strongly and positively correlates with academic success. This move is, of course, utterly absurd, and it further comes across as an attempt to dunk on "nerds" as "not as smart as they think they are", etc. It only makes sense if you redefine "intelligence" to mean something fundamentally incompatible with the accepted, well-understood meaning.

And I, personally, have been called a racist elsewhere on the Internet before, simply for pointing these things out, when I had said nothing whatsoever about race. And I've seen it happen to others, too.

It's infuriating, and it's transparently political.

If I "disrespect" people by dismissing claims like "IQ is pseudoscience" out of hand, I will continue to do so, because I have all the evidence I need that the alternative would lead to far greater societal harm.


To be clear, I'm not arguing that the notion of intelligence doesn't exist, although I'm not sure we really know how to define it correctly.

Emotional intelligence seems like pseudoscience. I hadn't heard about physical intelligence, but that seems dubious too.

The "IQ is pseudoscience" claim is possibly a bit strong. Now, whether it is a good measure of intelligence is being questioned, and one of the reason is that it has cultural biases and is strongly biased towards academia. It comes from a measure that attempted to assess the mental age of someone (a bit dubious on its own), and you can also train for IQ tests, that alone is a bit suspicious for a good measure of intelligence.

Problem-solving is a real capability, but doesn't IQ mostly attempt to measure pattern recognition? And isn't problem-solving only a part of intelligence? It seems IQ is quite focused on specific aspects of intelligence, and might not even be measuring them very well.

(thanks for taking the time)


> (thanks for taking the time)

You're quite welcome. The initial response was because of how weary I am of seeing similar arguments used disingenuously. It seems like you've put more thought than average behind your words.

> Now, whether [IQ] is a good measure of intelligence is being questioned, and one of the reasons is that it has cultural biases and is strongly biased towards academia.

This is not a critique of "IQ", which is a name for "a number that says how intelligent a person is". The only actual idea encoded in the concept of "IQ" that persists to the modern day is "it makes logical sense to expect to be able to state such a number, and furthermore a meaningful idea can be encapsulated by one such number". This isn't controversial among actual researchers in the field.

Historically, there was also an idea that the number could be conceptualized as a ratio (hence "intelligence quotient") of a purported "mental age" to a child's actual age (since initially there was only interest in assessing the intelligence of children). Of course, that breaks down for all sorts of reasons - notably, the fact that people don't continue improving at problem-solving throughout their lives, certainly not well into adulthood, and certainly not linearly - but the concept was subsequently refined to address that. Nowadays, the number is simply a measure of some raw capability, normalized so that the general population fits a bell curve with mean = 100 and SD = 15 or so.
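
A crude sketch of that normalization step, with made-up raw scores (real norming uses large, age-stratified samples and a percentile-to-normal mapping, but the arithmetic idea is the same):

    # Illustrative only: convert raw test scores to a "deviation IQ" scale
    # with mean 100 and SD 15, relative to a norming sample.
    import numpy as np

    norming_sample = np.array([31, 42, 55, 47, 60, 38, 51])  # hypothetical raw scores
    mu, sigma = norming_sample.mean(), norming_sample.std()

    def to_iq(raw_score):
        return 100 + 15 * (raw_score - mu) / sigma

    print(round(to_iq(55)))   # above the sample mean -> above 100
    print(round(to_iq(40)))   # below the sample mean -> below 100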

Cultural biases and biases towards academia are a potential issue with specific tests used to measure IQ. However, it isn't clear that the demands of those who seek to eliminate those supposed biases could actually ever be met in principle. It's also been reported that attempts to use less "biased" tests (e.g. https://en.wikipedia.org/wiki/Raven%27s_Progressive_Matrices) don't actually make whatever problematic disparities in observed results go away - in fact, they might widen.

Aside from which - if a culture demonstrably doesn't value the traits that we naturally associate with intelligence or expect people to demonstrate the associated skills as a part of daily life, why wouldn't it be correct to say that such a culture makes its adherents less intelligent? Why would it be "biased" to observe such a culture actually having such an entirely predictable effect on people?

> Problem-solving is a real capability, but doesn't IQ mostly attempt to measure pattern recognition? And isn't problem-solving only a part of intelligence?

There's some measure of streetlight effect here, sure; but problem-solving is a pretty darned big component of intelligence IMO. And if your objection to IQ tests is "people can study for the test and get a score that overstates their actual capabilities" then of course it's important to include some metric of problem-solving. (Another big component that you can't "study", but might be able to train over a long period of time, is working memory. For example, one classic IQ test component has the subject listen to a sequence of base-ten digits, then attempt to recite them in reverse order. See e.g. https://en.wikipedia.org/wiki/Memory_span#Digit-span .)
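
For flavor, here's a toy version of that backward digit-span task (my own throwaway script, not an actual WAIS item):

    # Toy backward digit-span drill: read a digit sequence, ask for it in
    # reverse order, and lengthen the sequence on each correct answer.
    import random

    def backward_digit_span(max_len=9):
        length = 3
        while length <= max_len:
            digits = [str(random.randint(0, 9)) for _ in range(length)]
            print("Digits:", " ".join(digits))
            answer = input("Type them in reverse order, no spaces: ").strip()
            if answer != "".join(reversed(digits)):
                return length - 1   # longest length recalled correctly
            length += 1
        return max_len

    print("Your backward span:", backward_digit_span())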

Modern tests such as WAIS (https://en.wikipedia.org/wiki/Wechsler_Adult_Intelligence_Sc...) are kept updated, and include a battery of evaluations on a variety of strongly g-loaded tasks. And yes, they do admit the notion that correlations between these tasks are not perfect - but they have also settled on a consensus that a single number can reasonably encompass the results in general.



