Makes sense. Everyone here has their pride and identity tied to their ability to code. HN likes to upvote articles related to IQ because coding correlates with IQ and HNers like to think they are smart.
AI is of course a direct attack on the average HNer's identity. The response you see is like attacking a Christian on his religion.
The pattern of defense is typical. When someone’s identity gets attacked they need to defend their identity. But their defense also needs to seem rational to themselves. So they begin scaffolding a construct of arguments that in the end support their identity. They take the worst aspects of AI and form a thesis around it. And that becomes the basis of sort of building a moat around their old identity as an elite programmer genius.
A telltale sign that you or someone else is doing this: you're talking about AI and someone comments about how they aren't afraid of AI taking over their own job, when that wasn't even the topic.
If you say something like "AI is going to lessen the demand for software engineering jobs," the typical thing you hear is "I'm not afraid of losing my job," and I'm like, bro, I'm not talking about your job specifically. I'm not talking about you or your fear of losing a job; I'm just talking about the economics of the job market. This is how you know it's an identity thing more than a technical topic.
It's about the flood of poorly made software lowering the average software quality in the industry, which by directly impacting users will lead to more exploits, data leaks, and bad user experiences, which will in turn increase user distrust and frustration, until it inevitably leads to an industry-wide crash similar to the ones in 1983 and 2000, but with far greater consequences.
It's also related to the flood of spam and disinformation across all our communication channels, unlike anything we've ever seen before.
Few people doubt the capability of this technology, or many of its positive applications. The arguments are mainly about AI companies ignoring the above issues and downplaying the technology's limitations while artificially inflating its abilities, starting with fabricating and optimizing for benchmark results.
The fact we're now promoting and discussing fucking Twitter threads is absurd.
If you see a post that ought to have been moderated but hasn't been, the likeliest explanation is that we didn't see it. You can help by flagging it or emailing us at hn@ycombinator.com.
The other thing, though, is that views differ about how such comments should be classified. What seems like an outrageous "general attack" to one reader (especially if you feel passionately about a topic) may not at all land that way with the rest of the community. For this reason, it's hard to generalize; I'd need to see specific links.
Your anger about his comment suggests that it actually is about pride and identity. I simply don't buy that most people here argue against AI because they're worried about software quality and a degraded user experience. It's the same argument the American Medical Association made so it could gatekeep physician jobs and limit openings. We've had developers working on adtech directly intended to reduce the quality of the user experience for decades now.
> The fact we're now promoting and discussing fucking Twitter threads is absurd.
The ML community is really big on Twitter. I'm honestly quite surprised that you're angry or surprised at this. That means either you're very disconnected from the actual ML community (which is fine, of course, but then maybe you should hold your opinions a bit less tightly), or you're ideologically against Twitter, which brings me to:
> It's not about pride and identity, you dingus.
Maybe it is? There's a very-online-tech-person identity that I'm familiar with that hates Twitter because they think that Twitter's short post length and other cultural factors on the site contributed to bad discourse quality. I used to buy it, but I've stopped because HN and Reddit are equally filled with terrible comments that generate more heat than light.
FWIW, a bunch of ML researchers tried to switch to Bluesky but got so much hate, including death threats, sent at them that they all noped back to Twitter. That's the other identity portion of it: post-Musk, there's a set of folks who hate Twitter ideologically and have built an identity around it. Unfortunately this identity is also anti-AI enough that it's willing to act with toxicity toward ML researchers. Tech cynicism and anti-capitalism have some tie-ins with this as well.
So IMO there is an identity aspect to this. It might not be the "true hacker" identity that the GP talks about, but I do very much think that this pro-vs-anti-AI fight has turned into another culture war axis on HN that has more to do with your identity or tribe than with any reasoned arguments.