
I didn’t dismiss anything. I didn’t say anything was alarmist. Please don’t put words in my mouth.

You didn’t say anything about societal problems. You wondered if the growth will ever stop, and I tried my best in good faith to give the reasons why I believe that it will.

If the question is “when will the models be powerful enough to cause societal problems”, then that is a completely different question and I think the answer to it is clearly “they already are”. (But not because they are superintelligent or anything close to AGI.)




> I didn’t dismiss anything. I didn’t say anything was alarmist. Please don’t put words in my mouth.

https://news.ycombinator.com/item?id=35276186

"We’re still decades from AGI in my opinion, and the Chicken Little types ought to pace themselves, is all I’m saying. "


Oh—

I see now that you were referring to a comment I made in reply to someone else.

Yes, that was something that I said and I stand by it.

I do not see any reason to be concerned about AI as an existential threat at the present moment.

I have explained in my previous comment why I feel this is the case; if you feel that this view is recklessly dismissive and wish to change my mind, then I invite you to do the same.

Edit: I’m sorry for any excessive crispness or combativeness in my tone; I can see now from your post history that you are likely arguing in good faith.

I have grown weary of arguing against concern trolls lately on this topic, so I may have misjudged your initial comment based on its brevity. Sorry about that.


First off: labeling those you don't agree with as concern trolls is pretty rude, but since HN etiquette requires reading a comment in its best light, I take it you meant trolls somewhere else rather than on HN. The number of concern trolls here is vanishingly low; most people on HN who are concerned about something are concerned for good reasons, even if those reasons are not readily apparent to you without further engagement.

As for my own concerns: we have a bit of a problem with this AI thing, and whether or not it is AGI is immaterial: I judge a technology by the effect it is having. We have not yet made a dent in dealing with the weaponization of social media, and we are only beginning to deal with the mobile revolution and the internet we now take for granted. Given that it took us a good 30 years to get to this point, and that the current crop of AI tools has been on the scene for a little over two years, it looks as though there is still a very long way to go before we have internalized the changes this technology brings.

And it isn't exactly standing still either; it's a fast-moving target that redefines what it is and isn't, and what it can and cannot do, in the space of months. We are now well into what I would lightly characterize as an AI arms race, and during arms races the rate of change can go through the roof compared to what it was before. You only have to look at nature to see many such examples.

And already ChatGPT and similar tools from other vendors are changing the landscape in visible ways. A technology doesn't have to be an existential threat to be capable of profound and possibly negative social impact. And whether it is AGI or not is also not all that important.

Those cautioning some pacing of the release of these tools are not doing so because they are concern trolls, but because they look a little further than 'hey, cool new tech' to the effect this can have on our societies, some of which are already precariously balanced and have a whole pile of other things to deal with: not least the fallout of COVID (which we definitely have not yet dealt with), an energy crisis, and a war. And that's before we get into climate change.

Releasing a tool that could easily be weaponized by either side (or both) in such an environment could well have repercussions; some of those we might be able to foresee, and foreseeing them would help us decide whether or not the release is going to be beneficial. Like all tools, this one is dual use: it may help or it may well hinder. Initially, social media was a nice way to reacquaint with family and friends, some of whom may have been lost or out of touch for ages. These days it is a weapon for mass manipulation on a scale we have not seen before.

Something similar - or far worse - could easily happen with these new AI tools, and personally I would like to have the previous crisis settled before taking on the next. There is a limit to how much of this stuff we can deal with at the same time, and - again, just speaking for myself here, though others may feel similarly - I am rapidly approaching the limit of how much of all this I can still comprehend, internalize, and deal with while staying on top of it all. It is, in a single word, overwhelming, and those who want to pretend it is all inconsequential are - in my opinion, once more - not thinking about it hard enough.



