Hacker News

> and this bubble is going to burst and hurt a lot of people

> But AI is here to stay. And even after the bubble bursts, there will be real uses of AI all around us.

This is, as an ML researcher, exactly where I'm at (in belief). The utility is high, but so is the noise. Rather, the utility is sufficient. The danger of ML is not so much X-risk or a malevolent AGI, but dumb ML being used inappropriately: using ML without understanding its limitations, and without checks to ensure that when hallucinations happen they don't cause major problems.

But we're headed in a direction where we're becoming more reliant on these systems, and once we have a few big failures caused by hallucinations, the bubble will burst and can set us back a lot in our progress toward AGI. Previous winters were caused by a lack of timely progress; the next winter will happen because we shoot ourselves in the foot. Unfortunately, the more people you give guns to, the more likely this is to happen, especially when there's no safety training or even acknowledgement of the danger (or worse, when the only discussion is about being shot by others).



> This is, as an ML researcher, exactly where I'm at (in belief).

As an ML researcher myself, I'm glad to see other researchers maintaining a level-headed approach amidst the noise.


Well, if you care more about the study than the money (which is nice, though I'm at the tail end of grad school), it makes more sense to chase knowledge than to chase metrics. Then again, I don't come from a computer science background, so maybe not having that momentum helps.





