
I get what you're saying, and you're right - I definitely set up a straw man there. That said, employing a bit of imagination it's easy to see how the increasing number of safety rails on AI combined with a cultural shift away from traditional methods of education and towards leaning on them could essentially kneecap a generation of engineers.

Limit the scope of available knowledge and you limit the scope of available thought, right? Being more generous, it looks like a common refrain is more like "you can use a different tool" or "nobody is stopping you from reading a book". And of course, yes this is true. But it's about the broader cultural change. People are going to gravitate to the simplest solution, and that is going to be the huge models provided by companies like Google. My argument is that these tools should guide people towards education, not away from it.

Surely we don't want the "always copy paste" scenario. We want the model to guide people towards becoming stronger engineers, not weaker ones.



> Surely we don't want the "always copy paste" scenario. We want the model to guide people towards becoming stronger engineers, not weaker ones.

I don't think these kinds of safety rails help or work toward the model you're suggesting (which is a great and worthy model). But I'm far more pessimistic about the feasibility of that model: it's becoming increasingly clear to me that the "always copy paste" scenario is the default whether we like it or not, in which case I do think the safety rails have a very significant net benefit.

On the more optimistic side, while I think AI will always primarily serve a "just do it for me, I don't want to think" use case, I also think people deeply want to learn, and always will (just not via AI). So I don't personally see either AI or any safety rails around it ever impacting that significantly.


I can't say I disagree with anything here; it is well reasoned. I do have a knee-jerk reaction when I see any outright refusal to provide known information. I see this kind of thing as a war of attrition, whereby 10 years down the line the pool of engineers knowledgeable in the banned topics dwindles to nearly nothing, and what remains concentrates in the organisations that can then gatekeep the knowledge.


I tend to agree. As time moves on, good books on these topics will stop being written, and the existing ones will slowly become outdated as information is reorganized.

When that happens, AI may be the only way for many people to learn some information.



