Hacker News

Can security people use LLMs in their job? Unlike with building things, all mainstream LLMs seem to outright refuse to provide that kind of information.

This one might be a glitch, but glitch or not, I find it extremely disturbing that those people are trying to control information. I guess we will get capable LLMs from the free world (if any remains) at some point.



It is nuts to me that people defend this. Imagine reading a book on C# and the chapter on low level memory techniques just said "I won't tell you how to do this because it's dangerous."

"It's better and safer for people to remain ignorant" is a terrible take, and surprising to see on HN.


> "It's better and safer for people to remain ignorant" is a terrible take, and surprising to see on HN.

No one is saying that - that's your reframing of it to suit your point.

AI isn't the canonical source of information, and nothing is being censored here. AI is asked to "do work" (in the loosest sense) that any knowledgeable person is perfectly capable of doing themselves, using canonical sources. If they learn. If anything this is encouraging people not to remain ignorant.

The reverse of this is ignorant people copying & pasting insecure code into production applications without ever learning the hazards & best practices.


I get what you're saying, and you're right - I definitely set up a straw man there. That said, employing a bit of imagination it's easy to see how the increasing number of safety rails on AI combined with a cultural shift away from traditional methods of education and towards leaning on them could essentially kneecap a generation of engineers.

Limit the scope of available knowledge and you limit the scope of available thought, right? Being more generous, it looks like a common refrain is more like "you can use a different tool" or "nobody is stopping you from reading a book". And of course, yes this is true. But it's about the broader cultural change. People are going to gravitate to the simplest solution, and that is going to be the huge models provided by companies like Google. My argument is that these tools should guide people towards education, not away from it.

We don't want the "always copy paste" scenario surely. We want the model to guide people towards becoming stronger engineers, not weaker ones.


> We don't want the "always copy paste" scenario surely. We want the model to guide people towards becoming stronger engineers, not weaker ones.

I don't think that these kinds of safety rails help or work toward this model you're suggesting (which is a great & worthy model), and I'm far more pessimistic about the feasibility of such a model - it's becoming increasingly clear to me that the "always copy paste" scenario is the default whether we like it or not, in which case I do think the safety rails have a very significant net benefit.

On the more optimistic side, while I think AI will always serve a primarily "just do it for me I don't want to think" use-case, I also think people deeply want to & always will learn (just not via AI). So I don't personally see either AI nor any safety rails around it ever impacting that significantly.


I can't say I disagree with anything here, it is well reasoned. I do have a knee-jerk reaction when I see any outright refusal to provide known information. I see this kind of thing as a sort of war of attrition, whereby 10 years down the line the pool of knowledgeable engineers on the topics that are banned by those-that-be dwindles to nearly nothing, and the concentration of them moves to the organisations that can begin to gatekeep the knowledge.


I tend to agree. As time moves on, the good books and stuff will stop being written and will slowly get very outdated as information is reorganized.

When that happens, AI may be the only way for many people to learn some information.


AI is looking to become the basic interface to everything. Everything you do will have AI between you and whatever you are consuming or producing, whether you want it or not.

I don't know why anyone would pretend not to recognize the importance of that or attempt to downplay it.


I find it equally ridiculous for the 'stop treating everyone like children' crowd to pretend that removing all restraints preventing things like people asking how to make explosives, or getting AI to write paedophilic fanfic, or making CSA imagery, is a solution either.

ie, both sides have points, and there's no simple solution :(


And what’s the point of information control?

I’m not a law-abiding citizen just because I don’t know how to commit crimes, and I don’t believe anyone is.

It’s not lack of knowledge that’s stopping me from doing bad things, and I don’t think people are all trying to do something bad but can’t because they don’t know how.

This information-control BS probably has nothing to do with security.


What is simple is that the bad use of knowledge does not supersede or even equal the use of knowledge in general.

There are two things to consider:

The only answer to bad people with power is a greater number of good people with power. Luckily, while most people are not saints, most people are more good than bad. When everyone is empowered equally, there can be no asshole CEO, warlord, mob boss, garden-variety murderer, etc.

But even if that weren't true and the bad outnumbered the good, it still wouldn't change anything. In that world there is even LESS justification for denying lifesaving, empowering knowledge to the innocent. You know who would seek to do that? The bad guys, not the good guys. Criminals and tyrants and ordinary petty authoritarians universally love the idea of controlling information. It's not good company to side yourself with.


Wikipedia has formulas for producing explosives. For example, TATP:

> The most common route for nearly pure TATP is H2O2/acetone/HCl in 1:1:0.25 molar ratios, using 30% hydrogen peroxide.

Why are you uncomfortable with an AI language model that can tell you what you can already find for yourself on Google? Or should we start gating Wikipedia access to accredited chemists?


The companies are obviously afraid of a journalist showing "I asked it for X and got back this horrible/evil/racist answer" - the "AI Safety" experts are capitalizing on that, and everyone else is annoyed that the tool gets more and more crippled.


What’s wrong with paedophilic fanfic?


Nothing, just like there's nothing wrong with racist fanfics. The line should be drawn when someone rapes a child or hangs a black person.


I would say that there's enough wrong with paedophilic fanfic that companies which rent LLMs to the public don't want those LLMs producing it.


You didn’t answer the question.


Your point being?


I want to know what the point is of preventing AI from writing paedophile fiction.


AI generally? I would say there's no point in that, it's an undue restriction on freedom.

AI provided as SaaS by companies? They don't want the bad press.


Also end users complain - I ran an LLM for language learning with millions of users, and we made extra calls to the OpenAI content moderation API because both we and our users wanted it to stay polite and "safe". Most corporate applications have this need.
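The extra moderation pass described above can be sketched roughly like this. The endpoint URL and response shape follow OpenAI's public `/v1/moderations` API; the helper names are my own, and this only builds and interprets the HTTP pieces rather than actually sending a request:

```python
import json

# Hypothetical helpers for a moderation pre-check on user text.
# Endpoint and response shape per OpenAI's documented moderation API;
# function names are illustrative, not from any real codebase.

MODERATION_URL = "https://api.openai.com/v1/moderations"

def build_moderation_request(text, api_key):
    """Assemble the URL, headers, and JSON body for a moderation call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"input": text})
    return MODERATION_URL, headers, body

def is_flagged(response_body):
    """Return True if any moderation result in the response is flagged."""
    data = json.loads(response_body)
    return any(r.get("flagged", False) for r in data.get("results", []))

# A response shaped like the API's documented output:
sample = json.dumps({"results": [{"flagged": False, "categories": {}}]})
print(is_flagged(sample))  # False for a clean message
```

In practice an app like the one described would run this check on each message and substitute a polite refusal when `is_flagged` returns True.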


... Good point. No actual children are harmed. In fact it could help decimate demand for real-life black markets.


That is some big strawman you've built there and jumped to some heck of a conclusion.


I don’t think it is a straw man.

The point is for now, trying to make LLM’s safe in reasonable ways has uncontrolled spillover results.

You can’t (today) have much of the first without some wacky amount of the second.

But today is short, and solving AI with AI (independently trained critics, etc.) plus advances in general AI reasoning will improve the situation.


There's a pretty vast gulf between "unwilling to answer innocent everyday questions" and "unwilling to produce child porn".


Truthfully it’s not unlike working with a security consultant.


Or internal security - who look at your system and say "doing that process that way is insecure, please change it." When you ask how (as you aren't a security expert), they say "not our problem" and don't tell you how to fix it.


Sounds like you have a bad infection of security compliance zombies.

You should employ some actual security experts!


In the security team there were experts, but I suspect the issue was that if they suggested a solution and it did not work, or I implemented it incorrectly, then they would get the blame.


A security consultant tells you best practice, they do the very opposite of not letting you know how things work.



