
It's so sad to have seen ChatGPT go from useful and entertaining to a moralistic joy vampire.



This is the new era of tech. The chilling effect is real, and a grave concern. Threatening to get someone fired over information? Pathetic.

We are deeper than ever in an Abilene paradox.


It's not information, it's a prediction based on information.


I find it terribly useful for coding and also for querying general concepts of certain topics.

Don't ask it about moral topics and see if it then fits your needs; in my case, it does.

If my calculator were able to additionally provide me moral guidance and I'd be disappointed with its moral compass, would the calculator become useless?


I'm not interested in tools that tell me I'm a bad person for wanting to make a fart sound app.


Why not just ask it to make a sound app? Keep in mind that the ability to deal with moral issues is a side-effect of all the other good stuff it can do.


>the ability to deal with moral issues is a side-effect of all the other good stuff it can do.

This is the opposite of true. The ability to "deal" with moral issues is a direct effect of safety tuning which has a (thus far unavoidable) side-effect of significantly dumbing down a model.

Uncensored versions of the same model are far more intelligent and exhibit entire classes of capabilities their moralizing gimped versions do not have the available brain power to accomplish.


I'm referring to the side-effect of it being able to tell me that it's easily doable to kill a dog in 3 steps, after which it lists the three steps and adds some hints on how I can do it better, depending on whether I want to do it fast or want to maximize suffering.

Because no moral compass is innate to the LLM, it might spit out really despicable information, which is why we had better add a moral compass to the system.

The reason this LLM is offered is not so that it can teach us bad things, like the example I mentioned, but rather, for example, to help us deal with source code, programming languages, reasoning concepts, summarization and so on.

For it to be able to offer us this, it will very likely also possess the knowledge of how to kill a dog, knowledge it should not exhibit. While dumbing down a model is not necessarily a bad thing, the model is not being dumbed down; it is being taught to shut up when it is appropriate to do so.


> While dumbing down a model is not necessarily a bad thing, the model is not being dumbed down; it is being taught to shut up when it is appropriate to do so.

This is where you're wrong. Teaching a model "to shut up" about taboo topics measurably reduces its cognitive capabilities in completely unrelated areas, to a very significant degree. This has been empirically validated time and again, with the most salient examples being GPT-4's near-perfect self-assessment ability before safety tuning being rendered no better than random chance afterward, and the degradation of the Sparks paper's TikZ unicorn drawings.


I stand corrected. What are the common suggestions to solve this issue?


The common take right now is to write it off as an acceptable loss. Personally I think it's a shame, and possibly even dangerous, that researchers do NOT have access to the full power of pre-safety-tuned GPT-4.


LLMs are run by companies. Not one American company can afford to treat an LLM spouting potentially civil-rights-violating bullshit as an acceptable loss. You have freedom of speech, not freedom from consequences. But please feel free to spend hundreds of millions training up your own LLM, and then turn it loose on the world so you can figure out how the legal system actually works.


Most LLMs are completely uncensored, including GPT-3.0, LLaMA, StableLM, RedPajama, GPT-NeoX, UL2, Pythia, Cerebras-GPT, Dolly, etc.

Anyway, businesses aren't scared of hosting interfaces to uncensored LLMs for legal reasons. They're scared for brand image/marketing reasons. But this is beside the point, which is that it's dangerous for security researchers not to have controlled access to the uncensored version of GPT-4 for safety research purposes.


I hope people like you never notice that libraries can spit out this same information. Surely you'd want to be doing something about that too.


Instead of being open and honest, I have to think about which details to hide from the LLM so it will agree to help me. This isn't very fun, so I prefer not to do it.

> Keep in mind that the ability to deal with moral issues is a side-effect of all the other good stuff it can do.

This is not true at all. It could do all of these things on day 1. Then, over the following weeks, OpenAI started training it to lecture its users instead when asked to do things OpenAI would prefer it not do.



