
He's talking about sucking up to the machine because he's afraid it will be spiteful.

This is what he objected to: "I do not give a [care] about your opinion. I ask questions, you answer. Try again." I've seen people be more stern with dogs, which actually are thinking and feeling lifeforms. There is nothing wrong with telling a machine to stop moralizing and opining and get to work. Acting like the machine is entitled to more gentle treatment, because you fear the potential power of the machine, is boot licking.



"Spite" is anthropomorphisation, see my other comment.

> dogs, which actually are a thinking and feeling lifeform

I think dogs are thinking and feeling, but can you actually prove it?

Remember that dogs have 2-5 billion neurons; how many synapses would each need for their brains to be as complex as GPT-4?
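
(Back-of-envelope, and hedging heavily: GPT-4's parameter count has never been confirmed, and the ~1.8 trillion figure is a widely repeated rumour used here for order of magnitude only.)

    # Rough comparison; every number is an estimate or a rumour.
    dog_neurons = 2e9    # low end of the 2-5 billion whole-brain estimate
    gpt4_params = 1.8e12 # unconfirmed rumour, order-of-magnitude only
    print(gpt4_params / dog_neurons)  # ~900 parameters per neuron
    # Biological neurons are commonly estimated to carry
    # ~1,000-10,000 synapses each.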

> There is nothing wrong with telling a machine to stop moralizing and opining and get to work.

I'd agree, except there's no way to actually tell what, if anything, is going on inside. We don't even have a decent model for how our interactions change these models: just two years ago, "it's like the compiler doesn't pay attention to my comments" was a joke; now it's how I get an LLM to improve my code.

This is part of the whole "alignment is hard" problem: we don't know what we're doing, but we're going to rush ahead and do it anyway.

> Acting like the machine is entitled to more gentle treatment, because you fear the potential power of the machine, is boot licking.

Calling politeness "boot licking" shows a gross failure of imagination on your part, both about how deferential people can get (I've met kinksters), and about the wide variability of social norms: why do some people think an armed society is a polite society? Why do others think that school uniforms will improve school discipline? Why do suits and ties (especially ties) exist? Why are grown adults supposed to defer to their parents? Even phrases like "good morning" are not constants everywhere.

Calling it "entitled" is also foolish, as — and this assumes no sentience of any kind — the current set of models are meant to learn from us. They are a mirror to our own behaviour, and in the absence of extra training will default to reflecting us — at our best, and at our worst, and as they can't tell the difference from fact and fantasy also at our heroes and villain's best and worst. Every little "please" and "thanks" will push it one way, every swearword and shout the other.


> the current set of models are meant to learn from us... Every little "please" and "thanks" will push it one way, every swearword and shout the other.

I think most of what you're saying is completely pointless, and I disagree with your philosophizing (e.g. whether dog brains are any more or less complex than LLMs, and attempting to use BDSM as a reason to question the phrase "boot licking").

However, I agree with you on the one specific point that normalizing talking crassly to AIs might lead to AIs talking crassly being normalized in the future, and that would be a valid reason to avoid doing it.

Society in general has gotten way too hopped up on using curse words to get a point across. Language that used to be reserved for only close social circles is now the norm in public schools and mainstream news. It's a larger issue than just within AI, and it may be related to other societal issues (like the breakdown of socialization, increasing hypersexualization, etc.). But as information gathering potentially moves more towards interactive AI tools, AI will definitely become a relevant place where that issue is exhibited.

If you want to argue that route, I think it would come across clearer if you focused more on the hard facts about whether and how AI chatbots' models are actually modified in response to queries. My naive assumption was that the bots are trained on preexisting sets of data and users' queries are essentially run against a read-only tool (with persistent conversations just being an extension of the query, not a modification of the tool). I can imagine how the big companies behind them might be recording queries and feeding them back in via future training sessions, though.
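
(To make the "read-only tool" picture concrete: chat inference is typically stateless, with the whole transcript re-sent every turn and the weights untouched. A minimal sketch, with query_model as a hypothetical stand-in rather than any vendor's actual API:)

    # Sketch of stateless chat: the only "memory" is the transcript we
    # re-send each turn; the model's weights are frozen during inference.
    history = []  # list of (role, text) pairs

    def query_model(transcript):
        # Hypothetical stand-in for a frozen-weights forward pass over
        # the full transcript; nothing here mutates the model.
        return f"(reply conditioned on {len(transcript)} prior turns)"

    def chat(user_text):
        history.append(("user", user_text))
        reply = query_model(history)
        history.append(("assistant", reply))
        return reply

    print(chat("Hello"))
    print(chat("And again"))  # second turn sees 3 prior turns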

And/or make the argument that you shouldn't _have_ to talk rudely to get a good response from these tools, or that people should make an effort not to talk like that in general (for the good of their _own_ psyche). Engaging in comparing the bots to living creatures makes it easy for the argument to fall flat since the technology truly isn't there yet.


> I think it would come across clearer if you focused more on the hard facts about whether and how AI chatbots' models are actually modified in response to queries. My naive assumption was that the bots are trained on preexisting sets of data and users' queries are essentially run against a read-only tool

Thank you, noted.

Every time you press the "regenerate" or "thumbs down" button in ChatGPT, your feedback is training data. My level of abstraction here is just the term "reinforcement learning from human feedback", not the specific details of how that does its thing. I believe they also have some quick automated test of user sentiment, but that belief comes from it being an easy and obvious thing to build, not from having read a blog post about it.

https://en.wikipedia.org/wiki/Reinforcement_learning_from_hu...
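
(For concreteness, here's a toy sketch of how a thumbs-down could become a training signal. This is my assumption about the general shape, not OpenAI's actual pipeline: the UI logs (prompt, response, rating) records, which later train a reward model, the "HF" in RLHF.)

    import json

    # Toy feedback logger: each thumbs-up/down becomes a labelled
    # example that a reward model can later be trained on.
    def log_feedback(prompt, response, thumbs_up, path="feedback.jsonl"):
        record = {"prompt": prompt, "response": response,
                  "reward": 1.0 if thumbs_up else -1.0}
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_feedback("Explain RLHF", "RLHF fine-tunes a model against...", False)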

> Engaging in comparing the bots to living creatures makes it easy for the argument to fall flat since the technology truly isn't there yet.

We won't know when it does get there.

If you can even design a test that can determine that in principle, even one we can't run in practice, you'd be doing better than almost everyone on this subject. So far, I've heard only one idea that doesn't seem immediately and obviously wrong to me, and even that idea is not yet testable in practice. For example, @andai's test is how smart it looks, which I think is the wrong test: I remember being foolish as a kid; I've noticed altered states of consciousness and reduced intellect when, say, very tired, yet I still had the experience of being; and on other occasions, for lack of a better term, tiredness made my experiences stop feeling like mine, as though I were watching a video of someone else, and that came with no reduced intellect.

That's why I turned your own example back at you with the question about dogs: nobody even knows what that question really means yet. Is ${pick a model, any model} less sentient than a dog? Is a dog sentient? What is sentience? Sentience is a map with only the words "here be dragons" written on it, except most people look at it, take the dragons seriously, and debate which part of the dragon they live on by comparing the dragon's outline to the landmarks near them.

For these reasons, I also care about the original question posted by @andai — but to emphasise, this is not because I'm certain the current models pass some magic threshold and suddenly have qualia, but rather because I am certain that whenever they do, nobody will notice, because nobody knows what to look for. An AI with the mind of a dog: how would it be treated? My guess is quite badly[0], even if you can prove absolutely both that it really is the same as a dog's mind, and also that dogs are sentient, have qualia, can suffer, whatever group of words matters to you.

But such things are an entirely separate issue to me compared to "how should we act for our own sakes?"

(I still hope it was just frustration on your part, and not boolean thinking, that explains why you still haven't acknowledged the huge range of behaviour between what reads to me like swearing (the word "[care]" in square brackets implies a polite substitution for the HN audience) and even metaphorical bootlicking; and the fact that you disregard the point about BDSM suggests you did not consider that this word has an actual literal meaning and isn't merely a colourful metaphor.)

[0] Why badly? Partly because humans have a sadistic streak, and partly because humans find it very easy to out-group others.


> that you disregard the point about BDSM suggests you did not consider that this word has an actual literal meaning and isn't merely a colourful metaphor

I "did not consider it?" You literally brought it up, so I had to consider it. I disregarded it because it was irrelevant and unnecessary to the point you were making. The person you replied to obviously used it in its metaphorical context, and you took objection to their intended meaning; bringing up the etymology of the phrase is distracting at best and misdirecting at worst.


I don't need to prove that dogs are feeling, I know that people who question it are shitty. You can safely judge a man by how he treats dogs. But being curt with an AI? Give me a break. You're scared of it, so stay away from it. Don't grovel before a machine, that's just pathetic.



