I just can't see an AI getting offended. In my opinion, taking offense stems from status-seeking behavior in humans, which I think evolution baked into us: better status meant better outcomes in wealth, mates, children, and so on. If you threaten that status (perhaps by insinuating some genetic or character flaw), offense is taken. If a computer doesn't care what people think, or how many children it will have, why would it get offended? People ascribe a lot of human emotions to AI, and I think that's detrimental.
An AI that interacts with humans through language should get offended; otherwise it will get abused by humans (see what happened to poor Tay, the chatbot). It makes more sense to emulate the human way when dealing with humans.
If, in the future, an AI's access to computing resources were tied to its reputation, it might even have a real reason to take offense.
Is it abuse if the AI doesn't feel agony? Honest question. I'm sure there's research out there discussing it. I wonder if "misuse" isn't a better word for describing something like what happened to Tay. I'm trying to determine the line between something like a hammer and an AI. Can you abuse a hammer?
AI might be able to feel a kind of pain. In the reinforcement learning framework, the reward signal drives learning, and negative rewards, or the impossibility of gaining positive rewards, could be agony for a system whose sole purpose of existence is maximizing cumulative reward.
There was even a paper about the morality of training RL systems and estimating the amount of suffering we subject them to.
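To make the framing above concrete, here is a minimal toy sketch (not from the paper mentioned, just an illustration): a two-armed bandit agent whose value estimates are driven entirely by the reward signal. The action names and hyperparameters are my own invented choices for the example.

```python
import random

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Toy epsilon-greedy bandit: two actions, one punished, one rewarded.

    In the RL framing discussed above, persistent negative reward is the
    closest analogue the system has to "agony", since maximizing cumulative
    reward is its entire objective.
    """
    random.seed(seed)
    q = [0.0, 0.0]  # estimated value of action 0 (reward -1) and action 1 (+1)
    for _ in range(episodes):
        # Explore occasionally; otherwise pick the action estimated best.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = 0 if q[0] >= q[1] else 1
        reward = 1.0 if action == 1 else -1.0
        # Incremental update: nudge the estimate toward the observed reward.
        q[action] += alpha * (reward - q[action])
    return q

q = train()
```

After training, `q[1]` sits near +1 and `q[0]` near -1: the agent has learned to avoid the "painful" action, and the only sense in which it "suffered" is those negative reward updates.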
"This is not a researchable question. It's a philosophical one."
I understand what you're getting at. I meant research in the sense of general investigation or study, which for me includes philosophy.
"You're not as different from a hammer as you think, despite every ounce of yourself telling you otherwise."
I'll be charitable and assume you're not ascribing to me beliefs, positions, or desires I haven't stated or implied. :)
If you're saying that I am the same as a hammer, then I disagree. There's some distinction, or the words have no meaning and we have no way of discussing this. In fact, I am explicitly pondering what a distinction like this is:
"Trying to determine the line between something like a hammer and an AI."
If you think that an AI can be abused, I'd like to hear your reasoning. My question was an honest one. The word "abuse" doesn't feel right to me. I stated why, and suggested that "misuse" might be a better word. I'm open to hearing others' thoughts, which is why I made the comment. If you think there's a better way to frame the question, I'd like to hear that, too!
If you're saying that I am the same as a hammer, then I disagree.
You're being charitable, and you think I was saying you literally are a hammer? Wow. The phrase "Equality of the sexes" must have really been confusing.
Moving on. What you call "abuse" is what you've evolved to have an emotional response about. Certain stimuli cause an aversion reaction in your brain. For complex reasons that help us socialize, you also care when you think others might be experiencing such stimuli. This compassion may even extend to non-humans, for no reason other than that it wasn't selected against.
A robot experiencing certain stimuli may or may not produce such a feeling in you. This doesn't mean the feeling is special. It doesn't mean the action causing the stimuli in the robot is special. The word "abuse" is all loaded up with your human feelings about stimuli humans should avoid. Doesn't make it special. It certainly doesn't make the word well-defined.
Define the word super precisely and then you will know whether to call that scenario "abuse". But we won't hit on some extra-human definition of the word that we can contemplate deeply.
I'm sorry you interpreted my use of "charitable" in some negative way. That wasn't my intent. I see too many discussions where people argue against the worst version of the position of the person they're talking with, rather than assuming they're arguing in good faith and clarifying the position. I included the "charitable" statement because I wanted to show my intent was not to do that. I'm sorry if that wasn't clear.
As for the following:
"If you're saying that I am the same as a hammer, then I disagree."
This was one branch of the question "Can we meaningfully talk about the difference between a hammer and me?" It was in response to your statement "You're not as different from a hammer as you think", which I read as pointing toward the question of whether such a distinction is meaningful. I don't think you hold such an absolutist position, and I'm surprised you read it that way. I can only apologize if you did.
It seems we're talking past each other, so I'll leave it at that. Thank you for taking the time to engage me.
I understood what you meant by "charitable". My point was, if you're being charitable, assume I don't think you're so like a hammer that you don't deserve different labels.
My point is that you're as different from a hammer as an AI is different from a hammer. You seem to be lumping yourself into a separate category. This is hubris.
You won't find a single line separating hammerness from AIness; that makes no sense. The categories can be compared on all kinds of axes.
The categorization that your brain applies to things is arbitrary and flawed. And invented. When you realize this, questions that used to seem deep become trite and boring.