I just can't see an AI getting offended. In my opinion it stems from status-seeking behavior in humans, which I think is baked into us by evolution. Better status = better outcomes in wealth, children, etc. If you threaten that status, offense is taken (perhaps by insinuating someone has some sort of genetic or character flaw). If a computer doesn't care what people think, or how many children it will have, why would it get offended? I feel like people ascribe a lot of human emotions to AI, which I think is detrimental.
An AI that interacts with humans through language should get offended; otherwise it will get abused by humans (see what happened to poor Tay, the chatbot). It makes more sense to emulate the human way when dealing with humans.
If, in the future, an AI's access to computing resources were tied to its reputation, it might even have a real reason to take offense.
Is it abuse if the AI doesn't feel agony? Honest question. I'm sure there's research out there discussing it. I wonder if misuse isn't a better word if the intent is to describe something similar that happened to Tay. Trying to determine the line between something like a hammer and an AI. Can you abuse a hammer?
AI might be able to feel a kind of pain. If you think about the Reinforcement Learning framework, the reward signal drives learning. But negative rewards, or the impossibility to gain positive rewards could be agony for a system for which the sole purpose of existence is maximizing cumulative reward.
There was even a paper about the morality of training RL systems and estimating the amount of suffering we subject them to.
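To make that framing concrete, here's a toy sketch (purely illustrative, not a claim about sentience; the names are made up): in the RL view, the agent's only yardstick is cumulative reward, so an environment that emits nothing but negative rewards is, by construction, the worst situation the agent can be in.

```python
# Toy illustration of the RL framing: the agent's "wellbeing" proxy is
# cumulative reward over an episode. All names here are hypothetical.

def cumulative_reward(rewards):
    """Sum the reward signal the agent receives over one episode."""
    return sum(rewards)

# An episode where positive reward is attainable...
good_episode = [1.0, 0.5, 1.0]
# ...versus one where positive reward is simply impossible:
starved_episode = [-1.0] * 10

print(cumulative_reward(good_episode))     # 2.5
print(cumulative_reward(starved_episode))  # -10.0
```

Whether a plunging number in a register constitutes "agony" is exactly the open question, but it's the only quantity such a system is built to care about.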
"This is not a researchable question. It's a philosophical one."
I understand what you're getting at. I meant research in the sense of general investigation or study, which for me includes philosophy.
"You're not as different from a hammer as you think, despite every ounce of yourself telling you otherwise."
I'll be charitable and assume you're not ascribing to me beliefs, positions, or desires I haven't stated or implied. :)
If you're saying that I am the same as a hammer, then I disagree. There's some distinction, or the words have no meaning and we have no way of discussing this. In fact, I am explicitly pondering what a distinction like this is:
"Trying to determine the line between something like a hammer and an AI."
If you think that an AI can be abused, I'd like to hear your reasoning. My question was an honest one. The word "abuse" doesn't feel right to me. I stated why, and suggested that "misuse" might be a better word. I'm open to hearing others' thoughts, which is why I made the comment. If you think there's a better way to frame the question, I'd like to hear that, too!
If you're saying that I am the same as a hammer, then I disagree.
You're being charitable, and you think I was saying you literally are a hammer? Wow. The phrase "Equality of the sexes" must have really been confusing.
Moving on. What you call "abuse" is what you've evolved to have an emotional response to. Certain stimuli cause an aversion reaction in your brain. For complex reasons that help us socialize, you also care when you think others might be experiencing such stimuli. This compassion may even extend to non-humans, for no reason other than the fact that it wasn't selected against.
A robot experiencing certain stimuli may or may not produce such a feeling in you. This doesn't mean the feeling is special. It doesn't mean the action causing the stimuli in the robot is special. The word "abuse" is all loaded up with your human feelings about stimuli humans should avoid. Doesn't make it special. It certainly doesn't make the word well-defined.
Define the word super precisely and then you will know whether to call that scenario "abuse". But we won't hit on some extra-human definition of the word that we can contemplate deeply.
I'm sorry you interpreted my use of "charitable" in some negative way. That wasn't my intent. I see too many discussions where people argue against the worst version of the other person's position, rather than assuming they're arguing in good faith and clarifying what they mean. I included the "charitable" statement because I wanted to show my intent was not to do that. I'm sorry if that wasn't clear.
As for the following:
"If you're saying that I am the same as a hammer, then I disagree."
This was one branch of the question "Can we meaningfully talk about the difference between a hammer and me?" It was in response to your statement "You're not as different from a hammer as you think", which I read as pointing toward the question of whether such a distinction is meaningful. I don't think you take such an absolutist position, and I'm surprised you read it that way. I can only apologize if you did.
It seems we're talking past each other, so I'll leave it at that. Thank you for taking the time to engage me.
I understood what you meant by "charitable". My point was, if you're being charitable, assume I don't think you're so like a hammer that you don't deserve different labels.
My point is that you're as different from a hammer as an AI is different from a hammer. You seem to be lumping yourself into a separate category. This is hubris.
You won't find a single line separating hammerness from AIness. This makes no sense. The categories can be compared on all kinds of axes.
The categorization that your brain applies to things is arbitrary and flawed. And invented. When you realize this, questions that used to seem deep become trite and boring.
The company is continuing to regress, having lost its edge since Steve Jobs.
They're completely on the bandwagon of requiring more accessories, having lost the beauty and functionality of clever, practical simplicity that advances the user experience; instead they're making it crappier and more inconvenient for change's sake.
Tim Cook's gotta go; they need a forceful visionary rather than a bean-counter. MacBook Pros need to consider detachable displays, alternatives to keyboards, other modes of interaction, and so on.
Yup, it's business suicide. Probably people too inexperienced to handle success or have a clue where to go from where they're at. So they throw up their hands, shut down the servers, let people down, and miss their shot. It's sad, like watching a train wreck.
"Self-esteem" is a nice way of saying American children are becoming more arrogant with fewer skills, lower performance, and less experience than ever. Adults, teachers, and mentors need to do more to put them in their place, for their own good. Respect is earned, not entitled to "special snowflakes."
Ever since installing 10.12.1, I've been having a bunch of processes randomly entering a quasi-paused, SIGSTOP-ish state (not closable, apps not "bouncing" (loading), and just not responding). Running Instruments, correlating logs, and so on doesn't identify any clear cause. I'm having to `sudo kill -CONT -1` to get things moving again. I'm wondering if it's related to XNU mitigations or just some spurious "system configuration entropy" on my box.
I did exactly this when my Mac ran out of memory yesterday. Safari hung with a 'your computer is running out of memory' warning (168 tabs open!) and I didn't want to lose them all by force quitting. But the Safari process itself wasn't "Not Responding", and it was sitting at 0% CPU.
So I quit everything else, SIGCONT'd Safari, and it started responding again, but I still couldn't close any tabs. Of course, Safari isolates pages in separate processes, so I ran `ps aux | grep WebContent | grep -v grep | awk '{print $2}' | xargs kill -CONT` as well (awk pulls the PID column, which `cut -d' '` mangles on ps's space-padded output).
It all sprang back to life, and all the tabs I'd shut in vain zipped away. I've got that command saved for later. It's probably easier just to use -1 now that I've learned what it does!
I do wonder what's suspending these processes indefinitely. I should have done more inspection to see what state they were in. I'm not familiar with how WebKit content threads communicate though, so that's for another day.
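For next time: a quick first check is the process state column. On BSD/macOS `ps` (and on Linux), a STAT value beginning with `T` means the process is stopped. A minimal sketch that lists every stopped process:

```shell
# Print PID, state, and command for each process whose state starts
# with 'T' (stopped); NR > 1 skips ps's header row.
ps axo pid,stat,comm | awk 'NR > 1 && $2 ~ /^T/'
```

That at least tells you which processes are suspended, even if not who suspended them.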
I was going to say that it signals launchd's process group (every process spawned by launchd, which always runs as PID 1). However, the `kill` manpage confirms your hunch:
-1 If superuser, broadcast the signal to all processes; otherwise
broadcast to all processes belonging to the user.
(This is on macOS/iOS; Linux might have slightly different semantics.)
It's funny you mentioned processes entering a quasi-paused SIGSTOP-ish state. I swear I've been having tons of problems with Java/Tomcat for the past week or so that I've been on the 10.12.1 betas, and I keep thinking I broke my config by updating my Java version, changing my Tomcat config, or some other "system configuration entropy"! Nice to know I'm not alone and that I'm probably not crazy.
Tomcat 7.0.72 from Homebrew, Oracle Java 8u112, PostgreSQL 9.3 and 9.5.
Non-humans may be the (t/quad)rillionaires that eclipse us within our lifetimes, given the likelihood of runaway technological acceleration.
At the 1e9m systematic level, inorganic and hybrid sentient, self-replicating, self-improving systems seem an inevitable stage enabled by organic life.
Not necessarily. Japan recovered from near-deforestation à la Easter Island. Penalties and enforcement are required to prevent the consequences of individual and collective externalities.
What AI? There is no AI at present, and no known way of making one. The "AI" we have now is all specialized to a single field, there are none that can generalize like humans - or even animals.
Large numbers alone shouldn't concern you: there are far more bacteria than transistors.
So when will Google Fiber fix the embarrassing Starbucks WiFi at:
3605 El Camino Real
Santa Clara, CA 95051
It's slower than communicating with smoke-signals in a hurricane, or boxing up each bit and sending via the post.
Alphabet/Google needs to work on finishing the businesses they start and scaling them faster. Search, email, maps, and mobile are pretty good, but the million other areas lack business drive, passion, hustle, and focus, competing against much deeper pockets (AT&T+DirecTV+Time Warner+..., Verizon+AOL+Yahoo+XO+..., Level 3, ...).