I wonder whether the coming AI customer service chatbots will be programmed with “sludge” as part of their operating procedure, or whether we can expect an Asimov-like set of ethics, where they optimize to be as helpful as possible. Software doesn't need an attitude, and it won't get tired either.
Even worse: I got a sales call from Backblaze a few weeks ago that was an AI voice agent. It seemed super suspicious the way it was talking, so I asked it directly if it was an AI, and it then said yes.
I asked it to connect me to a real person (a manager, or someone in legal or compliance), and it hung up on me.
That is an illicit robocall, and you can pursue Backblaze under the Telephone Consumer Protection Act. I would recommend filing a small-claims court case; there is no gray zone for Backblaze to be making AI robocalls in.
For transparency to others here, here's what happened:
I submitted a support request and, separately, a GDPR request for my information and its removal. I also let the legal team at Backblaze know what happened by e-mailing legal@.
- The support request auto-responded with "We will respond to your support request (<insert ticket number here>) within one day." That was 21 days ago. No response.
- The legal team stated that my information has never been sold to third parties. That is strange, unless Backblaze is running its own AI cold-calling operation en masse. They then refused to complete my GDPR request to disclose the data they had collected on me, and refused to acknowledge that I had received an AI cold call.
So no, this is frankly a BS path forward. Nobody at Backblaze, as far as I can tell, is taking this seriously.
I wouldn't expect ethics to emerge as a feature any time soon. If anything, it will be easier to have the machine do the wrong thing, since the machine does not get squeamish.