If an LLM can't be left to mow by itself, and a human has to closely monitor it and intervene at every step, then it's just a super-fast predictive keyboard, no?
And one might still wonder whether we need a general language model to mow the grass at all, or just a simpler solution to the problem of automatically driving a mower within a fixed property line. Something you could probably solve with WWII-era technology, honestly.
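For what it's worth, that simpler solution already exists: robot mowers have used a buried perimeter wire and a dumb bounce-and-turn control loop for decades. Here's a minimal sketch of that idea in Python; WireSensor and Drivetrain are made-up placeholders standing in for real hardware, not any actual robotics API.

    # Hypothetical sketch: perimeter-wire mowing needs no language model,
    # just a boundary sensor and a bang-bang control loop.
    import random
    import time

    class WireSensor:
        """Placeholder for an inductive sensor reading the perimeter wire's field."""
        def inside_boundary(self) -> bool:
            return random.random() > 0.1  # stub: ~10% chance we've reached the edge

    class Drivetrain:
        """Placeholder motor controller."""
        def forward(self): print("driving forward")
        def reverse(self): print("backing up")
        def turn(self, degrees: int): print(f"turning {degrees} degrees")

    def mow(sensor: WireSensor, drive: Drivetrain, steps: int = 20) -> None:
        # Random-bounce coverage: drive until the wire is detected,
        # then back up and turn a random amount. Crude, but it covers
        # a fenced lawn eventually, with zero "intelligence" involved.
        for _ in range(steps):
            if sensor.inside_boundary():
                drive.forward()
            else:
                drive.reverse()
                drive.turn(random.randint(90, 270))
            time.sleep(0.1)

    if __name__ == "__main__":
        mow(WireSensor(), Drivetrain())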
Obviously not. I want legislation that imposes liability on OpenAI and similar companies if they actively market their products for use in safety-critical fields and those products don't perform as advertised.
If a system is providing incorrect medical diagnoses, or denying services to protected classes due to biases in the training data, someone should be held accountable.
They would want to, if they thought they could, because doing so would unblock a ton of valuable use cases. A tax preparation or financial advisor AI would do huge numbers for any company able to promise that its advice can be trusted.
What would that even mean? How do you imagine that working? You want OpenAI to take on liability just for kicks?