> I'm glad not, because I wouldn't be sure what they mean by that.
Anyway such a conversation would be useless without defining exactly what 'intelligence' even means.
> I'm convinced that anyone who believes otherwise just doesn't have the technical expertise (and that's not a generic insult; it's true of most of us) to probe deeply enough to see the fragility of these models.
'It doesn't seem like something I would value, and anyone who doesn't agree isn't an expert'.
There are experts in technical fields who would find value in having an AI triage patients, double-check material strengths for building projects, or cross-reference court cases and check arguments for them. As the parent to whom you are replying noted: it is just the accuracy that needs to be fixed.
> Having existing ideas be juxtaposed, recontextualised or even repeated can be valuable. But to claim that AI will be writing entire books that are worth reading seems laughable to me.
There are no grand insights in technical reference books or instructional materials, just "existing ideas...juxtaposed, recontextualised" and "repeated".
> 'It doesn't seem like something I would value, and anyone who doesn't agree isn't an expert'.
Don't even pretend you think that's what I'm trying to say. You need to read my response again.
> There are experts in technical fields who would find value in having an AI triage patients
I already said this. I'm not saying experts in technical fields cannot find value in AI; I'm saying that they have enough knowledge to be able to expose the fragility of the models. Triaging patients doesn't require expert knowledge, which is why AI can do it.
> or to cross reference court cases and check arguments for them
I seriously doubt that, actually. Are you, or anyone else saying this, a lawyer? My position is that you have to be an expert to be able to evaluate the output of the models. Output that merely looks like it knows what it's talking about simply isn't enough.
The parent you replied to said that they could write books which would be useful for instruction if the accuracy problem were solved. You said they would never write anything worth reading because they can't be insightful.
What I said was 1) they could write instructional and reference books and no one claimed they were insightful and 2) that they are useful to experts even if they are not insightful.
I'm not sure what we are arguing about anymore if you are not going to dispute those two things.