
(Reader beware: basically everything the LLM emits in here is wrong.)

I was attempting to find formats that used a particular quoting style — SQL-style double-the-character-to-escape — and bizarrely JSON came up.
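The style I mean, for reference: SQL escapes a quote inside a string literal by doubling it. A minimal demo via Python's stdlib sqlite3:

  import sqlite3

  conn = sqlite3.connect(":memory:")
  # SQL escapes an embedded single quote by doubling it: 'It''s' -> It's
  print(conn.execute("SELECT 'It''s escaped by doubling'").fetchone()[0])
  # => It's escaped by doubling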

Gemini:

> JSON: In JSON, double quotes are used to delimit strings. If a double quote needs to be included within the string itself, it's escaped by doubling it (e.g., "This is a string with a "quote" in it.").

That is,

  "This is a string with a "quote" in it."
Me: Does your JSON example parse as valid JSON?

Gemini:

> Yes, the JSON example I provided ("This is a string with a "quote" in it.") is actually valid JSON.

> The reason it's valid is because the doubled quote within the string is interpreted as an escaped character, representing a single literal quote within the string itself. The JSON parser recognizes the double quotes at the beginning and end as delimiters for the entire string, not the escaped quote within.
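It is not valid, of course. JSON's only escape for a double quote is a backslash, and any conforming parser ends the string at the second quote and rejects the rest. A quick check (Python here, but any JSON library behaves the same):

  import json

  # Gemini's "doubled quote" string: the parser closes the string at the
  # second double quote and rejects the remainder as trailing garbage.
  try:
      json.loads('"This is a string with a "quote" in it."')
  except json.JSONDecodeError as e:
      print(e)  # => Extra data: ...

  # The actual JSON escape is a backslash:
  print(json.loads('"This is a string with a \\"quote\\" in it."'))
  # => This is a string with a "quote" in it.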

It also called out,

> XML: Use a backslash (\) to escape characters, including quotes.
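Also wrong: XML has no backslash escapes at all. A quote inside an attribute value is escaped with a character entity (&quot;). Sticking with Python:

  from xml.sax.saxutils import escape

  # XML escapes with entities, not backslashes:
  print(escape('a "quoted" value', {'"': '&quot;'}))
  # => a &quot;quoted&quot; value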

In a different context, and closer to the "working with k8s" use case that the OP's AI is focused on:

Me: Regex to extract the replica number from a statefulset pod's name?

Gemini:

> Use kubectl: The kubectl get pods command with output formatting can directly show replica numbers. Here's an example:

  kubectl get pods -o jsonpath='{.items[*].metadata.name}{" - replica: "}{.metadata.labels.replicaset-序号}'

The Chinese here, as best as I can tell, reads "serial number" … which sort of makes sense contextually for an LLM, I guess. The command is utter nonsense, of course.
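For the record, StatefulSet pods are named <statefulset-name>-<ordinal>, so the regex being asked for is just the trailing digits. A sketch (pod_name is a hypothetical example):

  import re

  # StatefulSet pods are named <statefulset-name>-<ordinal>, e.g. "web-2"
  pod_name = "web-2"  # hypothetical pod name
  match = re.search(r"-(\d+)$", pod_name)
  if match:
      print(int(match.group(1)))  # => 2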


My absolute favorite is when the model will tell you why something should be one way, but in the same message confidently tell you it is the other way.

I had a situation where I was playing around with having it write a quick method for me. I gave it the specs it needed to work with and the other methods it would call.

Basically, it wrote a call to one of those methods with the wrong (missing) arguments. When I asked it to fix that call, it first said its current call was right, but then, when it went on to explain it (in the same message), it pointed out that a certain missing argument was required to be passed in.

I have seen similar situations time and time again with these models looking at data. The model theoretically sees the data, but once the wrong reading (the first failed interpretation) is part of its context, it's basically all over.


Gemini is apparently hot garbage. I passed this input into ChatGPT (GPT-4o) and I got sensible answers.



