Hacker News

> and didn't instruct the grad student to pay $20 to use GPT-4

An inexcusable oversight... more so on the grad student than Knuth.

For example, Knuth's complaint about the question "What is an optimystic?":

> Answer #5 also pretty good. (Again it begins with "I'm sorry".) But it should have conjectured a mystic who is an optimist.

And here is GPT4's answer to the same question:

---

As of my knowledge cutoff in September 2021, "Optimystic" is not a recognized term in standard English. However, it could be a play on words combining "optimist" and "mystic."

...

So, hypothetically, an "optimystic" could refer to someone who combines these attributes - perhaps someone who is hopeful about the future and sees the world through a spiritual or metaphysical lens.

...

---

Similarly, on question #18, Knuth complains that GPT doesn't know the stock market is closed on Saturday, yet the GPT4 answer begins:

> As of my last training data in September 2021, and generally speaking, stock markets such as the NASDAQ are closed on weekends, including Saturday.

Those were just two I checked at random.




"Similarly, on question #18, Knuth complains that GPT doesn't know the stock market is closed on Saturday, yet the GPT4 answer begins"

Both ChatGPT and GPT-4 seem to know that NASDAQ is closed on Saturday, but at least for me, both "forget" it and answer with a boilerplate disclaimer that they can't predict the stock market when you ask the exact question Knuth asked.

This seems to be part of its "programming". It also has super long disclaimers when asked about life advice, relationship advice, or legal advice, and those disclaimers seem to take precedence over prompts you give ("be concise" is thrown out the window), or even the questions themselves.


Wow. Seriously? It can make an inference like that?

I wonder if “optimystic” shows up at all in the training data or if this was purely from some ability to detect those two source words.


Short answer: for all practical purposes, yes, it can and it does.

For each specific example, there is no way to tell for sure (afaik) if the example was in the training set. But you can easily run some experiments yourself, inventing your own words which would not likely be in the training set, especially when taken together.

I have done this, and GPT4 will frequently make inferences on par with the "optimystic" one. For example, I just tried "surfrandma" and it said "It appears to be a combination of the words 'surf' and 'grandma', but without additional context, it's challenging to provide a precise meaning."
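If you want to mass-produce test coinages rather than invent them by hand, a brute-force blend generator is easy to write. This is my own toy sketch (nothing the model itself uses), which just enumerates every way to glue a prefix of one word onto a suffix of another:

```python
def splices(a: str, b: str) -> set[str]:
    """Every way to glue a prefix of `a` onto a suffix of `b` --
    a brute-force portmanteau generator for inventing test words."""
    return {a[:i] + b[j:] for i in range(1, len(a) + 1) for j in range(len(b))}

# "optimystic" falls out as "opti" + "mystic":
print("optimystic" in splices("optimistic", "mystic"))  # True
# so do classic blends, e.g. "brunch" = "br" + "unch":
print("brunch" in splices("breakfast", "lunch"))  # True
```

You could then filter the candidates against a dictionary to keep only genuinely novel coinages before feeding them to the model.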


It can do so much more; the fact that it can go from "optimystic" to "optimistic" and "mystic" is extremely mundane in comparison.


Like what? And how does one measure that it is more impressive or less mundane?


Like just about anything. And the measure is something like "does someone who has spent some time with GPT-4 find it at all surprising that it can do X". A posteriori, it would be much more surprising if GPT-4 failed to resolve "optimystic" into "mystic" and "optimistic", even though it's handicapped by its token encoding when it comes to wordplay.


It's the problem with fully proprietary AI like this: you cannot prove that this question and this answer weren't in the training set, so you cannot argue for its ability to infer or reason.


You can't prove that they aren't answering ChatGPT questions with real humans, either.


You're making my point for me: a fully closed-source language model cannot be evaluated, because there is no way to know why it replies the way it does.


Yeah, I agree, and I didn't really mean for my reply to be a rebuttal to your point at all. :)


> "As of my knowledge cutoff in September 2021"

> "However, as an AI language model, I don't"

...

Why don't they just replace this whole boilerplate phrase with an emoji? It would be more bearable: one emoji for each boilerplate phrase. Or just a set of tags: #Cutoff_2021, #LM_can't.

In my native tongue, this kind of speaking is called "wooden language" and it is considered insulting.


Your proposed alternatives are much worse, because they are esoteric and confusing.


I'm just imagining a random elderly person trying ChatGPT for the first time and getting a robot emoji with #Cutoff_2021 after asking a question about Donald Trump


Would you mind sharing what your native tongue is? The negative connotation of "wooden language" is fascinating. [1]

[1] Just a note for others similarly fascinated by these sorts of linguistic items, there's an excellent book that explores this concept space: Metaphors We Live By, George Lakoff and Mark Johnson


I'm not the person you replied to, but in my native tongue (English), excessive repetition is also poor usage. Repeating the question too literally is indicative of unsophisticated (pre-college) writing, and repeating the same phrases word for word is a signal that you don't believe your listener is paying attention to your words (as opposed to rephrasing, which signals that your prior explanation might have been unclear).

I've been a bit shocked at how poor ChatGPT's usage is - it writes more like a very articulate 15-year-old than like an adult - and at how nobody else seems to notice. I can't help but think part of the reason nobody is noticing is that most of the attention is coming from engineers (for whom language is not a top skill).


Everybody noticed. It's what people mean when they refer to a comment sounding like it was written by ChatGPT.

I suspect it's a deliberate choice, much as The Sun newspaper aims at an 8-year-old reading level, while newspapers like The Times or The Guardian aim at a 14-year-old's. Try asking ChatGPT to shift to a more advanced level.

Also, the whole "say what you're going to say, say it, say what you said" technique is very common because it works. Even "smart" people don't remember things quite as well as they think they do.


> I've been a bit shocked how poor ChatGPT's usage is - it writes more like a very articulate 15 year old than like an adult - and how nobody else seems to notice.

No, we're just mesmerized that a freaking machine, a bunch of PCBs and wires, can fairly convincingly impersonate a 15-year-old, including making stuff up with great confidence.


The expression exists in English:

Wooden language is language that uses vague, ambiguous, abstract or pompous words in order to divert attention from the salient issues.

https://en.wikipedia.org/wiki/Wooden_language


I thought they meant it in the context of boilerplate, which is a little different from what's described in the wiki link. But I think we're probably just talking about subtle shades and degrees of the sense. I had thought the original comment was referencing a non-English term whose literal English translation is "wooden" but with a subtly different meaning than its usage in English.

I may have been overthinking things (I do that, and I don't count it as an inherently positive trait), but the general topic is still interesting, and I still highly recommend the book I referenced.


In Italian we use "wooden" also to mean "lacking in grace or agility, rigid, awkward".


I think they have to hedge this way to "make everyone happy", including twitter or publications that want to shame them for what their chatbot has said.


It makes sense that in another language you might not phrase things this way. But in English we do.


I just tried asking ChatGPT #5 and it answered this:

I'm sorry, but the term "optimystic" does not have a widely recognized or established meaning. It appears to be a combination of the words "optimistic" and "mystic," [...]


Google Scholar found some uses, like a review of Beyond Boredom and Anxiety: The Experience of Play in Work and Games by Mihaly Csikszentmihalyi (reviewed by Murray S. Davis, Contemporary Sociology, Vol. 6, No. 2, Mar. 1977, pp. 197-199) at https://www.jstor.org/stable/pdf/2065805.pdf

> Sociologists will find most provocative the author's alternative to Erving Goffman's analysis of self-consciousness. Both are mystics in the sense that they investigate the conditions causing someone to lose self-consciousness. But Goffman is what I would call a pessimystic, for in Frame Analysis (1974:378ff) he examines how the self disappears in the "negative experience" that results when situational contradictions increase its stress; Csikszentmihalyi is an optimystic, for he examines how the self disappears in the "flow experience" that results when situational consonances decrease its stress

and "Anglophonia and Optimysticism: Sebastian Knight’s Bookshelves"

> The Anglophone universe becomes a linguistic afterlife in which Nabokov optimistically hopes to resurrect his Russian art, just as he “optimystically” (the pun belongs to Sebastian Knight’s “Dean Park”) expects that the otherworld preserves the spirits of his dead.

Further, https://archive.org/details/libraryjournal122sep/page/n489/m...

> Coauthors Taylor and Crain discuss the concept of "optimysticism," first introduced in Taylor's Messengers of Light. The phrase refers to the ability to see beyond the worst of situations to the mystery of goodness at the core of life.

and from 'The optimystic's handbook' at https://archive.org/details/optimysticshandb00tayl/page/n15/...

> Optimysticism is the choice we make not only to experience the best of this world but also to see beyond this world into eternity, and in doing so, to live the mystery of the fullest here on earth.

So: no well-established meaning.



