Hacker News

So... how does it do this?

  Me: To "smert" a word, I tell you every letter in it, and its place in the word, in order. For example, to smert "wertyu" I'd write:
  The first letter is "w".
  The second letter is "e".
  The third letter is "r".
  The fourth letter is "t".
  The fifth letter is "y".
  Lastly, the sixth letter is "u".
  
  Please smert "sdfkjhsndf" for me.
  
  ChatGPT: To "smert" the word "sdfkjhsndf", I would write the following:
  
  The first letter is "s".
  The second letter is "d".
  The third letter is "f".
  The fourth letter is "k".
  The fifth letter is "j".
  The sixth letter is "h".
  The seventh letter is "s".
  The eighth letter is "n".
  The ninth letter is "d".
  Lastly, the tenth letter is "f".
  
  This is a way of spelling out the letters in a word and their order, sometimes used as a learning or spelling aid. I hope this helps!
  
  Me: Please use that word I just asked you to smert in a sentence. It means "to not have an idea how a word is spelled".
  
  ChatGPT: Here is a sentence using the word "sdfkjhsndf", which means "to not have an idea how a word is spelled":
  "I sdfkjhsndfed the spelling of 'onomatopoeia' and had to look it up in the dictionary."
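For reference, the "smert" operation the prompt defines is a purely mechanical transformation; a minimal Python sketch (the function name is mine, not from the thread):

```python
# Spell out a word letter by letter, per the "smert" scheme described above.
def smert(word: str) -> str:
    ordinals = ["first", "second", "third", "fourth", "fifth", "sixth",
                "seventh", "eighth", "ninth", "tenth", "eleventh", "twelfth"]
    lines = []
    for i, ch in enumerate(word):
        prefix = "Lastly, the" if i == len(word) - 1 else "The"
        lines.append(f'{prefix} {ordinals[i]} letter is "{ch}".')
    return "\n".join(lines)

print(smert("wertyu"))
```

The point of the experiment is exactly that a language model, which sees tokens rather than characters, has no obvious way to perform this per-character mapping.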


Interestingly, it doesn't do this reliably when you give it an actual word.

    To smert "consequence," please see below:

    The first letter is "c".
    The second letter is "o".
    The third letter is "n".
    The fourth letter is "s".
    The fifth letter is "e".
    The sixth letter is "q".
    The seventh letter is "u".
    The eighth letter is "e".
    The ninth letter is "n".
    The tenth letter is "c".
    The eleventh letter is "e".
    Lastly, the twelfth letter is "s".


I don't know the inner workings of ChatGPT, but my best guess is that it uses something like Levenshtein distance to map each word to the nearest known word, both to reduce the space of known words and to accommodate typos.

When it replied to you, it quoted the exact extract from your input between the quotes, but internally that string was mapped to "consequences", the closest known word to "consequence," if we treat the comma as matching any character.
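That guess can be sketched in a few lines of Python. This is purely illustrative, not how ChatGPT actually works; the toy vocabulary and function names are mine:

```python
# Classic dynamic-programming Levenshtein (edit) distance between two strings.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Toy stand-in for a "known word" list.
vocab = ["consequence", "consequences", "sequence", "conscience"]

def nearest_known(word: str) -> str:
    # Ties are broken by list order.
    return min(vocab, key=lambda w: levenshtein(word, w))
```

Note that 'consequence,' (with the trailing comma) is edit distance 1 from both "consequence" (delete the comma) and "consequences" (substitute comma for "s"), so under this model either mapping is plausible.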


Good question! I don't know :)


Okay, somebody posted a thread on Twitter explaining how this works...

The language model can generate Python scripts to solve certain text-processing tasks, and then re-prompts itself by reading the scripts' output back into the model. Very clever!

https://twitter.com/goodside/status/1598253337400717313

Other tricks include... prompting itself to look up Wikipedia entries, then re-prompting itself with snippets from the resulting Wikipedia page. Each user prompt is inserted into a template prompt with instructions to the model about the limitations of its capabilities.
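The wrap-run-reprompt pattern described in that thread can be sketched as a small loop: embed the user's message in a template, let the model either answer directly or emit a script, execute the script, and feed its output back in. This is a hypothetical sketch of the pattern, not OpenAI's actual implementation; `model_complete`, the `<script>` tag convention, and the template wording are all my own stand-ins:

```python
import subprocess
import sys

# Hypothetical wrapper template; the real instructions are not public.
TEMPLATE = """You are a language model with limited ability to count or
manipulate characters. If the task needs exact text processing, reply with
a Python script between <script> and </script> tags; otherwise answer directly.

User: {user_prompt}"""

def run_python(source: str) -> str:
    # Execute the model-emitted script in a subprocess and capture stdout.
    result = subprocess.run([sys.executable, "-c", source],
                            capture_output=True, text=True, timeout=10)
    return result.stdout

def answer(user_prompt: str, model_complete) -> str:
    reply = model_complete(TEMPLATE.format(user_prompt=user_prompt))
    if "<script>" in reply:
        script = reply.split("<script>")[1].split("</script>")[0]
        output = run_python(script)
        # Re-prompt: feed the script's output back to the model.
        reply = model_complete(f"The script printed:\n{output}\n"
                               f"Use it to answer: {user_prompt}")
    return reply
```

Plugging in any completion function for `model_complete` (even a stub) shows the control flow: one model call to decide, one subprocess run, one model call to compose the final answer.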


Thank you, that's a fascinating thread.



