
In first-year engineering we learned about the concept of behavioral equivalence: with a digital or analog system, you can formally show that two things do the same thing even though their internals are different. If only the debates about ChatGPT had some of that considered nuance instead of anthropomorphizing it; even some linguists seem guilty of this.


Isn’t anthropomorphization an informal way of asserting behavioral equivalence on some level?


The problem is when you use the personified character to draw conclusions about the system itself.


No, because behavioral equivalence is used in systems engineering theory to mathematically prove that two control systems are equivalent. The proof is complete, i.e. it covers all internal state transitions, typically by checking the cross product of the two machines.

With anthropomorphization there is none of that rigor, which lets people make sloppy arguments about what ChatGPT is and isn't doing. A minimal sketch of the cross-product check follows below.
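
For the curious, here is a minimal sketch of that cross-product check in Python, assuming deterministic Moore machines over a finite input alphabet (the encoding of a machine as a triple, and all the names, are mine, purely for illustration):

    from collections import deque

    def behaviorally_equivalent(m1, m2, alphabet):
        # Each machine is a triple (initial_state, step, output) where
        # step(state, symbol) -> next state, output(state) -> observable value.
        (s1, step1, out1), (s2, step2, out2) = m1, m2
        seen = {(s1, s2)}
        queue = deque([(s1, s2)])
        while queue:
            a, b = queue.popleft()
            if out1(a) != out2(b):  # observable behavior diverges
                return False
            for x in alphabet:  # advance both machines in lockstep
                nxt = (step1(a, x), step2(b, x))
                if nxt not in seen:  # visit each product state only once
                    seen.add(nxt)
                    queue.append(nxt)
        return True  # every reachable joint state agrees

    # A mod-4 counter reporting parity vs. a mod-2 counter: equivalent.
    m_a = (0, lambda s, x: (s + x) % 4, lambda s: s % 2)
    m_b = (0, lambda s, x: (s + x) % 2, lambda s: s)
    print(behaviorally_equivalent(m_a, m_b, alphabet=[0, 1]))  # True

The point is that the check is exhaustive over every reachable joint state, not an informal resemblance.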



