Hacker News

Isn’t anthropomorphization an informal way of asserting behavioral equivalence on some level?


The problem is when you use the personified character to draw conclusions about the system itself.


No, because behavioral equivalence is used in systems engineering theory to mathematically prove that two control systems are equivalent. The proof is exhaustive, e.g. covering all internal state transitions via the cross product of the two machines.

With anthropomorphization there is none of that rigor, which lets people make sloppy arguments about what ChatGPT is and isn't doing.
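The cross-product argument above can be sketched concretely: explore the product of two deterministic machines and check that every reachable pair of states produces the same observable output. This is a minimal illustrative sketch, not a real systems-engineering tool; the two example machines (a mod-2 counter and a mod-4 counter observed through its low bit) are made up for illustration.

```python
# Sketch of behavioral-equivalence checking via the product machine.
# Two deterministic machines are behaviorally equivalent iff no
# reachable state of their cross product produces differing outputs.
from collections import deque

def equivalent(start_a, start_b, step_a, step_b, out_a, out_b, alphabet):
    """Breadth-first search over the product machine's reachable states."""
    seen = {(start_a, start_b)}
    queue = deque([(start_a, start_b)])
    while queue:
        a, b = queue.popleft()
        if out_a(a) != out_b(b):
            return False  # observable behavior differs here
        for sym in alphabet:
            nxt = (step_a(a, sym), step_b(b, sym))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True  # every reachable product state agrees

# Hypothetical example: a counter mod 2 vs. a counter mod 4 whose
# output is only its low bit -- different internals, same behavior.
eq = equivalent(
    0, 0,
    lambda s, x: (s + x) % 2,   # machine A: state transition
    lambda s, x: (s + x) % 4,   # machine B: state transition
    lambda s: s % 2,            # machine A: observable output
    lambda s: s % 2,            # machine B: observable output
    [0, 1],
)
```

The check is complete in the sense the comment describes: it visits the full reachable cross product, so a `True` result is a proof over all internal state transitions, not an empirical observation of a few runs.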



