
The new 2.5 Pro (05-06) definitely does not have a meaningfully usable 1 million token context window, as many users have pointed out. At 50k+ tokens it often fails even to generate its reasoning block.

Their new Pro model seems to have simply traded fluid intelligence and creativity for performance on closed-ended coding tasks (and hence benchmarks), which unfortunately looks like a general pattern in LLM development now.
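
To make the reasoning-block observation concrete, here is a minimal sketch of how one might probe it, assuming the google-genai Python SDK; the model ID, the filler prompt, and the ThinkingConfig usage are illustrative assumptions, not details from the comment above.

    # Minimal sketch (assumed setup, not the commenter's actual test): check whether
    # a reasoning ("thinking") block still comes back once the prompt passes ~50k tokens.
    # Assumes the google-genai Python SDK; model ID and filler text are placeholders.
    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

    # Build a filler prompt large enough to exceed ~50k tokens.
    filler = "lorem ipsum dolor sit amet " * 12000
    prompt = filler + "\n\nSummarize the text above in one sentence."

    # Confirm the prompt size before sending.
    count = client.models.count_tokens(
        model="gemini-2.5-pro-preview-05-06", contents=prompt)
    print("prompt tokens:", count.total_tokens)

    response = client.models.generate_content(
        model="gemini-2.5-pro-preview-05-06",
        contents=prompt,
        config=types.GenerateContentConfig(
            thinking_config=types.ThinkingConfig(include_thoughts=True)))

    # Any returned part flagged as a thought is the model's reasoning block.
    has_reasoning = any(
        getattr(part, "thought", False)
        for part in response.candidates[0].content.parts)
    print("reasoning block present:", has_reasoning)

Running something like this at increasing prompt sizes is roughly how the 50k+ token behaviour would be observed.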


