I think it honestly all depends on what the dominant causal factors are and how this scales with node size. Effectively, if unreliability increases at the same rate as, or faster than, the performance gain as node size decreases, and 'high reliability' compute can be easily and generally segregated from other compute, then it would probably be easier just not to decrease node size than to parallelize at the chip/core level. The software side would certainly be much easier.
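A minimal back-of-the-envelope sketch of that tradeoff (all numbers here are my own assumptions, not from the comment): if a task of N operations has to restart whenever any operation faults, its success probability is (1 - p)^N, so effective throughput is raw speed times (1 - p)^N. Under that model, a fault rate that grows as fast as or faster than the raw performance gain makes the shrink a net loss.

    import math

    # Toy model with made-up numbers: a task needs N ops and must be redone
    # from scratch if any op faults, so success probability is (1 - p)^N.
    N = 1_000_000          # assumed ops per restartable task
    BASE_PERF = 1.0        # normalized throughput at the current node
    BASE_FAULT = 1e-6      # assumed per-op fault rate at the current node

    def effective(raw_perf: float, p: float) -> float:
        """Useful task throughput after discounting whole-task retries."""
        return raw_perf * (1.0 - p) ** N

    old = effective(BASE_PERF, BASE_FAULT)
    # Hypothetical shrink: +30% raw perf, with fault rate growing by
    # various factors (slower than, equal to, faster than the perf gain).
    for perf_gain, fault_growth in [(1.3, 1.2), (1.3, 1.3), (1.3, 2.0)]:
        new = effective(BASE_PERF * perf_gain, BASE_FAULT * fault_growth)
        verdict = "shrink wins" if new > old else "stay put"
        print(f"perf x{perf_gain}, faults x{fault_growth}: "
              f"{old:.3f} -> {new:.3f} ({verdict})")

With these numbers the shrink only pays off when faults grow more slowly than performance; once the fault rate keeps pace with or outruns the perf gain, staying on the larger node wins, which matches the comment's break-even intuition.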


