
It's that prior to Skylake, ret instruction prediction only ever used the "return stack buffer" (RSB), but since Skylake the indirect predictor is used as a backup. The RSB is a prediction mechanism specifically for call and ret pairs: each ret is predicted to return to the instruction following the corresponding call.

This works great for the common pattern of matched pairs, but the stack used to track outstanding calls has a limited size (32 in Skylake); call it N. If you have N + M calls followed by N + M rets (or any other pattern where the call chain gets that deep), the first N rets will be predicted correctly, but then the stack is exhausted and the last M rets won't be predicted by the RSB.
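
As a rough illustration, here is a small, self-contained C simulation of that N + M pattern. It is only a sketch: the circular-buffer behaviour (newest entries displacing the oldest) and the fake return addresses are assumptions made for the example, not a claim about how the real hardware is built.

    #include <stdio.h>

    #define RSB_SIZE 32          /* N: the Skylake figure quoted above      */
    #define EXTRA     8          /* M: how far the call chain overshoots it */

    int main(void)
    {
        unsigned long rsb[RSB_SIZE];
        int head = 0;            /* next slot to push into (circular)       */
        int count = 0;           /* how many valid entries remain           */
        int correct = 0, unpredicted = 0;

        /* N + M calls: once the buffer is full, each push displaces the
           oldest entry, so only the most recent N return addresses survive. */
        for (int i = 0; i < RSB_SIZE + EXTRA; i++) {
            rsb[head] = 0x1000UL + i;            /* fake return address      */
            head = (head + 1) % RSB_SIZE;
            if (count < RSB_SIZE)
                count++;
        }

        /* N + M rets, unwinding in reverse order: the first N pop a correct
           prediction, then the buffer is exhausted and the last M get none. */
        for (int i = RSB_SIZE + EXTRA - 1; i >= 0; i--) {
            if (count > 0) {
                head = (head - 1 + RSB_SIZE) % RSB_SIZE;
                count--;
                if (rsb[head] == 0x1000UL + i)
                    correct++;
            } else {
                unpredicted++;                   /* RSB underflow            */
            }
        }

        printf("correctly predicted rets: %d, unpredicted rets: %d\n",
               correct, unpredicted);            /* prints 32 and 8          */
        return 0;
    }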

Prior to Skylake, those last M rets just wouldn't get predicted at all (probably instruction fetch would just fall through to the instructions following the ret), but on Skylake the indirect branch predictor, which is usually used to predict jmp or call instructions to variable targets, is used as a fallback instead.
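
The difference between the two policies could be sketched roughly like this. Every name here (rsb_pop, btb_lookup, the predict_ret_* helpers) is invented for the illustration; none of it is a real structure or API.

    #include <stdbool.h>
    #include <stdio.h>

    /* Invented stand-ins: rsb_pop() yields an RSB prediction when one is
       available; btb_lookup() is the indirect predictor normally used for
       jmp/call to variable targets.                                        */
    static bool rsb_pop(unsigned long *target) { (void)target; return false; }
    static unsigned long btb_lookup(unsigned long ret_ip) { return ret_ip; }

    /* Pre-Skylake, as described above: on RSB underflow the ret simply
       gets no prediction.                                                  */
    static bool predict_ret_pre_skylake(unsigned long ret_ip,
                                        unsigned long *target)
    {
        (void)ret_ip;
        return rsb_pop(target);
    }

    /* Skylake and later, as described above: on RSB underflow, fall back
       to the indirect predictor.                                           */
    static bool predict_ret_skylake(unsigned long ret_ip,
                                    unsigned long *target)
    {
        if (rsb_pop(target))
            return true;
        *target = btb_lookup(ret_ip);
        return true;
    }

    int main(void)
    {
        unsigned long t;
        /* With an empty RSB, only the Skylake-style policy still offers a
           prediction (via the indirect predictor fallback).                */
        printf("pre-Skylake predicted: %d, Skylake predicted: %d\n",
               predict_ret_pre_skylake(0, &t), predict_ret_skylake(0, &t));
        return 0;
    }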

So the concern is that an attacker could train the indirect predictor prior to a kernel call, and if the 32-deep RSB were ever exhausted, the indirect predictor could kick in, causing a Spectre-like vulnerability.
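
Loosely, the sequence of events in that concern looks like the outline below. Every name is a hypothetical stand-in, and this is nowhere near a working exploit; it only mirrors the order of steps described above.

    /* Hypothetical outline only: no real train/gadget/syscall API exists
       like this, and nothing here exploits anything.                      */
    static void disclosure_gadget(void)
    {
        /* stands in for the code the attacker wants executed speculatively */
    }

    /* Step 1: repeatedly take an indirect call so the indirect predictor
       learns to send (aliasing) branches to the attacker's chosen target. */
    static void train_indirect_predictor(void (*target)(void))
    {
        for (int i = 0; i < 100000; i++)
            target();
    }

    /* Step 2: stand-in for a kernel call whose internal call chain is
       assumed to nest deeper than the 32-entry RSB, so some of its rets
       underflow the RSB on the way back out.                              */
    static void deep_kernel_call(void) { }

    int main(void)
    {
        train_indirect_predictor(disclosure_gadget);
        deep_kernel_call();   /* on Skylake, the underflowed rets may now
                                 speculate to the trained target, which is
                                 the Spectre-like concern described above  */
        return 0;
    }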


