
The thing is that pipelining costs latency. That's great if you're making something where all the inputs arrive at the input stage at the beginning and the results come out of the output stage at the end. It's not so good if you want to make something like a CPU, where the output of one instruction is the input to another, which can result in pipeline bubbles. Clock speed and latency (pipe stages) are tradeoffs: for meaningful benchmarks, one wants to maximise instructions-per-clock times clock speed.
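
For intuition, here's a toy Python model of this. The stage counts and hazard timings are illustrative (a generic 5-stage pipe with no forwarding), not any particular CPU:

    # Toy pipeline model: one instruction can issue per cycle, but an
    # instruction that reads the previous result must stall until that
    # result reaches writeback, inserting bubble cycles.

    STAGES = 5          # assumed pipeline depth
    RESULT_READY = 4    # assumed: result available after writeback (stage 4)
    OPERAND_NEEDED = 2  # assumed: operand needed entering execute (stage 2)

    def run(instrs):
        """instrs: list of bools, True if the instruction depends on the previous one."""
        cycle = 0
        issue_cycle = []                 # cycle each instruction enters the pipe
        for i, depends in enumerate(instrs):
            earliest = cycle
            if depends and i > 0:
                # stall until the producer's result is ready for our execute stage
                earliest = max(earliest,
                               issue_cycle[i - 1] + RESULT_READY - OPERAND_NEEDED + 1)
            issue_cycle.append(earliest)
            cycle = earliest + 1         # one issue per cycle at best
        total = issue_cycle[-1] + STAGES # last instruction drains the pipe
        return len(instrs) / total       # instructions per clock

    print("independent stream IPC: ", run([False] * 20))   # ~0.83
    print("every-other dependent IPC:", run([False, True] * 10))  # ~0.45

The dependent stream roughly halves IPC here, which is exactly why deeper pipes (higher clock, more bubble cost per hazard) aren't a free win.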

Merging arbitrary clock domains is a well-understood problem: simply put, we know it can't be done with perfect reliability, so one has to make it "reliable enough". I built a graphics controller once where we did the math on synchroniser failure and decided that we were more reliable than Win95 by two orders of magnitude, and that that would be good enough ...
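
The math in question is the standard synchroniser MTBF estimate, MTBF = exp(t_r / tau) / (T0 * f_clk * f_data). A quick sketch with made-up device constants (not the actual graphics controller's numbers):

    import math

    # Two-flop synchroniser mean-time-between-failures estimate.
    # All constants below are illustrative assumptions.
    tau    = 50e-12   # metastability resolution time constant (s), assumed
    T0     = 100e-12  # metastability capture window (s), assumed
    f_clk  = 100e6    # receiving-domain clock (Hz), assumed
    f_data = 10e6     # rate of async data transitions (Hz), assumed
    t_r    = 2e-9     # resolution slack before the second flop samples (s)

    mtbf = math.exp(t_r / tau) / (T0 * f_clk * f_data)
    print(f"MTBF ~ {mtbf:.2e} s ~ {mtbf / 3.15e7:.2e} years")

Because MTBF is exponential in t_r / tau, giving the signal one more clock of resolution time (e.g. a third flop) multiplies reliability by many orders of magnitude, which is how you dial in "reliable enough".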

Async stuff tends to be clocked stage to stage at the local level, so that data generates its own clock equivalent when it's done (a 'done' signal).
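
As a loose software analogy only (threads and bounded queues standing in for latches and handshake wires; this is not a real async design flow), stage-to-stage 'done' signalling looks like:

    import threading, queue, time

    # Each stage waits on its upstream 'done' (a queue put), computes at
    # its own pace, then asserts its own 'done' downstream. No global clock.
    def stage(name, inbox, outbox, delay):
        while True:
            data = inbox.get()       # wait for upstream 'done' + data
            if data is None:         # propagate shutdown token
                outbox.put(None)
                return
            time.sleep(delay)        # per-stage compute time varies freely
            outbox.put(data + 1)     # asserting 'done' == publishing result

    a, b, c = queue.Queue(1), queue.Queue(1), queue.Queue(1)
    threading.Thread(target=stage, args=("s1", a, b, 0.01)).start()
    threading.Thread(target=stage, args=("s2", b, c, 0.03)).start()

    for x in [0, 10, 20]:
        a.put(x)                     # size-1 queues give backpressure
    for _ in range(3):
        print(c.get())               # 2, 12, 22
    a.put(None)                      # drain and stop

The size-1 queues capture the handshake's backpressure: a fast stage simply blocks until the slower one downstream signals it has consumed the data.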


