
Put together a decorator in Python that uses time.sleep to get really close to constant-time operations. Am I understanding the problem wrong? I'd think something like that could be implemented in any programming language that needs constant-time operations.

https://gist.github.com/beltiras/86294a3e746820e421080b1619b...
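For reference, a minimal sketch of this style of padding decorator (my own naming; the linked gist may differ in detail):

```python
import functools
import time

def pad_to(duration):
    """Pad a function call so it always takes at least `duration` seconds."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            elapsed = time.perf_counter() - start
            if elapsed < duration:
                time.sleep(duration - elapsed)
            return result
        return wrapper
    return decorator

@pad_to(0.010)  # aim for ~10 ms total, regardless of where the compare bails
def check_password(supplied, stored):
    return supplied == stored
```

Note this only pads up to the target; if the wrapped call ever exceeds `duration`, the real timing leaks through, which is exactly the failure mode discussed downthread.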



This is directly addressed in the article:

> Can't the server just delay every password-checking response for exactly 1 millisecond, independently of how long the password checking took? It's normal for computers to have many operations that they're trying to get done simultaneously. Often operations interrupt other operations, and interruptions can last for many seconds when computers run out of RAM. Building a fixed-response-time mechanism that robustly accounts for this is an open problem.

> More fundamentally, when the password check finishes and switches to idling until the response time, the computer normally switches to other operations, speeding up those operations. This is often visible through the timing of those operations even if the timing of the password-checking response is controlled.


I answered a sibling of yours with similar concerns.


The problem is that your "constant time" operation now spends some time doing work and some time not. When it's not doing work, the CPU can perform work for other processes or threads, and this is, at least in principle, measurable. Thus you haven't eliminated the side channel - just changed the way someone can measure it.


If those operations have a similar cap, they can't be measured either. If the load is too high, autoscaling will probably add a node.


If I'm able to influence this computation to take longer than the maximum time allotted for it, this technique will fail open and allow me to measure timings. This is common in the real world, because as the attacker/a user of your application, I generally have a lot of influence on the data being passed to your computation. So I can, for instance, pass giant strings, or conduct many attacks in parallel to increase load on your system.

You can, of course, patch this by sleeping to the nearest interval. But that brings us to a second problem: this technique is quite expensive in terms of wall-clock time.

Very nice Python.
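Concretely, the "sleep to the nearest interval" patch might look like this (hypothetical helper, assuming a fixed bucket width):

```python
import math
import time

def pad_to_next_bucket(start, bucket=0.010):
    """Sleep until the next multiple of `bucket` seconds after `start`.

    Work that overruns one bucket lands in the next one, so an attacker
    sees only which bucket was hit, not the exact duration.
    """
    elapsed = time.perf_counter() - start
    target = math.ceil(elapsed / bucket) * bucket
    time.sleep(max(0.0, target - elapsed))
```

This closes the fail-open hole, at the cost of quantizing every response time upward, and it still leaks coarse information (which bucket the work fell into).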


There are defenses against those attacks. You can put a maximum on the input for example. The idea here is that you can make the time jitter depend on things besides the thing being calculated. I'd hope that anyone having to employ code like this would think about those side issues.


Random time jitter can be denoised with statistics and a large enough sample size.

Even if it couldn't be, while you sleep the machine has more resources to process other requests, and an attacker can observe this to deduce information about the secret.
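To illustrate the denoising point with a toy simulation (made-up numbers): a 1 µs timing difference buried under 100 µs of Gaussian jitter becomes clearly visible once you average enough samples, since the standard error of the mean shrinks as 1/sqrt(N):

```python
import random
import statistics

random.seed(0)

def observe(true_us, jitter_std=100.0):
    # one noisy timing measurement, in microseconds
    return true_us + random.gauss(0.0, jitter_std)

N = 500_000  # standard error of the mean: 100 / sqrt(N) ~ 0.14 us
mean_slow = statistics.fmean(observe(1000.0) for _ in range(N))
mean_fast = statistics.fmean(observe(999.0) for _ in range(N))
# the ~1 us gap now stands well clear of the noise floor
```

Real attacks face correlated (non-Gaussian) noise, so they need more care than this, but the principle is the same.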

Ultimately you want the resources consumed by computation handling secret information to always be equivalent. Of course, this means swimming upstream against decades of programming-language, compiler, and hardware optimization research. If you really want to get it done, you'll find yourself writing assembly and getting really familiar with CPU behavior.
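For the comparison case specifically, the Python standard library does ship a best-effort primitive: hmac.compare_digest, which examines every byte instead of bailing at the first mismatch.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # leaks timing: returns at the first mismatching byte
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_equal(a: bytes, b: bytes) -> bool:
    # runtime depends (mostly) on the lengths, not the contents
    return hmac.compare_digest(a, b)
```

Even this is best-effort in CPython: the interpreter and hardware underneath make no hard guarantees, which is the point about needing assembly-level control.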

A good resource from the article:

https://www.chosenplaintext.ca/articles/beginners-guide-cons...


You didn't read the code. What it tries to do is standardize the execution time of a function to a duration larger than the time the work actually takes. Of course perfect constancy isn't possible, but the remaining jitter will be due to things other than the time it took to execute the workload. Autoscaling will make sure that a lack of resources won't reveal secrets, and if everything running on the same machine is similarly capped, there isn't a side channel left to measure. Throughput shouldn't be affected much, since we are just artificially delaying the response somewhat, not increasing the computational workload of each request.


May I gently submit that it would be both more respectful of other people's time and more availing to yourself personally to dig into the information people are providing rather than debating them about the efficacy of this particular technique? I feel like there's a pattern here of people bringing up subtle CPU effects and you being dismissive of them. But subtle CPU effects are the bread and butter of this research.

This is a really deep topic that resists easy answers.


May I gently submit you are making the same mistake?


> The idea here is that you can make the time jitter depend on things besides the thing being calculated.

Yeah I imagine you could add some de-correlated jitter and rotate the noise pattern often enough that attackers can't filter it out.


time.sleep() also probably has coarser resolution than the timing variations you're trying to mask.


I don't think so. The attack I have read about is a passive attacker just listening to requests and responses. time.sleep() granularity is in microseconds.


Over enough time the remote attacker can distinguish differences on the order of clock cycles.



