"Kuszmaul remembers being surprised that a tool normally used to ensure privacy could confer other benefits."

It was surprising to me too! But reflecting on it more closely, most performance isn't about "faster" in the literal sense of "more instructions run per unit time", but about carefully choosing how to do less work. The security property here, "history independence", is in a way also a statement that "we don't need to, and literally cannot, do any work that tracks history".

It's definitely an interesting approach to performance, essentially using cryptography as a constraint to prevent extra work. What properties do we need, and what properties can we ignore? The question becomes: if we MUST ignore this property cryptographically, how does that affect the process and the resulting performance?

It certainly feels like a useful perspective: a rigorous approach to performance that could be a path to more improvements in key cases.


I don't think what you're saying is accurate. Your statement would be correct if the measure of how slow the algorithm is were how much compute time it takes.

But the measurement they're actually using is how many books need to be moved. They're free to use infinite compute time AIUI.


I think the point is that having less information available can improve performance because it eliminates processing of extraneous data.

Said another way, stateless approaches are simpler than stateful ones. There can be an instinct to optimize by using more information, but in this case at least that was a red herring, and adding the constraint improved the results.


If there's no cap on computation time, processing extraneous data is free.

Also, only if your algorithm is perfect and will never be misled by extraneous info.

That's a good insight. I had always thought the key to good algorithm / data structure design was to use all the information present in the data set. For example, if you know a list is sorted, you can use binary search. But perhaps choosing how much of it to omit is key, too. It comes up less often, however. I can't think of a simple example.
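
To make the sorted-list case concrete, here's a quick Python sketch (my own toy example, not from the article), where sortedness is exactly the extra information binary search gets to exploit:

    import bisect

    sorted_xs = [2, 3, 5, 8, 13, 21]

    # Sortedness is the extra information: bisect homes in on 13 in O(log n) steps.
    i = bisect.bisect_left(sorted_xs, 13)
    found = i < len(sorted_xs) and sorted_xs[i] == 13

    # Discard that information and you're back to a linear scan, O(n).
    found_linear = 13 in sorted_xs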

> But perhaps choosing how much of it to omit is key, too. It comes up less often, however. I can't think of a simple example.

An example of that is a linked list with no particular sort order. By not caring about maintaining an order, an insertion simply appends or prepends a node and is O(1).

As soon as you have to maintain any additional invariant, the insertion cost goes up, either directly or amortized. Roughly, as a toy Python sketch (the class names are mine, just to show where the extra invariant costs you):
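
    class Node:
        def __init__(self, value, nxt=None):
            self.value = value
            self.next = nxt

    class UnsortedList:
        """No ordering invariant: insert just prepends, O(1)."""
        def __init__(self):
            self.head = None

        def insert(self, value):
            self.head = Node(value, self.head)

    class SortedList:
        """Keeps a sorted invariant: insert must walk the list, O(n) worst case."""
        def __init__(self):
            self.head = None

        def insert(self, value):
            if self.head is None or value <= self.head.value:
                self.head = Node(value, self.head)
                return
            cur = self.head
            while cur.next is not None and cur.next.value < value:
                cur = cur.next
            cur.next = Node(value, cur.next)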


A real-world analogue is detective stories that work by adding irrelevant details, where it's up to the reader to filter down to the truly important ones to solve the case.

To generalize: if figuring out which information is important costs more than assuming none of it is, it's better to simplify.


So it's basically a matter of figuring out what problem context can and should be selectively hidden from the algorithm in order to make it work 'smarter' and not 'harder.' Weird.

The actual better algorithm does use history dependence, though, so I found this part of the article misleading.


