
This isn't about Amdahl's law; this is about the fact that many problems are simply hard. Let's look at some examples.

One common example is the need to split a larger task into smaller subcomponents, but where all of the components combined need to obey some global constraints. A specific example: we're compressing a video frame, the result must fit within a certain size, and we can't parallelize over multiple frames for latency reasons. This means we need to split the frame into chunks, but somehow all the chunks have to communicate with each other in real-time, as they work, in order to ensure they obey the global constraint. Do you have a "master" thread that manages them all and makes decisions? Do you use some sort of algorithm where they act as separate agents, asking each other questions? Suddenly this is a lot more complicated than what you started with.
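As a toy illustration of the "master" option (not how any real encoder does it; every name and number below is made up), here is a sketch in Go where chunk workers ask a central coordinator for a share of the frame's bit budget before committing to an encoding:

    package main

    import (
        "fmt"
        "sync"
    )

    // request is what a chunk worker sends to the budget coordinator.
    type request struct {
        want  int      // bits the worker would like to spend
        grant chan int // coordinator replies with the bits actually granted
    }

    // coordinator hands out pieces of the frame's global bit budget.
    func coordinator(budget int, reqs <-chan request) {
        for r := range reqs {
            granted := r.want
            if granted > budget {
                granted = budget // clamp to whatever is left
            }
            budget -= granted
            r.grant <- granted
        }
    }

    func encodeChunk(id int, reqs chan<- request, wg *sync.WaitGroup) {
        defer wg.Done()
        r := request{want: 1000, grant: make(chan int)}
        reqs <- r         // ask the "master" how many bits we may use
        bits := <-r.grant // block until it decides
        fmt.Printf("chunk %d encodes with %d bits\n", id, bits)
    }

    func main() {
        reqs := make(chan request)
        go coordinator(3500, reqs) // total frame budget: 3500 bits (made up)

        var wg sync.WaitGroup
        for i := 0; i < 4; i++ {
            wg.Add(1)
            go encodeChunk(i, reqs, &wg)
        }
        wg.Wait()
        close(reqs)
    }

Even in this trivial form, the interesting decisions (who asks first, what happens when the budget runs dry, whether a chunk can give bits back) live in the coordination protocol, not in the per-chunk work.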

Another example is a search algorithm. Whether you're performing a minimax search of a game tree or simplex optimization, you're implementing algorithms that are normally not parallel. Parallelizing minimax is actually incredibly difficult and requires making a lot of hard decisions about where you branch threads, where a thread gives up, how to avoid duplication, and so forth. Fancy programming tools can give you features like thread-safe hash tables that help you, but they don't solve the actual problem. See any multithreaded chess engine for an example of this problem. Note particularly that the engines don't get perfect speedup from multithreading -- but it's NOT because of Amdahl's Law! It's because the searches between threads unavoidably duplicate at least some work.
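To make the duplicated-work point concrete, here is a hedged sketch (toy tree, made-up scores, nothing like a real engine) of a naive "root split" in Go: each goroutine searches one root move with a full (alpha, beta) window, so the cutoffs a sequential search would have carried from earlier root moves into later ones never happen:

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    type node struct {
        score    int
        children []*node
    }

    // alphaBeta is a plain fail-hard negamax with alpha-beta pruning; visited
    // counts how many nodes it touches.
    func alphaBeta(n *node, alpha, beta int, visited *int64) int {
        atomic.AddInt64(visited, 1)
        if len(n.children) == 0 {
            return n.score
        }
        for _, c := range n.children {
            if s := -alphaBeta(c, -beta, -alpha, visited); s > alpha {
                alpha = s
            }
            if alpha >= beta {
                break // cutoff: later siblings are never visited
            }
        }
        return alpha
    }

    // parallelRoot searches each root move in its own goroutine. Every one of
    // them starts with the full window, so cutoffs found while searching one
    // root move cannot help the others.
    func parallelRoot(root *node, visited *int64) int {
        scores := make([]int, len(root.children))
        var wg sync.WaitGroup
        for i, c := range root.children {
            wg.Add(1)
            go func(i int, c *node) {
                defer wg.Done()
                scores[i] = -alphaBeta(c, -1<<30, 1<<30, visited)
            }(i, c)
        }
        wg.Wait()
        best := -1 << 30
        for _, s := range scores {
            if s > best {
                best = s
            }
        }
        return best
    }

    // tree builds a uniform toy tree with made-up leaf scores.
    func tree(depth, branch, seed int) *node {
        if depth == 0 {
            return &node{score: (seed*31 + 7) % 100}
        }
        n := &node{}
        for i := 0; i < branch; i++ {
            n.children = append(n.children, tree(depth-1, branch, seed*branch+i))
        }
        return n
    }

    func main() {
        root := tree(6, 5, 1)
        var seq, par int64
        fmt.Println("sequential score:", alphaBeta(root, -1<<30, 1<<30, &seq))
        fmt.Println("parallel score:  ", parallelRoot(root, &par))
        fmt.Println("nodes visited: sequential", seq, "vs parallel", par)
    }

Run it and the parallel version reports the same score but typically visits more nodes than the sequential one; that extra work is the price of splitting, and it exists regardless of how small the serial fraction is.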

"Moving data, operating on it" would be grossly oversimplifying real-world, complex programs like these, and there's nothing a functional language would do to "trivially parallelize" them. Dependencies in calculations are often so tangling that you cannot naively parallelize them without making dramatic, possibly sacrificial, changes.

Tools like FP can be useful, but they don't solve problems of inherent complexity. There is no silver bullet.




I have a little hunch that any parallelism "answer" is going to be derived primarily from ever-more-grand exploitation of its strengths, not patching of its weaknesses.

We know that in a formal reasoning sense, an asynchronous system is less powerful than a synchronous one, because you can implement any async design in a sync system, but not the other way around. Yet in practical use, asynchronous, "hard" decoupled systems are the ones that scale better and are easier to maintain. So we keep telling ourselves, "decouple everything" and have invented a gross array of patterns and techniques that add "decoupling pixie dust."

We know this, but usually don't extrapolate it to its natural conclusion: we're using brute-force engineering to attack our needs from the wrong direction. We try to break arbitrary sequential computations into parallel ones through sufficient cleverness, instead of folding parallel designs back into sequential ordering where that produces wins, and breaking out the "sequential toolbox" only when the problem really taxes our abilities of algorithm design, as in the cases you have outlined.

Of course, adhering to parallel designs from the beginning is hardly a silver bullet either, but there are real wins to be had by exploring the possibility, and a good number of them can be experienced today simply by working that style into "ordinary" imperative or functional code with abstractions like actors and message-passing.
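For what it's worth, the actor style is cheap to try even in a mainstream language. A minimal sketch in Go (the "counter" actor is a made-up example), where state is owned by one goroutine and touched only via messages:

    package main

    import "fmt"

    type msg struct {
        delta int
        reply chan int // actor answers with the new total
    }

    func counter(inbox <-chan msg) {
        total := 0 // private state: only this goroutine ever reads or writes it
        for m := range inbox {
            total += m.delta
            m.reply <- total
        }
    }

    func main() {
        inbox := make(chan msg)
        go counter(inbox)

        for i := 1; i <= 3; i++ {
            reply := make(chan int)
            inbox <- msg{delta: i, reply: reply}
            fmt.Println("total is now", <-reply)
        }
        close(inbox)
    }

No locks are needed because nothing is shared; the mailbox serializes access to the state for you.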


Correct me if I'm wrong, but in a formal sense you get the synchronous system inside the asynchronous one trivially. So the two are of equivalent power.

Think of threading systems: you can implement a multi-threaded system with your own green-threads library, which is async on top of sync. You can get sync on top of async with native threads by simply using one thread. Or, if you want to be complicated, by using multiple threads with locks that serialize everything to a single synchronous ordering.


> We know that in a formal reasoning sense, an asynchronous system is less powerful than a synchronous one, because you can implement any async design in a sync system, but not the other way around.

As someone else pointed out as well, I just don't see it. If the components of an asynchronous system can wait on an event, you can make it synchronous by driving it with deterministic synchronous events. Imagine a bunch of independent actors that always wait for a clock signal, process one piece of data, then wait for the next, and so on.

So I actually see a synchronous system as a restricted case of an asynchronous system.
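To make that concrete, here is a minimal Go sketch of the clock-signal idea: independent actors, but a global tick plus a barrier keeps them in lockstep, so no actor ever runs ahead of the others (the printed "work" is just a placeholder):

    package main

    import (
        "fmt"
        "sync"
    )

    func actor(id int, tick <-chan int, done *sync.WaitGroup) {
        for t := range tick {
            fmt.Printf("actor %d processes step %d\n", id, t)
            done.Done() // report this step finished
        }
    }

    func main() {
        const actors = 3
        ticks := make([]chan int, actors)
        var done sync.WaitGroup
        for i := range ticks {
            ticks[i] = make(chan int)
            go actor(i, ticks[i], &done)
        }

        for t := 0; t < 4; t++ { // the global clock
            done.Add(actors)
            for _, c := range ticks {
                c <- t // broadcast the tick
            }
            done.Wait() // barrier: nobody moves on until everyone has stepped
        }
        for _, c := range ticks {
            close(c)
        }
    }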

If you think about it, the world is inherently asynchronous. If you have two agents in the real world, they process and act asynchronously; there is no global event or clock system that drives everything.


When you start talking about latency, you are inherently talking about Amdahl's Law, because the question that immediately arises is: can we make a fixed amount of work X complete in less time with more processing power Y? And in your problem domain of video coding, especially if it is for real-time face-to-face communication or telematics, latency matters. So Amdahl's Law matters, because it dictates the bound on latency.

Doing more work X (making the pie bigger) in about the same time, given more processing power Y, is Gustafson's Law: http://en.wikipedia.org/wiki/Gustafson%27s_Law. Latency, like the speed of light, is one of those physical universal properties we won't be able to tech our way out of. Latency is inherently a serial problem. What changes when we could try every solution at the same time?
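For reference, the textbook formulations of the two laws (p is the parallelizable fraction, N the processor count):

    S_\text{Amdahl}(N)    = \frac{1}{(1 - p) + p/N}    (fixed problem size)
    S_\text{Gustafson}(N) = (1 - p) + pN               (fixed time, scaled problem)

With p = 0.95 and N = 100, Amdahl caps the speedup at roughly 17x, while Gustafson says you can do roughly 95x as much work in the same wall-clock time; the second number does nothing for the latency of any single frame.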

Nearly all of the hard problems we are solving right now don't need an exact solution. They can be time bounded, quality bounded, or both. What we lack is clarity in how to create algorithms that are probabilistic and progressive: algorithms that refine and converge towards a solution over time. Better languages can help with that.
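A hedged sketch in Go of what "time bounded and progressive" can look like in practice (the Monte Carlo pi estimate is just a stand-in task; the shape, deadline in and best-so-far out, is the point):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // estimatePi keeps refining its answer until the deadline, then returns
    // whatever precision it has reached so far.
    func estimatePi(deadline time.Time) (estimate float64, samples int) {
        inside := 0
        for time.Now().Before(deadline) {
            for i := 0; i < 10000; i++ { // refine in small batches
                x, y := rand.Float64(), rand.Float64()
                if x*x+y*y <= 1 {
                    inside++
                }
                samples++
            }
        }
        return 4 * float64(inside) / float64(samples), samples
    }

    func main() {
        pi, n := estimatePi(time.Now().Add(50 * time.Millisecond))
        fmt.Printf("pi ~ %.4f after %d samples\n", pi, n)
    }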

Here are some examples of probabilistic parallel minimax algorithms.

http://www.top-5000.nl/ps/Parallel%20randomized%20best-first...

http://www.math.md/files/basm/y2010-n1/y2010-n1-(pp33-46).pd...





