Nuts and Bolts of Multithreaded Programming (2011) (intel.com)
109 points by fspacef on July 2, 2017 | 23 comments



This is required reading as part of ME344: Introduction to High Performance Computing at Stanford.

I think it gives a great high-level overview of how parallel programming works. It's a great starting point for those interested in learning more about HPC systems!


Given that the sustainability of Intel's profit margins depends on it, I am surprised they do not put more effort into making parallel programming easier. For example, GHC Haskell has a very promising parallelism/concurrency story with its many available programming models, including software transactional memory and lightweight threads, yet it is Microsoft funding it and not Intel.
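To make that concrete, here's a minimal sketch (mine, not from Intel or the GHC docs) of those two models together: lightweight forkIO threads coordinating through STM's TVars. Compile with ghc -threaded and run with +RTS -N to spread the threads across cores; the transfer/a/b names are just for illustration.

    -- Lightweight threads (forkIO) + software transactional memory (stm package).
    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.STM

    -- Move n units between two shared counters; the whole block commits
    -- atomically or retries, with no explicit locks anywhere.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to n = do
      modifyTVar' from (subtract n)
      modifyTVar' to   (+ n)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      -- Spawn 100 lightweight threads, each doing one transfer.
      mapM_ (\_ -> forkIO (atomically (transfer a b 1))) [1 .. 100 :: Int]
      threadDelay 100000  -- crude wait for the forks; fine for a sketch
      atomically ((,) <$> readTVar a <*> readTVar b) >>= print
      -- Should print (0,100) once every transfer has run; the transactions
      -- can interleave in any order without corrupting the totals.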


Intel Labs did do some work on functional language implementation, including a Haskell compiler (backend): https://github.com/IntelLabs/flrc

I think the focus was on vectorisation rather than task parallelism, though. And it's been abandoned for years.

I'm not sure Intel's sustainability is uniquely dependent on parallel programming. In fact, given that Intel processors have the best sequential performance, one could even argue that it's to their advantage that parallel programming remains awkward.


Maybe the effort just isn't where you're looking? Intel provides the Threading Building Blocks library for C++; some top-notch debugging & profiling tools with multi-threading support; very good documentation; and they do quite a bit of community outreach to educate about and promote multi-threaded programming. (I'm not affiliated with them, just grateful for all of that stuff!)


I'm aware of TBB, but IMHO C++ is not the future of parallel and concurrent programming, or at least not the future I am hoping for.


What is one example of Haskell being used effectively for parallel execution in a real-world scenario?


Haskell is used for critical projects in production at Standard Chartered, Barclays and Facebook. There are also numerous startups. For both parallelism and concurrency, Haskell has a particularly good story. There's now even an O'Reilly book, "Parallel and Concurrent Programming in Haskell":

http://chimera.labs.oreilly.com/books/1230000000929/index.ht...
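Since you asked for something concrete: the simplest pattern is deterministic data parallelism via Control.Parallel.Strategies from the parallel package, which the book covers early on. A minimal sketch (the example is mine, not from the book), with a deliberately slow fib standing in for real per-item work:

    -- Build: ghc -O2 -threaded Par.hs
    -- Run:   ./Par +RTS -N        (use all available cores)
    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- A deliberately slow pure function, standing in for real work.
    fib :: Int -> Integer
    fib n | n < 2     = fromIntegral n
          | otherwise = fib (n - 1) + fib (n - 2)

    main :: IO ()
    main = print (sum (parMap rdeepseq fib [28 .. 34]))
    -- parMap sparks one evaluation per list element; the answer is the
    -- same on 1 core or 8, only the wall-clock time changes.

The nice property is that this is pure parallelism with no concurrency visible in the code at all; the concurrent side (forkIO, STM, async) is the second half of the book.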


They are working on a Xeon-optimized version of Torch (https://github.com/intel/torch), presumably in an attempt to get more of the deep-learning community onto Intel hardware.


Although it was updated in 2011, the date in the HN title is a bit misleading.

> As Intel puts two cores on a single piece of silicon and as multi-socket systems continue to grow, shared memory systems are going to become the norm.

Clearly this was originally published quite a bit earlier.


Good introductory article on cores, threads, shared memory and all. It would have been good if there were a few lines about debugging/troubleshooting multi-threaded systems.


The title mentions multithreaded programming, while the first line of the text (as well as the major emphasis throughout the article) is on parallel programming.

And surprisingly, there is no mention of GPU.

Almost as if they want you to think parallelism can only be achieved through CPUs (cores and threads) but don't want to admit it in the title.


Of course, every article has bias, but I still think it does the title justice, as it talks about exposing concurrency via processes/threads.

Also note that it was written in 2011; I'm not sure GPU-based HPC was as common then, but I could be wrong.


I think the article structure is quite standard. It first introduces the more general concepts (parallel programming, zooming in on MIMD on shared-memory machines) and then focuses on a subset (multithreading) while also alluding to other parallel programming techniques (MPI, etc.).


Can a SIMD system really be considered "multi-threaded"?


SIMD is the classic example of how you don't need concurrency to have parallelism.


Technically, most modern GPUs are SIMT, meaning that they aggregate lockstep threads into vector instructions, so yes.


I wouldn't think so. Not sure why you asked though.


There's a question upthread asking why GPUs are not mentioned in "Nuts and bolts of multi-threaded programming". Hence my wonderment.


Then you didn't read the comment upthread fully.

It did acknowledge the title and contrasted it with the tone of the article (the first half at least), which is far broader yet makes no mention of GPUs (the basics of parallel algorithms, parallel APIs, and so on).


But GPUs aren't SIMD. (In some sense they're almost the opposite.)


In an incorrect sense, yes. :-)

SIMD = Single Instruction, Multiple Data, meaning the same instruction is applied to multiple different values simultaneously. That's exactly what GPUs do.


That's exactly what GPUs do within a single thread, but GPUs are also about threading, usually sporting tens, if not hundreds, of small cores, allowing them to compute that many pixels (or rather shader outputs) in parallel. Otherwise, it wouldn't scale much.

That being said, I understand you only wanted to point out the error in the post above.


Actually, they are.



