
depends

Python is still fundamentally an n=1 language: only one thread can run the interpreter at any time (the GIL), as in many scripting languages. Thus any performance gains from using multiple threads have to come from work done outside the interpreter (syscalls, calls into C code) -- running pure Python code in multiple threads won't increase performance (though it still provides concurrency).
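
To make that concrete, here's a rough sketch of my own (not from any particular codebase): the same thread pool speeds up blocking I/O, where the interpreter releases the GIL while waiting, but buys essentially nothing for pure-Python CPU work.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def io_bound(_):
        time.sleep(0.5)  # blocking call; the GIL is released while waiting

    def cpu_bound(_):
        return sum(i * i for i in range(2_000_000))  # pure Python, holds the GIL

    for fn in (io_bound, cpu_bound):
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=4) as pool:
            list(pool.map(fn, range(4)))
        print(fn.__name__, round(time.perf_counter() - start, 2), "s")

On a typical machine the io_bound run finishes in roughly the time of a single sleep, while the cpu_bound run takes about as long as doing the work serially.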

Most libraries that do their heavy lifting in native code are written with this in mind, and thus allow ample performance gains, should those operations actually limit performance.

Of course, nothing keeps one from using entirely different means of concurrency: message passing (e.g. the ZeroMQ approach), RPC, shared memory and so on are all possible with Python.
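
For instance, a minimal message-passing sketch with pyzmq (assuming pyzmq is installed; the addresses, payloads and file names are just made up for illustration):

    # server.py -- run in one process
    import zmq
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind("tcp://127.0.0.1:5555")
    while True:
        work = sock.recv_json()                     # wait for a request
        sock.send_json({"result": sum(work["numbers"])})

    # client.py -- run in another process
    import zmq
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REQ)
    sock.connect("tcp://127.0.0.1:5555")
    sock.send_json({"numbers": [1, 2, 3]})
    print(sock.recv_json())                         # {'result': 6}

Each side is its own process with its own interpreter, so the GIL never becomes a shared bottleneck.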

So I'd say it's quite manageable, and that the GIL isn't really a significant limitation for most applications. However, it also means that in many cases process-level parallelism is preferable; e.g. in the context of web applications this is the usual approach, alongside async, and it doesn't have to cost a ton of memory (see uwsgi and other pre-forking app servers).



Everything true, though I would add that Python makes multiprocess parallelism fairly painless with the standard multiprocessing package.
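
For example (a minimal sketch, not tied to any particular project), this sidesteps the GIL by running one interpreter per worker process:

    from multiprocessing import Pool

    def cpu_bound(n):
        # pure-Python work that would not scale across threads under the GIL
        return sum(i * i for i in range(n))

    if __name__ == "__main__":        # required on platforms that spawn workers
        with Pool() as pool:          # defaults to one worker per CPU core
            results = pool.map(cpu_bound, [2_000_000] * 4)
        print(results)

The usual caveat is that arguments and results are pickled and shipped between processes, so it pays off when the per-task work dwarfs that overhead.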



