Because it's complicated and largely unnecessary. Emacs already has concurrency (e.g. make-network-process and asynchronous subprocesses); what it lacks is multiple Emacs Lisp stacks. But the reality is that having two threads of execution running Lisp is not going to solve any of your performance problems, and it adds a ton of complexity to the rest of Emacs. If you want to do something in the background, fork a subprocess and talk to it over a pipe. That's concurrent, easy to reason about, and has existed in Emacs for decades.
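To make that concrete, here is a minimal sketch of the fork-and-pipe pattern in Emacs Lisp; the process and buffer names are made up, and the filter just appends whatever the subprocess writes to its pipe:

    ;; Run a command asynchronously; its output arrives over a pipe
    ;; and is handled by a filter while Emacs keeps serving the user.
    (let ((proc (start-process "my-worker" "*my-worker*" "ls" "-l" "/tmp")))
      (set-process-filter
       proc
       (lambda (process chunk)
         ;; Called with each chunk of output as it becomes available.
         (with-current-buffer (process-buffer process)
           (goto-char (point-max))
           (insert chunk)))))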
None of the Unix shells (csh, bash, etc.) are multithreaded either. Like Emacs, what concurrency they have comes from fork, exec, and installing handlers for signals from child processes.
If I were to write a web server in Emacs Lisp (or bash), I would probably want multithreading in Emacs (or bash). But I have never felt the desire to write a web server, or anything else that needs multiple threads, in Emacs Lisp, and I've written tens of thousands of lines of Emacs Lisp.
Although my web browser (Firefox) is multithreaded, I spend orders of magnitude more time waiting for its UI to become responsive again than I do waiting for Emacs or any of the Unix shells I have used.
Here are some features that to the best of my knowledge are currently impossible:
* Non-blocking save on network drives
* Non-blocking autosave
* Rebuilding tag tables in the background
* Loading and fontifying large files in the background (i.e. concurrent fontification, not just jit-lock)
* Concurrent url fetching and downloads (e.g. updating el-get without blocking) [1]
* Concurrent spellcheck
* Fast scrolling in large buffers (especially complex XML files opened in nxml-mode) without annoying hacks like http://stackoverflow.com/q/4160949
There are plenty more improvements that wouldn't affect me personally. I imagine erc and other networked packages would also benefit from true concurrency.
[1] I just started using el-get on my laptop and I haven't checked this yet. Maybe it somehow works.
So it did not bother you that I claimed that Emacs is just as concurrent as any of the Unix shells, and that you can do most of those things in any Unix shell (I do not know enough to say whether you can do non-blocking saves on network drives)? :) :)
Look, concurrency is complicated, and this is not the place for a long exposition on it, but the Emacs of 1995 (and, again, almost any Unix shell) can do most of what you listed. Often it forks a program written in C and listens for a signal informing it of the result. M-x man is probably the best example: it forks a C program (or a bunch of them connected with Unix pipes) and waits for it to fill a buffer with the rendered man page. On modern hardware the rendering is snappy, but M-x man was written in the early 1990s, when some man pages took many seconds to render, so it was written so that the user can continue editing (switch buffers, etc.), and when the man page is ready, Emacs switches to the buffer containing it.
Actually, forget about the comparison to the shell. (It's misleading unless you are old enough to remember the commands fg, bg and jobs. Nowadays everyone uses the windowing system instead of fg, bg and jobs.) My bad.
Proving you wrong about 3 items on your list sounds like too much work, but let me try again to make my basic point.
In Emacs, start a new shell process by saying M-! sleep 10 && echo hello && echo world &
The ampersand at the end is important. If you leave it off, Emacs remains unresponsive for the ten seconds during which the shell process runs.
If you include the ampersand at the end, on the other hand, you can switch buffers and perform unrelated tasks while the shell process is running. Then after ten seconds, when the shell process ends, Emacs shows a message in the echo area informing you that it is done, and if you switch to the buffer named "*Async Shell Command*", you can see its output.
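For what it's worth, the same thing is available directly from Lisp; as far as I know, async-shell-command takes care of the trailing ampersand for you:

    ;; Runs in the background; output collects in *Async Shell Command*.
    (async-shell-command "sleep 10 && echo hello && echo world")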
Emacs has been able to do that for decades, and it is definitely not what great-great-great-grandparent meant by "multithreading", but it is the only kind of "concurrency" I have ever needed or wanted in Emacs, to the best of my recollection -- and when I send an email, search a buffer, or look up someone's phone number in my address book, I am using an Emacs command I wrote.
> Emacs has been able to do that for decades, and it is definitely not what great-great-great-grandparent meant by "multithreading"
You've been talking to the same guy for this entire thread. (i.e., the "great-great-great-grandparent" is the same person you just responded to.)
> ...I have ever needed or wanted in Emacs, to the best of my recollection -- and when I send an email, search a buffer, or look up someone's phone number in my address book, I am using an Emacs command I wrote.
It seems like you've had a lot of experience with Emacs. Can the Emacs of today do any of the things he listed that would improve his workflow? I.e., instead of proving him wrong about 3 things, can you prove him wrong about 1 thing?
I'm curious about the state of the art when it comes to Emacs, so it would be cool to get some solid answers.
> Instead of proving him wrong about 3 things, can you prove him wrong about 1 thing?
OK. One of the things on his list is, "Concurrent url fetching and downloads". In Emacs Lisp a programmer can call START-PROCESS to invoke wget "asynchronously" (i.e., in such a way that the Emacs process can continue to respond to the user's commands). Moreover, Emacs Lisp includes a way (which I do not remember and cannot easily look up) to arrange for code to be run when the wget process terminates. So obviously this code can read the file downloaded by the wget process, load it into an editing buffer, then bring that buffer (or more precisely a window onto that buffer) into the foreground (or notify the user that it is ready in some other way).
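If memory serves, the mechanism being alluded to is set-process-sentinel. A sketch of the approach just described, with made-up names and paths:

    ;; Fetch a URL with wget in the background, then load the result.
    (let ((proc (start-process "my-wget" "*my-wget*"
                               "wget" "-q" "-O" "/tmp/download.el"
                               "http://example.com/download.el")))
      (set-process-sentinel
       proc
       (lambda (process event)
         ;; EVENT is a string such as "finished\n" or "exited abnormally...".
         (when (string= event "finished\n")
           ;; Bring the downloaded file into an editing buffer.
           (find-file "/tmp/download.el")))))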
I know enough about Unix to know that START-PROCESS is doing a fork and an exec, and my reading of the source code file /lisp/man.el told me that Unix signals would almost certainly be involved in notifying the Emacs Lisp code when the wget process is done.
START-PROCESS probably does not work on Windows BTW.
ADDED. The above example of concurrency is so simple that it might give the reader a misleading impression about Emacs. So let me just say that although I use OS X, I almost never use Terminal or iTerm or such because I prefer to use Emacs buffers (in shell mode) to interact with Unix shells and other programs with command-line interfaces. If I were to start a long-running computation in one (shell-mode) buffer, I could use Emacs for unrelated tasks while the computation is running.
Some of these exist, and all are possible without threads.
> Non-blocking save on network drives; Non-blocking autosave
This requires a memcpy and the aio_* functions. When a save is requested, you make a copy of the buffer in memory (so that what ends up on disk is exactly what the buffer looked like when the save was requested), then you stream this data to disk with aio_write (perhaps making the write atomic by doing a synchronous rename after all the data is on disk). This would probably take no more than a few hours to implement, and most of the complexity can be handled with atomic writes and aio_cancel. I think the biggest problem is the "userspace" semantics: after-save-hook expects to run after the data is on disk but before the user has changed the buffer in any way. What do you do about that?
(I've done this before. I have an Emacs mode at work that saves to a special centralized database, in addition to the local disk, at every modification. We just queue the write in before-save-hook and tell another process to read the data from a temp file that we create. It works perfectly, even on Windows!)
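A rough sketch of that general pattern; the hook function, temp-file prefix, and helper program below are all illustrative, not the actual code:

    ;; Before each save, dump the buffer to a temp file and hand it to
    ;; a background process; the normal save then proceeds as usual.
    (defun my-queue-buffer-for-upload ()
      (let ((tmp (make-temp-file "my-upload-")))
        ;; Writing the temp file is local and fast; the slow work
        ;; (network, database, ...) happens in the subprocess.
        (write-region (point-min) (point-max) tmp nil 'quiet)
        (start-process "my-uploader" nil "my-upload-helper" tmp)))

    (add-hook 'before-save-hook #'my-queue-buffer-for-upload)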
> Rebuilding tag tables in the background
This is easy and eproject already does it. You run etags in a background process and when it's done, point Emacs at the new tags tables. Since the tags tables don't do anything until a lookup is requested, this is completely asynchronous.
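The same start-process/sentinel pattern as in the download example, applied to etags (paths are illustrative):

    ;; Rebuild TAGS in the background; only point Emacs at the new
    ;; table once the etags process has finished.
    (let ((proc (start-process-shell-command
                 "my-etags" "*my-etags*" "etags *.c *.h")))
      (set-process-sentinel
       proc
       (lambda (process event)
         (when (string= event "finished\n")
           (visit-tags-table "TAGS")))))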
> Loading and fontifying large files in the background (i.e. concurrent fontification, not just jit-lock)
This you can't do unless you can guarantee that matchers are bounded and generate correct results no matter what the boundaries are. I don't know of any programming mode that could be made to meet this invariant easily; consider the case:
    /* this code is bad:
    int i = 0; */
If we had two threads fontifying this block, and the bounds were set to thread 1 = line 1, thread 2 = line 2, the results are going to be wrong. The first line will be treated as a comment and the second line will look like a normal variable declaration because getting the correct syntax information requires analyzing the entire first line. Any language with multiline constructs (string literals, preprocessor directives, comments, etc.) will have this problem, thus making parallelization Not Easy. If font-locking is slow, profile it and fix the algorithm, but don't expect speedups for free. Parsing programs that are invalid (as most are while being typed; you type characters, not tokens) and guessing what the user "means" is hard.
> Concurrent url fetching and downloads (e.g. updating el-get without blocking).
Fork curl or wget. Display the result when done. Or, implement HTTP from scratch; network IO is already nonblocking.
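For the "network IO is already nonblocking" route, the stock url.el already exposes an asynchronous fetch with a callback; a minimal sketch (the URL is a placeholder):

    (require 'url)
    ;; url-retrieve returns immediately; the callback runs in the
    ;; response buffer (headers plus body) once the transfer is done.
    (url-retrieve "http://example.com/"
                  (lambda (status)
                    (if (plist-get status :error)
                        (message "fetch failed: %S" (plist-get status :error))
                      (message "fetched %d bytes" (buffer-size)))))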
> Concurrent spellcheck
This is already implemented that way: a pipe is opened to aspell and words are sent to it as necessary. If parsing the results is too expensive and you don't care about interactivity, wait for bigger batches of data to come back from aspell. All IO to other processes is asynchronous and can be made synchronous with special calls like accept-process-output.
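A stripped-down version of that pipe-to-aspell idea (ispell.el/flyspell do something more elaborate; the process name and filter here are only illustrative):

    ;; Talk to aspell over a pipe and read its replies asynchronously.
    (let ((proc (start-process "my-aspell" nil "aspell" "pipe")))
      (set-process-filter
       proc
       (lambda (process output)
         ;; OUTPUT arrives whenever aspell writes something back; a real
         ;; client would parse the "*" / "&" / "#" result lines here.
         (message "aspell: %s" output)))
      (process-send-string proc "helo wrld\n"))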
> Fast scrolling in large buffers (especially complex XML files opened in nxml-mode)
Thank you, especially for your notes on the aio_* functions. As promised, I sent you something by email, but know that your comment is worth more than that.
> Fast scrolling in large buffers (especially complex XML files opened in nxml-mode)
I think nxml-where.el is to blame for the slow scrolling in large XML files. I wanted to offload that work to another thread, but a simpler solution is to update the displayed path only after scrolling stops.
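One way to get that post-scroll behaviour is an idle timer; in this sketch the body of the timer function is a stand-in for whatever nxml-where actually does to recompute the path:

    ;; Recompute the XML path only after the user has been idle for a
    ;; moment, so rapid scrolling never waits on the computation.
    (defvar my-xml-path-timer nil)

    (defun my-schedule-xml-path-update ()
      (when (timerp my-xml-path-timer)
        (cancel-timer my-xml-path-timer))
      (setq my-xml-path-timer
            (run-with-idle-timer
             0.3 nil
             (lambda ()
               ;; placeholder for the real path computation/display
               (message "update XML path here")))))

    (add-hook 'post-command-hook #'my-schedule-xml-path-update)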
It's also worth pointing out that Emacs and Emacs Lisp need not be multithreaded in order to implement specific asynchronous operations. If aio did not exist, there would be nothing stopping you from creating a thread (from C), doing your blocking save there (and nothing else), and sending a note back to the Lisp side when it's done. It's generic threading that adds complexity, because that would require all state-changing operations to be thread-safe, which can be very difficult to "bolt on" to a complex system like Emacs. (Not to mention locking overhead; see Python's GIL debate.)
A good example of this approach is the 0MQ messaging library. While it is internally threaded, it does not care what the application does about thread safety; there is no way that you can ever "see" its threads. It simply requires that you pass ownership of blocks of memory to the library, thus avoiding locking and issues of thread safety while still using threads.
Gnus makes Emacs unresponsive while it performs network IO. Threads would solve this problem, and salvage the usability of a really nice piece of software.
https://github.com/technomancy/emacs-starter-kit