Async/await makes a lot of asynchronous code easier to write, which is great, but it amplifies some of the mistakes people were already making. The two examples in the post are:
- Developers won't anticipate that accessing a task's Result property can introduce a deadlock.
- Developers won't anticipate race conditions due to a concurrent SynchronizationContext.
The problem from the first example actually does happen quite often. There are plenty of Stack Overflow questions and blog posts about it. IMO there should be compiler warnings whenever you use Task.Result or Task.Wait. Actually, I would prefer compile errors instead of just warnings, and for the implementations of Result and Wait to just be `throw new WHAT_IS_WRONG_WITH_YOU_DO_NOT_BLOCK_ON_TASKS_AAARGH_EXCEPTION()`, but that might be a bit harsh.
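For anyone who hasn't hit it, here's a minimal sketch of the classic deadlock, assuming a WinForms-style single-threaded SynchronizationContext (the form and method names are just for illustration):

```csharp
using System;
using System.Threading.Tasks;
using System.Windows.Forms; // assumes a WinForms project, which installs a single-threaded SynchronizationContext

public class DeadlockForm : Form
{
    // No ConfigureAwait(false): the continuation after the await is posted
    // back to the captured UI SynchronizationContext.
    private async Task<string> DownloadAsync()
    {
        await Task.Delay(1000);
        return "done";
    }

    private void Button_Click(object sender, EventArgs e)
    {
        // .Result blocks the UI thread. The continuation above needs that
        // same (now blocked) context to run, so neither side can proceed.
        var text = DownloadAsync().Result;
    }
}
```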
The problem from the second example is not really an issue with async/await so much as a deeper language issue: any concurrency mechanism in C# is vulnerable to the fact that you can define and access shared mutable variables (including the solution proposed in the post).
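A minimal sketch of that point, using nothing async/await-specific, just Task.Run and a captured local:

```csharp
using System;
using System.Threading.Tasks;

class RaceDemo
{
    static async Task Main()
    {
        int counter = 0;

        // Two tasks mutate the same captured local with no synchronization.
        await Task.WhenAll(
            Task.Run(() => { for (int i = 0; i < 100000; i++) counter++; }),
            Task.Run(() => { for (int i = 0; i < 100000; i++) counter++; }));

        // Usually prints less than 200000, because ++ is not atomic.
        Console.WriteLine(counter);
    }
}
```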
Yield, foreach, and LINQ all have similar problems compared to a plain for loop, but they nonetheless greatly increase productivity. Given a little time, devs will learn to use async properly.
Yes, I agree. All async/await seems to be is an abstraction around coroutines, which is a little baffling: coroutines are already a fairly standard way to solve this problem, so why introduce this weird async/await syntax and pretend it's magic that solves "concurrency" in general?
I'm sure I just don't understand it, but hearing about leaky abstractions like the ones described in the article just makes me that much more determined to never touch them. Too much magic, expertly hidden inside a closed .NET runtime.
I really like Grand Central Dispatch on iOS and OS X. You specify a task to run on a background queue or a main queue by passing a block of code. This block of code gets executed while the rest of the program continues. You can have completion blocks so you can respond to the completed task. The syntax is a little weird at first, but it's pretty powerful and I find it to be very explicit. Async/Await always confused me a bit.
async/await is there to reduce the massive amount of boilerplate code you need to write when doing async. Of course there will be corner case bugs or weird behaviors sometimes, but that doesn't mean it's all bad. If async/await works for most cases, then it's something that's beneficial for a software developer like me.
I agree, mostly. I really like async/await but I didn't dare touch it until I read the articles that explain what kind of state machine the compiler constructs and how the synchronization context works. I doubt the average developer would go through that trouble.
I think it's great if you follow standard design patterns but it could be a real source of problems for inexperienced programmers once a race condition emerges.
Having recently learned Elixir, I really wonder why the C# people decided to add async/await rather than coroutines (like Erlang processes or Goroutines).
It feels to me like async/await is the malloc of concurrent programming, and coroutines are the garbage collection of concurrent programming. You give up a little performance in exchange for a lot less complexity. Go has shown you can also do this without restricting yourself to immutable-only data.
Mainly because the CLR and Windows do not support segmented stacks, and supposedly supporting them would be almost impossible while preserving interop with native code.
Without segmented stacks you have to conserve threads, which basically leads to Task and async/await.
Edit: Almost forgot an important part: your COM thread is hugely important to how Windows UI message pumping works. Async guarantees which thread you get to run on, whereas coroutines often just say, "you get the thread that you get."
C# await is a low-level construct that you can use to efficiently implement coroutines; the opposite is not true. Anders & Co. designed a tool that has some rough edges (compared to, say, F# async computation expressions), but it compiles down to a small number of low-overhead Task operations.
"By studying the output of the TraceThreadId method we see that in ASP.NET/GUI it’s the same thread that enters ReadTask and that exits ReadTask ie no problems. When we run it as a Console application we see that ReadTask is entered by one thread and exited by another ie readingFiles is accessed by two separate threads with no synchronization primitives which mean we have a race-condition."
This is not entirely true: the code as written does not have a race condition, because the two accesses run sequentially. Accessing the same variable from two separate threads with no synchronization primitives is actually okay if those two threads never run in parallel. Now, if you called many ReadTask()s in a row and the thread pool had more than one thread (as in the GUI/ASP.NET application), then you would have a race condition. But if you're accessing a shared variable from a multithreaded context, that should be somewhat obvious. It would depend on the programmer's understanding of the async/await paradigm, which I think is the author's point. ;)
I continue to find using explicit "Task<T>" types directly more intuitive. I've wedged in async/await here and there to see if I felt good about it, but I never did. I would always find declaring a Task and attaching to its ContinueWith (or whatever I needed) to be more to my liking.
ContinueWith is fine for simple flows, but things fall apart when you need to perform multiple async operations with the same error handling. You either repeat yourself or you put the error handling code into a lambda and make sure you always pass it along.
Assuming you just want to propagate exceptions, you have to mess with TaskCompletionSource (i.e., promises) and be damn certain you have wired it into all the right places.
Async/await makes error handling much more uniform and "obviously correct" than stringing callbacks together.
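A rough sketch of the difference; FetchAsync/ParseAsync/HandleError are made-up stand-ins, not anything from the post:

```csharp
using System;
using System.Threading.Tasks;

static class ErrorHandlingDemo
{
    // Trivial stand-ins for real async operations.
    static Task<string> FetchAsync() => Task.FromResult("payload");
    static Task<int> ParseAsync(string s) => Task.FromResult(s.Length);
    static void HandleError(Exception ex) => Console.WriteLine($"failed: {ex}");

    // Callback style: every step repeats the fault check (or forwards a handler).
    static void CallbackStyle()
    {
        FetchAsync().ContinueWith(t =>
        {
            if (t.IsFaulted) { HandleError(t.Exception); return; }
            ParseAsync(t.Result).ContinueWith(t2 =>
            {
                if (t2.IsFaulted) { HandleError(t2.Exception); return; }
                Console.WriteLine(t2.Result);
            });
        });
    }

    // async/await style: one try/catch covers the whole chain.
    static async Task AwaitStyle()
    {
        try
        {
            var data = await FetchAsync();
            var length = await ParseAsync(data);
            Console.WriteLine(length);
        }
        catch (Exception ex)
        {
            HandleError(ex);
        }
    }
}
```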
Exception propagation is one area where I have to agree. Still, there was no reason to invent whole new language semantics for that. As for simple flows, I'll disagree: the Task library allows for normal workflow operations, and your decisions are (to me) much more explicit when you see "WaitAll" and "WaitAny" chaining things together.
"WaitAll" and "WaitAny" have their non-blocking counterparts, "WhenAll" and WhenAny", which allow you to express the same logic/structure without blocking a thread.
Nesting callbacks (ContinueWith) gets messy very quickly, and then your program structure and semantics are hidden behind masses of arcane ceremony.
Performing async calls dependent on previous async calls is hard enough to do correctly using ContinueWith, but try using it in a loop.
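For comparison, a sketch of both shapes with await (using HttpClient as a stand-in for any async call):

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

static class LoopDemo
{
    static readonly HttpClient Client = new HttpClient();

    // Parallel: start everything, then await the lot without blocking a thread.
    static async Task<string[]> DownloadAllInParallelAsync(string[] urls)
        => await Task.WhenAll(urls.Select(u => Client.GetStringAsync(u)));

    // Sequential: each iteration awaits the previous call before starting the
    // next. Expressing the same ordering with raw ContinueWith means building
    // the chain by hand or recursing.
    static async Task DownloadAllSequentiallyAsync(string[] urls)
    {
        foreach (var url in urls)
        {
            var body = await Client.GetStringAsync(url);
            Console.WriteLine($"{url}: {body.Length} chars");
        }
    }
}
```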
await isn't an alternative to TPL/Tasks, it's a feature that improves a certain common use case of TPL/Tasks. "await foo" means (roughly[1]) "yield and schedule the current continuation to be run on completion of foo". What really requires language support is the "current continuation" bit, which starts to become tricky/convoluted to express directly once control flow comes in.
For any use case other than this there are other tools, such as WaitAll/WaitAny as you say.
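A loose sketch of what that means; FetchAsync/Consume are made-up stand-ins, and the real compiler output is a state machine that also resumes via the captured SynchronizationContext rather than a bare ContinueWith:

```csharp
using System;
using System.Threading.Tasks;

static class AwaitSketch
{
    static Task<string> FetchAsync() => Task.FromResult("data");
    static void Consume(string s) => Console.WriteLine(s);

    // What you write:
    static async Task WithAwait()
    {
        var data = await FetchAsync();
        Consume(data);               // "the current continuation"
    }

    // Roughly what it means: schedule the rest of the method to run when the
    // task completes.
    static Task WithContinuation()
        => FetchAsync().ContinueWith(t => Consume(t.Result));
}
```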
I make it a rule never to question Eric Lippert, and I won't do so here. That said, right, the current-continuation bit requires IL foo, but as the ones writing the code, we can always find a way that doesn't need the current-continuation bit.
And if you're going between languages much at all, Tasks are a lot more like promises and other paradigms in other languages. Whereas the whole await business is just... weird.
Because blocking on tasks is a deadlock waiting to happen [0], and creates UI hiccups when the task takes longer than 16ms to complete.
In other words, the conditions you must satisfy to use Task.Wait or Task.Result safely are very complicated. They depend on, and create, global constraints on the entire program, so you should avoid them whenever possible.
My mistake and too late to edit. I don't use "await" and I misread this at first. Actually the classic tasks way would be to add these to an array of tasks, and then "WhenAll" for the continuation.
Yes, it's easier to fire off async tasks in parallel and wait for them, but I specifically mentioned processing serial async responses, i.e. sequentially, like a normal for loop.
Isn't this whole discussion predicated on not wanting to block? If you don't care about that, there's no need to call a Task-returning method in the first place.
The solution to deadlocking is using async all the way down - i.e. don't use .Result.
You can also tell C# not to resume on the captured context after an await by using .ConfigureAwait(false) on the tasks, so the continuation no longer needs the context a blocked caller is holding, which prevents the deadlock.
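A small sketch of that pattern for library-style code (the file-reading helper is just an example):

```csharp
using System.IO;
using System.Threading.Tasks;

static class FileHelper
{
    // ConfigureAwait(false) tells the awaiter not to capture and resume on the
    // caller's SynchronizationContext, so even a caller that (unwisely) blocks
    // with .Result can't deadlock against this method's continuations.
    public static async Task<string> ReadAllTextAsync(string path)
    {
        using (var reader = new StreamReader(path))
        {
            return await reader.ReadToEndAsync().ConfigureAwait(false);
        }
    }
}
```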
The author states that two threads accessing a single resource is a race condition, but that's only true when the threads are competing for that resource. In the case outlined in the article the accesses are sequential, and only one thread touches the resource at a time. No race condition.
I think it's not too wise to think of async/await as a special kind of coroutine that never uses multiple threads, but rather as an easy way to write synchronous-looking code and have it execute asynchronously. If you need fine control over which threads are used, maybe you're better off controlling it manually.
Is it really a problem that you have to be explicit in where you begin the top of the async/await operation? You can't just start the calls at a random point in code where you don't understand the threading situation and expect to get consistent results.
C#'s equivalent of GCD is, very roughly, Task. You can create tasks that compute results on the thread pool, give them continuations to run after, etc.
Async/await is an abstraction on top of tasks, where you write code with do-while loops and try-catch and it gets translated into GCD-esque continuation-using code. It lets you write the asynchronous code in an imperative style, so it looks more like the rest of the code you write in C#.
The analogy to COM apartments is completely to the point. In fact, if you look at the PPL header files for C++/CX you will notice that the PPL system explicitly refers to COM apartments. Since IAsyncAction and friends can be used from C++/CX and .NET alike, they really do share many of the same principles.
C# already has basic support for computation expressions, which it implemented to support LINQ and later extended for the DLR, but I can't see why they couldn't extend it further for async tasks.
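For what it's worth, the query-expression pattern is duck-typed, so you can already fake a crude computation-expression feel over Task<T> just by supplying Select/SelectMany yourself; a sketch, illustrative only and not something the BCL ships:

```csharp
using System;
using System.Threading.Tasks;

// Providing Select/SelectMany for Task<T> is enough for query syntax to
// sequence async work, a bit like an F# computation expression.
static class TaskLinq
{
    public static async Task<TResult> Select<T, TResult>(
        this Task<T> task, Func<T, TResult> selector)
        => selector(await task);

    public static async Task<TResult> SelectMany<T, TMid, TResult>(
        this Task<T> task, Func<T, Task<TMid>> bind, Func<T, TMid, TResult> project)
    {
        var t = await task;
        var m = await bind(t);
        return project(t, m);
    }
}

static class Demo
{
    static Task<int> GetXAsync() => Task.FromResult(1);
    static Task<int> GetYAsync(int x) => Task.FromResult(x + 1);

    // Translates to GetXAsync().SelectMany(x => GetYAsync(x), (x, y) => x + y).
    static Task<int> Sum() =>
        from x in GetXAsync()
        from y in GetYAsync(x)
        select x + y;
}
```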