I made the mistake of trying to learn Rust while doing async programming.
IMO, when it comes to concurrency, it's a matter of picking your poison:
Threaded Rust: No overhead of a GC, but overhead of context switches and multiple stacks.
NodeJS: No overhead of context switches and multiple stacks, but the overhead of a highly optimized GC. (And I suspect that the GC can do tricks like run when the process is waiting on all tasks.)
Only if your application can limit the number of threads to the number of physical cores.
I.e., if you're doing a web server with a thread or process for each incoming web request, you're blocking and context switching. If you have to use locks, you're also blocking and context switching.
This is why async programming models are common: they move the logic of blocking and context switching into the language and runtime, so the scheduler can juggle many concurrent tasks on a single thread. It's just harder to do in Rust because, to oversimplify, things that would sit in stack memory in a threaded environment now end up on the heap. In C#/NodeJS that difference is transparent, but in Rust it isn't (see the sketch below).
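A minimal sketch of that stack-vs-heap difference, assuming the tokio runtime (the crate choice, buffer size, and sleep durations are just illustrative): in the threaded version the per-request buffer lives on the worker thread's own stack, while in the async version the same state becomes part of the future's state machine, which `tokio::spawn` moves onto the heap so many tasks can share one OS thread.

```rust
// Cargo.toml (assumption): tokio = { version = "1", features = ["full"] }
use tokio::time::{sleep, Duration};

// Threaded version: `buf` lives on the worker thread's own stack,
// and each thread carries its own (mostly unused) stack allocation.
fn handle_request_threaded() {
    std::thread::spawn(|| {
        let buf = [0u8; 4096]; // stack memory
        std::thread::sleep(std::time::Duration::from_millis(10)); // blocks the whole thread
        println!("threaded: handled {} bytes", buf.len());
    });
}

// Async version: everything alive across the `.await` becomes part of the
// future's state machine. `tokio::spawn` moves that state machine onto the
// heap, so many tasks can be interleaved on a single thread and stack.
fn handle_request_async() {
    tokio::spawn(async {
        let buf = [0u8; 4096]; // stored inside the heap-allocated task
        sleep(Duration::from_millis(10)).await; // yields instead of blocking
        println!("async: handled {} bytes", buf.len());
    });
}

#[tokio::main]
async fn main() {
    handle_request_threaded();
    handle_request_async();
    sleep(Duration::from_millis(50)).await; // give both a moment to finish
}
```

The key contrast is that `std::thread::sleep` parks an entire OS thread, while `sleep(...).await` only parks the task, so one thread can interleave thousands of such futures.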
Some real data would be very interesting.