I don't get why people have always hated (tracing) GC. Unless you are working on a hard real-time system such as a DSP or an audio synthesizer, having some kind of GC is almost always a boon.
Not only can you run code faster in some cases (by suspending the GC in critical code, doing plain allocation, and resuming the GC later, which is also how you make it more real-time friendly), it also helps curb fundamental security issues such as dangling pointers and use-after-free.
GC is also not to blame for the bloated size of your app. Nim [1], for example, is a language with a GC and runtime that is still lightweight. Another honorable mention is Haxe [2] for gamedev: it can generate quite compelling C++ code, with a binary only slightly bigger than what you would normally get from hand-written C++, while needing far fewer lines of code. And it has a tracing GC.
So I do think a GC-enabled language can compete with C/C++/Rust. Even if there is a difference in binary size and performance, it won't be huge. But the way it makes your program safer by a huge margin, and your programmers suffer far less mental pain, makes having a GC a huge win.
By the way, even smart pointers, move semantics, and ownership are a kind of GC: they leverage linear/affine logic to ensure resources do not drop out of their controlled phase, commonly called a lifetime. So most of the time, what I mean by GC is specifically tracing GC, where mark-and-sweep algorithms that usually need to stop the world are indeed a fundamental problem in GC design, especially on multi-core platforms, which are becoming more and more common nowadays.
There is definitely a terminology problem. "Automatic memory management" is a more precise term than "garbage collection". It refers to the more user-visible trait, and pairs better with its antonym, "manual memory management", which is much more commonly seen. I would encourage people to use "AMM" as a term even if they do not use it in their own coding. :-) "AMM" also begs the right questions, such as "automatic -- in what sense, exactly?" It also tracks with the primordial C "auto" for stack variables, the automatically managed part of C (yes, I am aware modern C++ repurposed the keyword). It might even make some discussions involve less talking past each other. This kind of ocean liner is hard to turn, though.
There are a few high-profile programming languages and environments, like Java, Go, and old stop-the-world-style Lisps, where (some) people struggle with "GC". These struggles often give AMM itself a bad reputation, while C stack variables (often coupled with what people today call "value types") are often the way to be very, very fast (yes, mostly because of CPU-private stacks, but even so).
Nim is very fast and offers many choices in AMM, from none, to Boehm-Demers-Weiser, to its own tracing variant, to the new extremely low-overhead ARC and ORC, which have more Rust-like aspects (but copy in some cases to be safe, which makes them roughly 10x easier to use). With a TinyC/tcc backend, Nim yields near interpreter/REPL-like edit/compile/test cycles. It's really a joy to write code in, most of the time. People should look into Nim more.
[1]: https://nim-lang.org [2]: https://haxe.org