It's very memory efficient, everything compiles out of the box to WebAssembly (freestanding or WASI), the resulting code is compact and fast, and it can take advantage of the latest WebAssembly extensions (threads, SIMD, ...) without any code changes. If you are using a WebAssembly-powered cloud service and not using Zig to write your functions, you are wasting money. Seriously.
Beyond the language, Zig is also a toolchain to compile C code to WebAssembly. When targeting WASI, it will seamlessly optimize the C library for size, performance or runtime features. I used it to port OpenSSL, BoringSSL and ffmpeg to WebAssembly. Works well.
Also, Zig can generate WebAssembly libraries that can then be included in other languages that support WebAssembly. Most of my Rust crates for WebAssembly are now actually written in Zig.
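For example (a minimal sketch; the file name, function name and exact flags are illustrative and vary a bit between Zig versions), a single exported function is enough to produce a .wasm module that any runtime or host language can load:

```zig
// add.zig
//
// Freestanding module:  zig build-lib add.zig -target wasm32-freestanding -dynamic -O ReleaseSmall
// (a WASI executable would instead use: zig build-exe app.zig -target wasm32-wasi)
//
// `export` gives the function the C ABI and puts it in the module's export
// table, so JS, Rust, Go, etc. can call it through any wasm runtime.
export fn add(a: i32, b: i32) i32 {
    return a + b;
}
```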
It's also supported by Extism, so it can be used to easily write safe plugins for applications written in other languages.
If you don't mind, since you have experience targeting WASM with both Rust & Zig, what advantages does Zig have over Rust in this particular use case?
Are the memory safety guarantees that Rust offers over Zig not as important or critical when targeting WASM?
I've been interested in checking out Zig for a while now.
Your link is really interesting, but it shows that Zig was the most popular language for writing games for a WebAssembly-based fantasy console with constrained resources, not that it's the most popular language for WebAssembly-based games overall.
My understanding is that Zig has all of the power and modernity of Rust, without the strictness and borrow checker. Unlike Rust, it also has powerful compile-time evaluation and custom allocators, and probably more improvements I'm not familiar with (in Rust you can effectively emulate custom allocators, but you have to rewrite every allocating structure to use them; or you can use nightly, but most third-party libraries and even some standard-library types don't support them).
I also heard someone say "Zig is to C what Rust is to C++". Which I interpret as: it's another maximum-performance modern language, but smaller than Rust; "smaller" meaning that it has less safety and abstraction (no encapsulation [1]), but also fewer requirements and less complexity.
Particularly with games, many devs want to build a working prototype really fast and then iterate fast. They don't want to deal with the borrow checker, especially if their code has a lot of complex lifetime rules (and the borrow checker is a real issue; it caused evanw to switch esbuild to Go [1]). In small scripts with niche uses, safety and architecture are a waste of effort; the script just has to be done and work (and the latter is only partly necessary, because the script may not even be fed enough inputs to cover edge cases). Plus, there are plenty of projects where custom allocation is especially important, and having every type support custom allocation is a big help vs. having to rewrite every type yourself or use `no_std` variants.
>My understanding is that Zig has all of the power and modernity of Rust, without the strictness and borrow checker.
This is an oxymoron :) The strictness and borrow checker are part of the power and modernity of Rust.
But even apart from that, Rust has automatic value-based destructors (destructor follows the value as it's moved across scopes and is only called in the final scope), whereas Zig only has scope-based destructors (defer) and you need to remember to write them and ensure they're called exactly once per value. Rust has properly composable Option/Result monads, whereas Zig has special-cased ! and ? which don't compose (no Option of Option or Result of Result) but do benefit from nice built-in syntax due to their special-cased-ness. Rust has typed errors whereas Zig only has integers, though again that allows Zig to have much simpler syntax for defining arbitrary error sets which would require defining a combinatorial explosion of enums in Rust.
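As a concrete sketch of that scope-based model (illustrative names, Zig 0.11-era std APIs; nothing here is enforced by the compiler, forgetting a defer/errdefer just leaks):

```zig
const std = @import("std");

const Buffers = struct {
    a: []u8,
    b: []u8,

    fn init(allocator: std.mem.Allocator) !Buffers {
        const a = try allocator.alloc(u8, 1024);
        errdefer allocator.free(a); // runs only if a later error propagates out

        const b = try allocator.alloc(u8, 1024); // if this fails, `a` is freed above
        return .{ .a = a, .b = b };
    }

    fn deinit(self: Buffers, allocator: std.mem.Allocator) void {
        allocator.free(self.a);
        allocator.free(self.b);
    }
};

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    const bufs = try Buffers.init(gpa.allocator());
    defer bufs.deinit(gpa.allocator()); // the caller has to remember this line
}
```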
Of course from Zig's point-of-view these are features, not deficiencies, which is completely understandable given what kind of language it wants to be. And the other advantages you listed like comptime (Rust's const-eval is very constrained and has been WIP for ages) and custom allocator support from day 1 (the way Rust is bolting it on will make most existing code unusable with custom allocators, including parts of Rust's own standard library) are indeed good advantages. Zig also has very nice syntax unification - generic types are type constructor functions fn(type) -> type, modules are structs, etc.
I hope that one day we'll have a language that combines the best of Rust's strictness and Zig's comptime and syntax.
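To make the type-constructor point concrete, a minimal sketch (the `Pair` name is illustrative): a generic type in Zig is just an ordinary function, run at compile time, that returns a struct type.

```zig
const std = @import("std");

// A generic type is a plain function from type to type, evaluated at comptime.
// Similarly, @import("std") itself just evaluates to a struct type.
fn Pair(comptime T: type) type {
    return struct {
        first: T,
        second: T,

        fn swapped(self: @This()) @This() {
            return .{ .first = self.second, .second = self.first };
        }
    };
}

pub fn main() void {
    // Each distinct T produces a distinct, fully concrete struct type.
    const p = Pair(u32){ .first = 1, .second = 2 };
    std.debug.print("{d} {d}\n", .{ p.swapped().first, p.swapped().second });
}
```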
> Rust has properly composable Option/Result monads, whereas Zig has special-cased ! and ? which don't compose (no Option of Option or Result of Result)
I've found that types like these normally come up in generic contexts, so the code I'm writing usually only deals with one layer of Option or Result until I get to the bit where I'm actually using the value and find out I have to write "try try foo();". That said, I think this sort of thing will do it:
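For instance, a hand-rolled option built as a tagged union nests without special cases (a minimal sketch; `Opt` is an illustrative name, not anything from the Zig stdlib):

```zig
const std = @import("std");

// A user-defined option type; unlike `?T` it composes like any other generic type.
fn Opt(comptime T: type) type {
    return union(enum) { some: T, none };
}

test "options of options compose" {
    const inner = Opt(u8){ .some = 42 };
    const outer = Opt(Opt(u8)){ .some = inner };
    switch (outer) {
        .some => |o| switch (o) {
            .some => |v| try std.testing.expectEqual(@as(u8, 42), v),
            .none => unreachable,
        },
        .none => unreachable,
    }
}
```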
Yes, and I don't want a backward-incompatible Rust 2.0 either, but the slowness of stabilizing ! (named for when it's going to be stabilized), specialization, TAIT, const-eval, Vec::drain_filter, *Map::raw_entry, ... is annoying. Also, the lack of TAIT currently causes actual inefficiency when it comes to async traits, because async trait methods have to return boxed, dynamically dispatched futures instead of concrete types. Same for Map::raw_entry, without which you have to either do two lookups (`.get()` + `.entry()`) or always create an owned key even if the entry already exists (`.entry(key.to_owned())`).
If you think that's bad, look at the C/C++ standardization bodies, where stuff is eternally blocked because of ABI compatibility.
---
The problem is that lots of implementation things are vying for inclusion. And many things people want aren't safe, or block/are blocked by possible future changes.
For example, default/named arguments were blocked by a parsing ambiguity around type ascription, iirc. And not having default arguments makes adding allocator parameters quite a bit more cumbersome.
Plus, the Rust maintainers are seeing some common patterns and are trying to abstract over them - like keyword generics / the effect system. If they don't hit the right abstraction now, things will be much harder later. If they over-abstract, it's extremely hard to remove.
---
The slowness of stabilizing the never type (!) and specialization has to do with the issues they cause, mainly unsoundness and orphan-rule issues, iirc; I haven't checked on them in a while.
Also, none of the common knowledge around traditional data structures and algorithms works with Rust anyway. One needs to dance like a ballerina with their hands and feet tied.
I retort that almost all of the concurrent data structures are obnoxious to represent in Rust.
A lock-free ConcurrentHashMap, for example, is by no means a straightforward data structure in a non-GC language. Even if you somehow dodge Rust's pedantry, you still have to figure out who owns what, who pays for what and when they pay for it--and there are multiple valid choices!
Non-GC allocation/deallocation in concurrent data structures probably still qualifies as a solid CS problem.
(And, before you point me to your favorite crate for ConcurrentHashMap, please check its guarantees when one thread needs to iterate across keys while another thread is simultaneously inserting/deleting elements. You will be shocked at how many of them need to take a lock--so much for lock-free.)
This is partially why such structures often leave allocation to the caller (concurrent data structures are often intrusive), have single consumers (made lock-free using separate synchronization), or only offer lockless guarantees rather than obstruction freedom.
I think the obnoxious part in Rust is doing the intrusive and shared-mutability parts of data structures. Having to go between NonNulls, Options, Pin, and Cell/UnsafeCell is not a pleasant experience.
I never understood this take. You shouldn't be heap allocating each node in your linked list anyway. It's trivial to convert the pointer fields to indexes and have each node live in a `Vec` unless you need it to be intrusive. You'll get better performance anyway because you're not doing a pointer deref and blowing your TLB up every time you traverse the list.
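That index-based layout looks roughly like this in Zig terms (a minimal sketch, illustrative names, Zig 0.11-era std.ArrayList API; the same trick works with a Rust Vec):

```zig
const std = @import("std");

// Index-based singly linked list: nodes live contiguously in one array and
// link to each other by index instead of by pointer.
const Node = struct {
    value: u32,
    next: ?u32, // index of the next node, or null for end-of-list
};

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    var nodes = std.ArrayList(Node).init(gpa.allocator());
    defer nodes.deinit();

    // Build the list 3 -> 2 -> 1 by prepending.
    var head: ?u32 = null;
    for (1..4) |i| {
        try nodes.append(.{ .value = @as(u32, @intCast(i)), .next = head });
        head = @as(u32, @intCast(nodes.items.len - 1));
    }

    // Traversal touches one contiguous allocation instead of scattered heap nodes.
    var it = head;
    while (it) |idx| : (it = nodes.items[idx].next) {
        std.debug.print("{d} ", .{nodes.items[idx].value});
    }
}
```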
As someone who used C as their main language, I've switched to Zig. It's the only language that tries to be a "better C", and not another C++. Comptime being like Nim's, where it's not an entirely separate language, is also a plus. I'd say it excels at general-purpose systems programming, and especially if you need to work with memory in a detailed way (Rust makes this very annoying and hard).
- comptime, i.e. writing compile-time-evaluated code without introducing a separate meta language like macros or templates.
- very solid build system (the build config file(s) are written in Zig, so you don't have to learn another language for the build system (looking at you, makefile)) that has cross-compilation built in (with one compiler flag)
- language-level errors, i.e. errors as first-class citizens. Forces you to handle errors, but without much mental or syntactic overhead (you can re-throw them with `try myfunction()`), and also results in a unified interface
- no implicit conversions
- looks high-level (modern syntax that is easy(ish) to parse) but is as low-level as (or lower than) C, with no abstractions that hide details you need to know about when programming (similar to C)
- C interop, so you can just add Zig source files to a C project and compile it all with the Zig toolchain. Zig can also parse C headers and source files and convert them, so you can include C headers and just start calling functions from there. For example, using SDL is as simple as pointing the toolchain to the SDL headers and .so file, and the SDL headers will be translated to Zig on the fly so you can start with SDL.SDL_CreateWindow right away.
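A minimal sketch of that SDL workflow, assuming the SDL2 development headers and library are installed (include paths, link flags, and @cImport details vary by system and Zig version):

```zig
// Build (illustrative): zig build-exe main.zig -lc -lSDL2
const c = @cImport({
    @cInclude("SDL2/SDL.h"); // translated to Zig declarations on the fly
});

pub fn main() !void {
    if (c.SDL_Init(c.SDL_INIT_VIDEO) != 0) return error.SdlInitFailed;
    defer c.SDL_Quit();

    const window = c.SDL_CreateWindow(
        "hello from zig",
        c.SDL_WINDOWPOS_CENTERED,
        c.SDL_WINDOWPOS_CENTERED,
        640,
        480,
        0,
    );
    if (window == null) return error.SdlCreateWindowFailed;
    defer c.SDL_DestroyWindow(window);

    c.SDL_Delay(2000); // keep the window up for two seconds
}
```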
Just to name one: compile-time code execution. It eliminates the need for a separate macro language and gives Zig zero-cost generic types.
Not to mention memory leak detection, crazy fast compilation times, slices, and a built-in C compiler so your code can seamlessly integrate with an existing C code base (seriously, no wrapper needed).
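On the leak-detection point, a minimal sketch (Zig 0.11-era APIs): the testing allocator fails any test that forgets to free, and GeneralPurposeAllocator can report leaks at deinit in normal builds.

```zig
const std = @import("std");

test "forgetting deinit is reported as a leak" {
    var list = std.ArrayList(u32).init(std.testing.allocator);
    // Comment out the next line and `zig test` fails this test with a leak report.
    defer list.deinit();
    try list.append(42);
}
```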
Zig is really, really awesome. The only thing holding it back from mass adoption right now is that it's not finished yet, and that shows in the minimal documentation. That said, if you're comfortable tinkering, it's completely capable of production use (and is already used in production at various companies).
This is just not true and it's the reason #1 I am not using Zig.
To give you some numbers, ZLS is a reasonably sized project (around 62k LOC of Zig) and on my very beefy machine it takes 14 seconds to build in debug mode and 78 seconds to build in release mode.
Because of the "single compilation unit" approach that Zig uses, you are paying that very same time regardless of where you modify your program... so basically clean rebuild time is equal to simple modification time.
As a comparison my >100k LOC game in Rust takes less than 10s to build in release mode for modifications that happen not too far down the crate hierarchy.
So yeah, blame it on whatever reason you want (LLVM, no incremental builds and so on), but as of today Zig is not even close to having "crazy fast compilation times".
True, current compilation times with Zig are not yet optimal. We're getting there though. As our own custom backends become complete enough we will be able to enable incremental compilation and the aim is for instant rebuilds of arbitrarily large projects.
Possibly the thing that makes C be C is that there is only one C. It is the single lingua franca of "one notch up from assembly". I would argue any language that wants to be a better C has to accept the challenge of being available on any platform, existing or future, any architecture, suitable for any bare metal use case, and it has to want to be the single obvious go-to choice. That's what it means to step into the ring of "better C" candidates. A lot of languages might offer pointers, manual memory allocation, no runtime, etc. That's cool and that gets you close to the space. But if you want to be a better C, then the bar is much higher: ubiquity.
IMHO, one of C's strong points is the ubiquity of non-standard compiler extensions; C with Clang extensions is much more "powerful" than standard C. Personally I see C as a collection of closely related languages with a common standardized core, and you can basically pick and choose how portable (between compilers) you want to be versus using compiler-specific extensions for low-level optimization work (and, in a twisted way, Zig is also a very remote member of that language family because it integrates so well with C).
> "one notch up from assembly"
C is actually a high-level language, it's only low-level when compared to other high-level languages, but much closer to those than to assembly. Arguably Zig is in some places even lower-level than C because it's stricter where C has more "wiggle room" (for instance when it comes to implicit type conversions where Zig is extremely strict).
> being available on any platform, existing or future, any architecture, suitable for any bare metal use case
Zig has a pretty good shot at that requirement, it has a good bootstrapping story, and for the worst case it has (or will have) a C and LLVM (bitcode) backend.
For me personally, Zig will most likely never completely replace C, but instead will augment it. I expect that most of my projects will be a mix of C, C++, ObjC and Zig (and Zig makes it easy to integrate all those languages into a single project).
1. It's typically at least as fast as C, unlike C++/Rust
2. You can do type introspection (and switching) during compile-time, and it's not just some stupid TokenStream transformer, you really have type information available, you can do if/else on the presence of methods, etc.
4. Types are first-class values (during comptime), so any function can take or return a new type, this is how you get generic types, without (syntax/checking) support for generics.
5. You can easily call anything which does not allocate in these comptime blocks.
6. There's @compileError which you can use for custom type assertions -> therefore, you have a programmable type system (see the sketch after this list).
7. It's super-easy to call C from Zig.
8. This is subjective: You don't feel bad about using pointers. Linked data structures are fine.
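To make points 2 and 6 concrete, a minimal sketch (illustrative names; Zig 0.13-style @typeInfo tag names, which were renamed in later releases):

```zig
const std = @import("std");

// Point 6: a comptime assertion via @compileError.
fn assertUnsignedInt(comptime T: type) void {
    const info = @typeInfo(T);
    if (info != .Int or info.Int.signedness != .unsigned)
        @compileError("expected an unsigned integer, got " ++ @typeName(T));
}

// Point 2: branch on whether a type actually has a given declaration.
const Point = struct {
    x: i32,
    y: i32,
    fn sum(self: Point) i32 {
        return self.x + self.y;
    }
};

fn callSumIfPresent(value: anytype) void {
    const T = @TypeOf(value);
    if (comptime @hasDecl(T, "sum")) {
        std.debug.print("sum = {d}\n", .{value.sum()});
    } else {
        std.debug.print("no sum method\n", .{});
    }
}

pub fn main() void {
    comptime assertUnsignedInt(u64); // swap in i64 and this becomes a compile error
    callSumIfPresent(Point{ .x = 1, .y = 2 });
}
```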
TBF, Zig with the LLVM backend may be faster than C or C++ compiled with Clang out of the box just because Zig uses more fine-tuned default compilation options. That's true even for C or C++ code compiled with 'zig cc'.
But apart from that the performance should be pretty much identical, after all it's LLVM that does the heavy lifting.
That is a big "may", given that it depends very much on what the code is doing, including compile-time execution (C++ and Rust have that too), and many of the same compiler knobs are also available for C++ code, if not more.
And as you point out, at the end of the day it is the same LLVM IR.
That was the point I was trying to make: Zig isn't inherently faster than the other three. It just uses different default compilation options than clang, so any gains can be achieved in clang-compiled C or C++ too by tweaking compile options (and maybe using a handful of non-standard language extensions offered by clang). Other than that it's just LLVM's "zero cost abstraction via optimization passes", and this works equally well across all 4 languages.
Or: the "optimization wall" in those languages is the same, only the effort to reach that wall might be slightly different (and this is actually where Zig and its stdlib might be better, it's more straightforward to reach the optimization wall because the Zig design philosophy prefers explicitness over magic)
I understand that anytime someone brings benchmarks out, the next response points out that benchmarks are not real world use cases. Nonetheless, they are data points, and your claims are against the commonly accepted view of C being roughly as fast as C++ and Rust. If you have absolutely no data to back it up, you shouldn't expect anyone to believe you.
I don't need any data to say that Zig is typically faster than Rust because I know that Vec<T> will do a shitload of useless work when it leaves the scope, even if the T does not implement Drop. You can do vec.set_len() but that's unsafe, so... Typical Rust is slower than typical Zig.
Zig does not do any work because it does not have smart pointers and it does not try to be safe. It tries to be explicit and predictable, the rest is my job.
BTW: This is very real, I've been profiling this and I couldn't believe my eyes. And even if they fixed it already there would be many similar footguns. You can't write performant software if you can't tell what work will be done.
You are comparing different memory management strategies, not language features. All of those languages (C, Zig, C++ and Rust) give you enough choices when it comes to memory management.
You can decide to not use frequent heap allocations in Rust or C++ just as you can in Zig (it means not using large areas of the C++ or Rust standard libraries, while Zig definitely has the better approach when it comes to allocations in the standard library, but these are stdlib features, not language features).
That "just" seems to mean "you'll need to take 10x the time to write the equivalent Rust/C++ program". What a language makes easy to accompish matters.
Yes, it's true that you can do arenas in Rust, but they are harder to use there, due to borrow checking. For example, you can't store an arena in a struct without a lifetime parameter.
So in a language which actively makes your life miserable in the name of safety, you will likely just use Vec<T> because it's easy. Hence, back to the previous point - Vec<T> is slow -> your code is slow and it's not even obvious why because everybody does that so it looks like you're doing it right.
I'm a Zig fan and all, but you should probably check if your C/C++/Rust code uses the equivalent of "-march=native"; that might already be responsible for most of the performance difference between those and Zig - they all use the same LLVM optimization passes after all, so excluding obvious implementation differences (like doing excessive heap allocations) there isn't any reason why one would be faster than the other.
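Concretely, something like this (illustrative invocations; exact flag spellings vary by compiler and Zig version):

```sh
# clang: optimization level and host CPU features must be requested explicitly
clang -O3 -march=native main.c -o main

# zig cc is a clang frontend, so the same flags apply when compiling C
zig cc -O3 -march=native main.c -o main

# plain zig targets the host CPU by default for native builds
# (pass -mcpu=baseline to opt back into a generic CPU)
zig build-exe main.zig -O ReleaseFast
```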
The point was "typical": typical Rust code uses Vec<>, typical Zig code uses arenas. Typical Rust code uses smart pointers, typical Zig/C code prefers plain, copyable structs.
It is 100% possible to use that style in Rust/C++ (and then the performance will be the same, maybe even in favor of Rust), but people usually do not do that.
Zig is in the same ballpark as the ones mentioned above, so flohofwoe is right. But nevertheless I do think that cztomsik's point also still stands: how hard it is to make something fast will end up impacting the performance characteristics of the average program and library out there, and Zig does make it easier to write performant code than some other languages.
> Zig can and does win plenty of those benchmarks, but ultimately it boils down to who is it that gets nerdsniped into working on a specific challenge.
That was a faulty benchmark with a simple, unnatural footgun that the Rust people overlooked. I don't think it supports your claim that benchmarks typically boil down to who it is that gets nerdsniped into them. Sure, it can happen, it happened at least once, but in general?
Zig has a decent chance of being an actual embedded device (i.e. no operating system) programming language.
In my opinion, Zig seems likely to grow the necessary bits to be good at embedded, while Rust is unlikely to figure out how to shrug off the clunky bits that make it a poor fit for embedded devices.
However, I'm a personal believer that the future is polyglot. We're just at the beginning of shrugging off the C ABI that has been preventing useful interoperability for decades.
Once that happens, I can use Rust for the parts that Rust is good for and Zig for the parts that Zig is good for.
> clunky bits that make it a poor fit for embedded devices.
What do you see as "clunky bits that make it a poor fit" for such a broad range of stuff as "embedded devices"?
Embedded goes at least as far as from "It's actually just a Linux box" which is obviously well suited to Rust, to devices too small to justify any "high level language" where even C doesn't really fit comfortably. Rust specifically has no ambitions for hardware too small to need 16-bit addresses, but much of that range seems very do-able to me.
There are lots of systems where memory changes hands. Game programming, for example. You allocate a big chunk of memory, you put some stuff in it, you hand it off to the graphics card, and some amount of time later the graphics card hands it back to you.
Rust gets VERY cranky about this. You wind up writing a lot of unsafe code that is very difficult to make jibe with the expectations of the Rust compiler. It's, in fact, MUCH harder than writing straight C.
Example clanky bit: slab allocation
You often don't really care about deallocation of objects in slabs in video games because everything gets wiped on every frame. You'd rather just keep pushing forward, since you know the whole block gets wiped 16 milliseconds from now. Avoiding drop semantics takes (generally unsafe) code and work.
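That per-frame wipe is basically what Zig's std.heap.ArenaAllocator hands you (a minimal sketch; the frame loop and sizes are illustrative, Zig 0.11+ API):

```zig
const std = @import("std");

pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();

    var frame: usize = 0;
    while (frame < 3) : (frame += 1) {
        const alloc = arena.allocator();

        // Per-frame scratch data: allocate freely, never free individual objects.
        const particles = try alloc.alloc(f32, 10_000);
        particles[0] = 1.0;

        // End of frame: wipe everything at once, keep the memory around for reuse.
        _ = arena.reset(.retain_capacity);
    }
}
```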
Yes, although I could have just as easily said "low-level systems programming" as embedded. I try to keep in mind that when people say "embedded" they may mean anything from ARM Cortex M series to Raspberry Pi 4s.
Rust is really good when it is in control of everything. I love Rust for communication stacks (USB, CANBUS, etc.). Bytes in/state in to bytes out/state out is right in its wheelhouse. Apparently Google agrees--their new Bluetooth stack (Gabeldorsche) is written in Rust. When Rust has this kind of clearly delineated interface, it's really wonderful.
However, Rust does not play well with others as my examples point out. To be fair, neither do many other languages (Hell, C++ doesn't play well with itself). However, that's going to be a disadvantage going forward as the future is polyglot.
I’m not really convinced, because in Andrew’s livestreams he actively uncovers significant stdlib bugs that he is aware of and tables for later.
Hopefully those will all be gone by 1.0, but I doubt it. For now, I cannot consider it a viable alternative to anything for production software. I do hope it will be some day, because it’s a nice language, even if it has a few syntactic warts :)
that said... I feel safer choosing C89 or C99 for certain things due to its extremely wide availability and longevity.
It’s great for it to have competitors, but C and C++ are more like standards and less like one tool with a handful of people working on it.