Why Zig When There Is Already CPP, D, and Rust? (2017) (github.com/ziglang)
141 points by luu on Oct 14, 2018 | 145 comments



"C++, Rust, and D have a large number of features and it can be distracting from the actual meaning of the application you are working on. One finds themselves debugging their knowledge of the programming language instead of debugging the application itself."

It's an interesting point. My experience with it, though, is that if the language is too simple, the complexity gets pushed off into the application. The result is "boilerplate", a lot of code that gets routinely cut&pasted into applications.

The thing about complexity in the language, on the other hand, is once you do learn the language, that skill follows you with every app you work with. With complexity pushed to the app, you have to re-learn it with every app.

A screwdriver can do a lot of things - chisel, hammer, scraper, prybar, punch, paint stirrer, shim, etc. But I prefer having a set of tools designed to be chisels, hammers, etc. My projects come out a lot nicer.


> It's an interesting point. My experience with it, though, is that if the language is too simple, the complexity gets pushed off into the application. The result is "boilerplate", a lot of code that gets routinely cut&pasted into applications.

This is a frequent complaint I hear about Go: they made the language too simple in certain places. People also tend to cite Rich Hickey's distinction between "simple" and "easy".

So the question about Zig becomes one of what exactly was kept simple, and what was allowed to become complex?

I think the world would benefit from a Hyperpolyglot-style comparison here. Rosetta Code is too much of a mess; ideally, I want to see a thoughtful side by side comparison of the same program written in several languages, with commentary.


Zig might get away with it by restricting itself to a narrow niche where this kind of simplicity is advantageous. Go tries to eat a much larger piece of the pie, and the deficiency is felt more as a result.


There's a profound wisdom to knowing the niche you're in, and Zig does seem to have that. The applications part of the stack is saturated with options, all trying to encroach on each other, but when it comes to the genuinely low-level, unsexy stuff...the viable options are all pretty old languages, and languages that patch over those old languages.

And with respect to discussion of boilerplate, you can, of course, always have a "Zig++" language that patches over Zig. By cutting out its own metaprogramming functionality, it's much more straightforward to generate useful Zig source. There are a lot of advantages to restraint in the design, and Go does share that, too.


I think many Go developers would consider this a feature (I do, anyway). A little boilerplate is better than a complex language. I say this with a background in C++, Java, and Python. I’m a fair bit more productive in Go and my code tends to be of higher quality (less headspace devoted to language complexity and more room for application complexity).


You're obviously familiar with D, but in response to your post defending D's complexity, let me offer a different take. You are free to ignore a lot of that complexity with D. I don't use most "features" of the language, and in a lot of cases, I don't even know how they work. My code is a lot like C code. I don't mess with attributes, templates, CTFE, etc. in most of my programs.

You can use a style in D that is even simpler than Go if you want, but it's nice to have the extra features for the times that they're handy. Just because the features are there doesn't mean it's a good idea to use them - and D allows you to program that way.


This attitude is fine for personal projects, but it does not scale when working with other people.

Even in a two person project, one person using a language feature will inadvertently force the other person to learn how that feature works. And they will not even know if the feature was appropriate to use until they have learned it.

With this small amount of reasoning we are back to the tired old argument of "Which is better? Ease of expression or ease of learning?"


> This attitude is fine for personal projects, but it does not scale when working with other people.

Even if true, so what? "Personal projects" don't count?

But honestly, this is one of the worst programming language arguments there is. Any feature can be misused. There has to be discipline - full stop. Restricting the language is not the answer because that just leads to worse code, not better. Decades of C code demonstrate that a lack of features does not translate into good programming practice.


C clearly has other issues besides just a lack of features.


You could very well argue, however, that many of the issues with C are symptoms of a lack of some specific features. To work around that, you need just as much discipline (to avoid, for example, buffer overflows) as you would need not to misuse the features that aren't there.

That's more or less the reason I too don't find "multiple people projects may have problems with discipline" to be a good argument for anything. That's a culture issue and pretty much orthogonal to any technical factor of your environment, including the choice of programming language.


> This attitude is fine for personal projects, but it does not scale when working with other people.

True, but the same is true of Go, which may be surprising to people. Despite its claims, Go didn't solve this problem.

One reason is that larger projects typically start to invent their own abstractions on top of the base language, and Go is no exception here; in fact, with its lack of some common abstractions, its built-in support for code generation, and the frequent use of external add-ons (like go vet), Go very much encourages this. And so you're going to have to learn the specifics of larger projects just as with "complex" languages. The solution to this problem is a meticulous and somewhat restrictive code style standard, but that's a solution that works with "complex" languages just as well; in fact, I'd argue it works even better there.

The other reason is that the Go language is not just the Go language. That is, given the lack of abstractions for error handling, code re-use, etc., people end up following a bunch of conventions, styles, and patterns, and using external tools (like the already mentioned go vet). And so in effect the language becomes "the base" + "lore". This is of course true of any language, but with Go and early Java, for example, it becomes more pronounced: since the base part is very small, larger parts need to be taken care of by the "lore". This is why Design Patterns became so popular in the Java world. And this is why it's not good to have the base part be too small, which is the problem with Go. Of course, the other extreme is equally bad - that is, when the base part is too large, which is the problem with C++, where now the "lore" has to take care of deprecating/removing language features.

I feel like the authors of Go decided to do the opposite of what C++ does and thought that since C++ is horrendous, doing the opposite would yield a great result. But unfortunately that's not the way things work. You can't just negate someone's bad approach and expect great results.


Are you not doing code reviews in your organisation?


You're assuming the multi person project has no coding standards.

Then again, you are perhaps making a decent argument for using simple languages in multi person projects!


Perl is another language with a lot of features. I tend to appreciate those features, since once you know how to use them they're a huge timesaver and productivity boost. However, the more obscure/clever the feature (or application thereof), the less likely another programmer (your future self included) is going to understand it. And if you're tasked with maintaining old code written by someone (your past self included) who knew/used a different subset of the language's features than what you currently use, you're suddenly going to need to learn that subset.

So yeah. I love feature-rich languages, but caution is prudent, and I can understand why Perl or D or what have you might be too feature-rich.


This is actually exactly what I'm arguing. If you have a language with lots of features, you don't write clever or obscure code. The stuff people do with Perl is because the language allows it, but doesn't include features to do it for you. That's a sign that something is missing from the language.

Consider the Go vs D comparison. D has good support for templates. Occasionally, templates are a big help, so every once in a while you should use them. In those cases, if you're writing D, you add a templated function and everything is concise and readable. In Go, you do one of several things, none of which is concise and readable. Future you will thank you for using D and taking advantage of its features.

Features support the writing of disciplined code. Lack of features forces hacks, duplication, and crazy workarounds. The latter is what you're seeing when you look at ugly Perl code.


It's not only learning the language: learning any language can equip you with the necessary tools, as many of the paradigms carry over between programming languages. That's why people complain so much about the lack of generics in Go: we are already accustomed to them from other programming languages. Depriving the programmer of "basic luxuries" doesn't make much sense.


Case in point, any interface heavy C codebase. Function after function of laying out function pointer vtables.


Or not having a good std crypto library: a million shitty libraries


I don't think that's a very good example; there aren't that many "production grade" crypto libraries out there. I think the root of the problem is that C simply lacks the features necessary to write safe code; it sometimes feels like the language is actively trying to trip you up instead of having your back (implicit casts, weird aliasing rules, lack of generics and in particular "safe" containers like C++'s std::vector, completely implicit ownership semantics, etc.). This makes it very difficult to interface crypto code nicely and safely using a pure C API.

That's not to say that C crypto libraries don't suck for other reasons (I've used OpenSSL, I can't really defend that) but the language itself limits what you can do, in the standard library or otherwise. I mean, look at the many flawed or error-prone interfaces in the C standard library itself, I don't think stdcrypto.h would be the panacea.


Out of curiosity, what would be an example programming domain with a lot of interfaces?


gobject library? Gtk uses this, which tries to emulate OOP.


You mean programming domain in general or a codebase in C?

There's a ton in Java, C++ and C#. One example in C is the proton implementation of AMQP.


I dunno. I find myself fiddling a lot with the language in C#, in C++, in elisp. In Go, mostly I just focus on solving my problem. It seems to have strong enough abstractions for me to express solutions succinctly, while being simple enough to not get in the way. I find it extremely pleasant. (I do admit to the less-than-extremely-pleasant chore of copy-pasting various hand-rolled specializations of functions like, e.g., filter.)


I'm just happy that D was mentioned. Yay, D! Also, D's metaprogramming is an utter joy, almost lisp-like, without the craziness of C++ templates. Give it a try!

An intro to one ingredient of the magic of D metaprogramming, compile-time function evaluation:

https://tour.dlang.org/tour/en/gems/compile-time-function-ev...

An example of the kind of introspection available in the D stdlib:

https://wiki.dlang.org/Finding_all_Functions_in_a_Module

This one's really cool, a library that lets you build an entire grammar of your choice and evaluate it at compile time:

https://github.com/PhilippeSigaud/Pegged

Nim's metaprogramming seems quite similar, and I sometimes wonder if perhaps it took some inspiration from D.


Nim appears to have taken a lot of inspiration from D in general, eg UFCS.


I wish more languages had UFCS. It makes for really readable code.


For others unfamiliar with the acronym, it looks like it's Uniform Function Call Syntax - en.m.wikipedia.org/wiki/Uniform_Function_Call_Syntax

Lets you call top-level functions as if they were methods, with the receiver as the first argument.

For example, these two are equivalent:

    add(x, y)
    x.add(y)

As an aside, I added a similar feature to my compile-to-js lang (http://lightscript.org) and have found it to be a delightful feature. I hope an equivalent passes TC39.


> compile-time function evaluation

Sorry, potentially stupid question incoming: I thought this was pretty standard for c++ optimizing compilers. What makes D's version of this special?


It's not an optimization in this case - it's a mechanism that produces what is semantically a compile-time constant, and can therefore be used in any context in which one is required by the language.

C++ has this with constexpr, but it's not as rich in terms of things that it can do. In D, IIRC, pretty much anything goes.


It is ad-hoc in C++, templates weren't designed for general compile time meta-programming. It took several recent changes to the standard to make it a little more palatable with static expressions and general template improvements.


It means you can run compile time programs without resorting to crazy template metaprogramming


C++ now has constexpr, so that pro is now shared by both C++ and D, somewhat (ignoring templates, D is still more versatile).

That being said, C++ template metaprogramming can be pretty elegant[1] (there was a video where she describes the template functional style, but it seems to have been deleted).

[1]: https://github.com/hanickadot/compile-time-regular-expressio...


constexpr accretes more power with every iteration, but it's still got a ways to go:

https://en.cppreference.com/w/cpp/language/constexpr

https://dlang.org/spec/function.html#interpretation


Eschewing all hidden function calls is a reasonable design choice, but you give up a lot.

No destructors means no smart pointers – and Zig doesn't seem to have a garbage collector, so I guess you need to free() things manually.

No operator overloading means, among other things, no custom numeric-like types – including wrapper types for units (e.g. Feet) unless the language provides separate support for those (e.g. Go does, Zig seemingly doesn't).
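
To make the trade-off concrete, here is a hypothetical sketch (nothing from Zig's standard library): a unit wrapper is still expressible as a struct, it just can't participate in operator syntax, so the arithmetic becomes explicit function calls.

    // Hypothetical unit type: still type-safe (you can't accidentally mix
    // Feet with another unit), but you write a.add(b) rather than a + b.
    const Feet = struct {
        value: f64,

        fn add(self: Feet, other: Feet) Feet {
            return Feet{ .value = self.value + other.value };
        }
    };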


It would be nice if a language offered resource aware functions like raml.co, then destructors could be enforced as O(1), etc.


>No operator overloading means, among other things, no custom numeric-like types – including wrapper types for units (e.g. Feet) unless the language provides separate support for those (e.g. Go does, Zig seemingly doesn't).

Most languages survived without that.


Prior to their invention, most drivers survived without airbags in their cars too.

comex just noted that the design decision here has some tradeoffs. One of those tradeoffs is that you can't define wrapper types for units. Such a feature has clear use cases - it's not hard to find instances where unit conversion bugs have had severe consequences. "Most languages survived without that" is not a constructive comment.


That's not what parent said. You can express your maths as function calls.

Operator overloading is often misused and violates the principle of minimal surprise. You could even have side effects.

Still, type safety is totally possible. In Go, if your unit is Feet, you wrap your int in a Feet type and you cannot sum it with anything but Feet. And that's all at compile time, no cost.


Regardless of whether comex's comment was fully accurate, I was mostly responding to "Most languages survived without that", which isn't productive.


It's not just some historical curiosity from primitive computing days.

Huge successful current languages don't have operator overloading now either. If anything, outside numerical/scientific work, it's even frowned upon.

And even in languages that do have them, it's not like people usually work with typed feet and such units. It is indeed safer, but I'm not sure it's mainstream (F#, Ada, Rebol, etc). Of course one can find such typed unit libs for other languages with Op. Overloading (C++ eg. has boost.units), but it's not like they're mainstream even where Operator Overloading is available.

In any case, it's not the first argument to make against a system language that it doesn't support Op. Overloading as a means for this use case.


C++ and Rust need destructors because they both have unwinding and thus need to know how to destroy things. If there is no unwinding then calling destructors manually is not that bad if compiler checks that you do it. That said, I haven't used zig and don't know how it actually works.


I have yet to use Zig seriously, but I'm pretty sure "defer" is its solution to this problem: https://ziglang.org/documentation/master/#defer
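
A minimal sketch of what that looks like, assuming the 2018-era standard library API where allocators are passed as pointers:

    const std = @import("std");

    // Cleanup is written right next to the allocation and runs when the
    // enclosing scope exits, on every return path, including error returns.
    fn work(allocator: *std.mem.Allocator) !void {
        const buf = try allocator.alloc(u8, 256);
        defer allocator.free(buf);

        buf[0] = 0; // ... use buf ...
    }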


I asked this in another thread, but how do generics work in the absence of destructors? Let's separate the idea of a destructor from the idea that a destructor is implicitly called at the end of a scope.

In C++, if there weren't automatic destructor calls, you could still clean things up manually with a pseudo-destructor call, x.~T(), where T is the type of x.

In C, there are no (user-defined) generics, so you always know the concrete type of x, so you know whether it needs to have a destructor called and what the name of that destructor is.

In Zig, how would generic code create an instance of a generic type T that may or may not need to have a destructor called, ie. how do you know if you need to use defer and what the deferred expression should be? How would tooling be able to check that destructors were called if there is no special way of marking that a destructor exists?


Having used Zig a bit, and C++ for years, I personally believe that this sort of scenario is pretty rare. You can always add a no-op dealloc method to your type if you want a generic function to handle it correctly. Moreover, Zig has awesome metaprogramming support so you can probably think of a way to check the type parameter for a dealloc method and only generate the dealloc call for those types.


> Zig has awesome metaprogramming support

I'm not sure how to reconcile this statement with the following claim from the linked article: "Zig has no macros and no metaprogramming yet still is powerful enough to express complex programs in a clear, non-repetitive way." Not only does it claim that Zig has no metaprogramming, it suggests that it sees metaprogramming as undesirable.


What it does have is compile-time parameters where you can bind a type variable during compilation:

  fn max(comptime T: type, a: T, b: T) T {
      if (T == bool) {
          return a or b;
      } else if (a > b) {
          return a;
      } else {
          return b;
      }
  }
The type variable isn't used at run-time.


Pretty rare? What about ArrayList? I want ArrayList(T) to work whether T = i32 or T = ArrayList(i32).

> You can always add a no-op dealloc method to your type if you want a generic function to handle it correctly

You're looking at it from the perspective of the generic consumer, not the generic author. The generic author generally does not have the ability to edit the type. Plus there are many types that do not have methods at all (integers, points, etc.).

> Moreover, Zig has awesome metaprogramming support so you can probably think of a way to check the type parameter for a dealloc method and only generate the dealloc call for those types.

Yes, you can detect and call a method called deinit that has the appropriate signature using @reflect. You can even put this in a function and call it a pseudo-destructor. But there's no guarantee that deinit has the actual semantics of a destructor without some kind of language-level agreement between class authors.


OK, I see your point. Worst case scenario, you force the caller to pass a destructor as an additional argument (e.g. ArrayList(T, fn(*T))). C++11 allows for this in its smart pointer classes.


> I want ArrayList(T) to work whether T = i32 or T = ArrayList(i32).

This will work fine. Do you have a more specific example?


When I call deinit on an ArrayList(ArrayList(i32)) do the elements have deinit called on them?


How are you expecting to call deinit on an i32?


Not sure what you mean. The elements of an ArrayList(ArrayList(i32)) are ArrayList(i32)s.

To answer my question, no, they're not deinited. All deinit does is call self.allocator.free on the slice of elements, and for many allocators that's a nop. In fact none of ArrayLists methods take any kind of ownership of its elements. If you shrink an ArrayList(ArrayList(i32)) by one you leak the last ArrayList(i32). None of the methods call destructors on the elements because there is no generic notion of a destructor, only particular ad-hoc ones like deinit methods. ArrayList appears to solve the problem I mentioned above about generics not knowing if a generic type needs to have a destructor called by only supporting types that don't.

In C++, you'd write the function that clears a vector something like

    void clear() {
        for (auto p = ptr; p != ptr + len; ++p) {
            p->~T();
        }
        len = 0;
    }
For a vector<vector<int>>, the syntax p->~T() calls the destructor on a vector<int> element, while for a vector<int> the syntax p->~T() is a pseudo-destructor call, i.e. it does nothing. This makes the same generic code work when the elements of a vector need to have a destructor called and when they don't.


> Not sure what you mean. The elements of an ArrayList(ArrayList(i32)) are ArrayList(i32)s.

I believe what he was asking was "given an ArrayList(i32), how would you expect it to call deinit on the member i32s?". The answer, of course, is that you don't, which is also true of ArrayList(ArrayList(i32)). ArrayList absolutely supports heap-allocated types, you just have to free them yourself before calling deinit() on the ArrayList itself.


Which only brings us again (putting aside what virtue recommends that design over having destructors) to the same original question. If I have an ArrayList(T) for a generic type T, how do I know if the elements need to be freed before I deinit and how do I do that if they do?


I'm having trouble imagining a scenario where you'd have code that was so generic it could take an ArrayList of any arbitrary type and would also be responsible for its destruction.

But if you did, you could use `@TypeInfo` to inspect the inner type for a function named `deinit`, or some other criteria that made sense for this determination.
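
A rough sketch of that kind of check, using @hasDecl as a stand-in for the @TypeInfo inspection (builtin names and type-info tags have shifted between Zig versions, so treat this as the shape of the idea rather than exact code):

    // Hypothetical helper: call deinit on each element only when the element
    // type is a struct that declares a deinit method.
    fn deinitAll(comptime T: type, items: []T) void {
        if (comptime (@typeInfo(T) == .Struct and @hasDecl(T, "deinit"))) {
            for (items) |*item| {
                item.deinit();
            }
        }
    }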


Any generic data type with a private ArrayList(T) member is an example. Unless you also expect callers to manually run destructors for elements of a type's private members, and their private members, etc. And it's not just about destroying the ArrayList entirely. Any function which just removes elements from an ArrayList needs to know how to destroy elements or else pass the buck for half its purpose to the caller. When I call shrink on an ArrayList(ArrayList(i32)) I'm supposed to preloop over the shrunken-over elements and call deinit on them before I call shrink? When I call the function that removes elements satisfying a predicate I'm supposed to preloop over the elements and call deinit on the ones I'm about to remove? Obviously not.

I already mentioned that hack. All you're doing there is introducing the ad-hoc notion of a destructors as a method named deinit. Again, in order for that to work in generic code that convention needs to be blessed by the language.


> Again, in order for that to work in generic code that convention needs to be blessed by the language.

Technically it only needs to be blessed by the standard library in this case.


That's entirely the point; he/she isn't expecting to call deinit on an i32, but is expecting to call deinit on an ArrayList(i32) if said ArrayList(i32) is in fact an element of an ArrayList(ArrayList(i32)). And calling deinit on an ArrayList(i32) is of course valid, nontrivial and necessary, since such a value needs to own a heap allocation that must be free()d.


> In C, there are no (user-defined) generics

C has had generics since C11. I’ve got a clone of “the good parts” of the std C++ template library (vector, unordered_map, ...) laying around somewhere.


Please do share.


As mentioned above, C11 added a new construct: '_Generic()'. The expression looks like this:

    _Generic(x, type : value, ..., [default : value])
For a list of unique types, with an optional 'default' for any unrecognized type.

You usually wrap the call to '_Generic' in a macro. For instance:

    #define vec_push_back(vec, val) \
        _Generic(&vec, \
        vecint : vecint_push_back, \
        vecdbl : vecdbl_push_back, \
        default : assert_vwrap)(&vec, val)
Assuming that types 'vecint' and 'vecdbl' have been defined by the user. This does single-parameter parametric type dispatch. There's a way to let the user use table-driven macros ('X-macros') to define an arbitrary number of instantiations per translation unit.

The nice part, over C++, is that the generic instantiation is localized to a single translation-unit file, so you don't have to yell at the junior devs for instantiating templates without using an 'extern' construct.


I imagine this is about tgmath.h. I've found the use of the word generic a bit misleading there.


No, it's very likely about _Generic. <tgmath.h> was added to C99 in an attempt to steal Fortran mindshare, ie making numerics and scientific computing less cumbersome (the same is true for other features, eg complex numbers or restrict). But in C99, it was still special-cased. The next standard C11 introduced _Generic so people could create their own type-generic interfaces.


If the compiler checks that you do it, sure.

For local variables, I'd say `defer` is an okay but not great substitute. If you call malloc() yourself, then sure, it's not hard to remember to stick a "defer { free(ptr); }" afterwards. But what if you call a function that returns an owned value? Can you tell just from looking at it that you're meant to free the thing afterward?

CoreFoundation, a C library on Apple platforms, has a cute solution to this: a strict naming convention. [1] If a function returns a value at "+1" reference count, i.e. you have to release it yourself, the function's name should include "Create" or "Copy", even where that would otherwise seem unnatural. For example, "CFSocketCopyAddress" returns a socket's address as a CFDataRef, returned at +1. You might expect it to be called "CFSocketGetAddress", but "Get" is reserved for functions that return values at +0.

As I said, cute – but fundamentally a workaround for language limitations. Distinguishing between two kinds of functions is exactly the sort of menial bookkeeping that computers excel at, whereas humans are prone to making mistakes. And – while CoreFoundation is forever stuck with the goal of C compatibility, Objective-C, which used to rely on a very similar system of manual retain/release and naming conventions, did ultimately transition to automatic reference counting.

But it doesn't have to be automatic. One use for destructors is to automatically release locks, and an interesting alternative design for that exists in Clang's "thread safety" annotations. These are annotations you can add to existing C or C++ code that uses mutexes; their main purpose is to catch you if you access a variable protected by a mutex without owning the mutex, but they can also complain if you lock something and forgot to unlock it. You can annotate functions as "requires mutex" (i.e. you must call it with the mutex locked, and it stays locked after the function), "acquire" (call with it unlocked, returns with it locked), "release" (the opposite), and so on. I think I once tried to abuse the annotation system to enforce reference counting, but it was broken. But I'd love to see a more thorough and robust design along these lines.

[1] https://developer.apple.com/library/archive/documentation/Co...


> There are some standard library features that provide and work with heap allocators, but those are optional standard library features, not built into the language itself.

Note that this is also true of Rust. Programs built using `no_std` and `libcore` don't have an allocator. A surprising amount of functionality is still available, like sorting and string splitting.


Yes, but zig takes it a step further by allowing you to pass a heap allocator into the standard array/vector type.
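
For example, a sketch against the standard library of that era (where allocators were passed as pointers):

    const std = @import("std");

    // The caller decides which allocator backs the list; nothing is
    // allocated behind your back.
    fn collect(allocator: *std.mem.Allocator) !std.ArrayList(i32) {
        var list = std.ArrayList(i32).init(allocator);
        errdefer list.deinit();

        try list.append(42);
        return list;
    }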

There are discussions about implementing something similar in Rust, but it doesn't exist yet...


C++17 allows that too with all the containers in the std::pmr namespace.


I still don't get why they provide a warn() to stderr but no print() to stdout. [1]

[1] https://ziglang.org/documentation/master/#Hello-World


Here's a modified version with print.

    const std = @import("std");

    pub fn main() !void {
        // If this program is run without stdout attached, exit with an error.
        var stdout_file = try std.io.getStdOut();
        const stdout = &stdout_file.outStream().stream;
        // If this program encounters pipe failure when printing to stdout, exit
        // with an error.
        try stdout.print("number: {}\n", i32(1234));
    }


Why don't you make it:

  const print = @import("std").print;

  pub fn main() void {
      print("Hello, world!\n");
  }
Why does it have to be more complex than warn? You admit the added complexity yourself by using warn instead of print in the parent wiki page.


Because print can fail. See the comments in the above code example.


Why couldn't the print imported in the hypothetical short example above be made to fail? If all the stdout code is packaged into it, with the same try's, it seems like the code could at least be:

  pub fn main() !void {
    try print(...)
  }


And warn cannot fail?


Correct:

    pub fn warn(comptime fmt: []const u8, args: ...) void {
        const held = stderr_mutex.acquire();
        defer held.release();
        const stderr = getStderrStream() catch return;
        stderr.print(fmt, args) catch return;
    }

The tradeoff, however, is that when the function returns, it is not guaranteed that the warning message made it successfully to stderr.

This tradeoff is not appropriate for stdout.

Also the decision to use the mutex is not appropriate for stdout. The application may be single-threaded, or have its own synchronization strategy that makes the mutex redundant.

Zig is optimal.


If something bad happened, it seems much more important to ensure that message gets printed than output saying that everything is working just fine.

This trade-off is not self-evidently correct... it's easily boiled down to personal preference. Consistency would be nicer. Zig is not "optimal" just because someone says it.


There's no way to ensure that it is printed, because the environment can prevent you from doing so - stdout/err can be a pipe and it can be closed, for example.

So, suppose warn() fails. What can you do about it - try to print a warning that you can't print warnings?


That's kinda my point. If stdout or stderr fails, what are you going to do about it? The normally correct solution is to panic/terminate the program. Having a simple "print" function that does that is an obvious nicety.

If you want something different, use a library or some other parts of the stdlib. Making that super common use case painful is not obviously a good decision.


If stdout fails and your program's purpose is to print to stdout, for example, if you are `cat`, `less`, `diff`, `ps`, `ls`, etc... then you should print an error message to stderr and exit with an error code.

The super common use case is std.debug.warn, not printing to stdout.


I think it might be helpful if the name of warn() did not imply that the message is actually a warning, i.e. that "something bad happened". There are many legitimate cases for printing purely informational messages to stderr - operation progress, that sort of thing. When someone sees a hello world app in Zig, and there's warn() in it, the first knee-jerk reaction is, "so what are we warning about?".

I would actually even argue that the meaning of "print" as printing to stdout by default is an unfortunate historical artifact. For the kind of stuff print is usually used for in most code, it really should go to stderr, and "print" kinda implies some kind of logging IMO.

From that perspective, my suggestion would be to add a no-failure @import("std").debug.print that is an alias for warn. I don't think it's likely that breaking the established convention of print-defaults-to-stdout would cause issues in practice - in programs where this matter (like cat), the developer will quickly discover the issue the moment they try using it in a pipeline. And for programs where it doesn't, defaulting to stderr is actually the right thing.

But if you'd rather not break the convention, how about adding debug.log that is an alias for debug.warn? Or debug.say, or something like it.


The standard error stream is for errors only, or for debugging information if enabled manually. Please don't abuse stderr, because admins depend on it staying clean. Use stderr for errors and stdout for anything else. If you need your own output stream with your own output rules, just print to a file.


stdout is the one that is likely to be redirected somewhere that might fail (e.g. a file, or another program that might crash). So it's the one that you do not want to ignore failures on, any more than you'd ignore an error on stdin.

stderr is the one that is not likely to be redirected somewhere that might fail - that's the whole point of keeping it distinct from stdout! And if it does get redirected, then the person requesting the redirection is responsible for ensuring lack of failures in that channel.

I think you're approaching this from the perspective of stdout as a place for diagnostic output. It's not - it's a place for actual output (i.e. a function of program's input - the useful result of its operation). Yes, in practice, people routinely use it for diagnostics, but that's exactly the problem that Zig is trying to fix here by being opinionated about it, so far as I can tell.


Thanks for taking the time to explain. Your argument is reasonable. The software I touch is allowed to crash if stdout fails. Maybe zig is indeed not for me.


or you could just make a helper function that ignores errors and call that. We're just arguing about defaults. Zig is of course a general purpose programming language and you can always build less precise tools on top of more precise tools.


What do you mean by "Zig is optimal"?


If you have a command line application, you probably either want to use a library that provides colors and more features, or output to file (where writing to stdout is more appropriate than "print").

When developing, "print" is more commonly used as an aid for the developer. Most "print" messages should either be removed or outsourced to a library when the program nears completion. "warn" fills a similar purpose as "print".

If you really want "print", a small function that writes to stdout is quick to write.
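
Something along these lines would do, reusing the stdout plumbing from the example earlier in the thread and deliberately swallowing errors (which is exactly the trade-off under discussion):

    const std = @import("std");

    // Sketch of a "don't care if it fails" print: analogous to std.debug.warn
    // above, but targeting stdout instead of stderr.
    pub fn print(comptime fmt: []const u8, args: ...) void {
        var stdout_file = std.io.getStdOut() catch return;
        const stdout = &stdout_file.outStream().stream;
        stdout.print(fmt, args) catch return;
    }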


If I'm writing a command line application then, most of the time, I just want to let the user know what's going on. It doesn't always have to be fancy and I don't want to track down a third party library to tell the user all went well and give basic statistics. I'd go ballistic if I had to download a library to print out "35 records processed." It just doesn't make sense.


Then use std.debug.warn?

Quoting the docs linked above:

> Usually you don't want to write to stdout. You want to write to stderr. And you don't care if it fails. It's more like a warning message that you want to emit. For that you can use a simpler API:

    const warn = @import("std").debug.warn;
    
    pub fn main() void {
        warn("Hello, world!\n");
    }


The message "35 records processed" isn't a warning, nor should it necessarily go to stderr.


Tools following the UNIX philosophy pride themselves on only bothering the user with output messages when something goes wrong.


You can still write that message to stdout, there just isn't a brief "print" function.


No, if I have a command line application, I probably want to write (possibly formatted) output to stdout -- very likely without colors or other features.

A while ago, I tried to figure out how to do the equivalent of C's

    int n = 42;
    printf("n = %d\n", n);
I was unable to find a straightforward way to do that. (Formatted output to stderr, on the other hand, is easy.) I submitted a bug report and haven't gotten a response.

https://github.com/ziglang/www.ziglang.org/issues/15


warn() is just as complex to write as print(). One is included, the other isn't.

If you classify all C-programs by what output they use (multiple counts allowed), printf/puts shows up with the highest percentage long before ncurses, X or anything else.


What's not included is an easy way to ignore errors when writing to stdout. If that's your complaint, then Zig is not the language for you. Zig also doesn't provide an easy way to ignore allocation failure errors, for example. Edge cases matter. There's no excuse for failing to handle errors.


From what I've seen so far, I really like the clean syntax of Zig. I want to like Rust, but it looks so verbose and loaded with special characters.


I'm curious. What do you (or other people) dislike about special characters? For me, they are more readable than text (maybe I should start writing APL ;-) ).

Granted, I'm dyslexic and apparently dyslexia only affects characters which are phonetic. This has also been my experience and I love Chinese characters in Japanese text because it's much more comfortable for me to read. So that's why I might prefer symbols.

I'm wondering, though, why people dislike symbols (and I know it's a very common thing). It's just like any other letter, except that it has a meaning. I wonder, it is because it's hard to read phonetically (just the opposite of my problem)? Whenever I've asked people they say "Symbols are ugly", which is kind of an unhelpful response. It's like saying, "I don't like it because it is unlikable". I hope someone has a slightly deeper insight into the reasons.


I think you nailed it. I'm so comfortable seeing words that symbols can blur into the background and require extra effort to focus on. Languages with multi character symbols and chains of symbols add to the discomfort.

Once I've learned the language and internalized the semantics they mostly fade away, except in cases like C++ where you get multiple asterisks and whatnot.

I'm not the above poster but I also dislike the symbols usage I've seen in Rust. I'm on mobile so I can't show examples (and I don't know the language at all) but it has the same smell as symbols in C++ though not as bad. Like character silhouettes in Team Fortress 2, symbols need to appear distinct. Chains of them should be limited. My first impressions of Rust felt like the symbols or operators were less than ergonomic. Small or blending together.

Disclaimers about preference, solo vs team, personal vs enterprise, APL vs La-Z-Boy.


The more symbols used, the less it looks like a human-consumable language (note: "looks like", not "is").

The more symbols, the harder it is to unpack the code into (and restate in a) human language. Even in your head. IOW, if you're mostly monolingual, and you think about code in English (or whatever your preferred language is), more symbols can actually make it harder to think about the code.

Trivial example: a = a+b is slightly easier to say (and think) in English than a += b


> The more symbols used, the less it looks like a human-consumable language

I read this as “the more symbols used, the more it looks like mathematics”. I guess this is really a matter of personal taste :)


> Trivial example: a = a+b is slightly easier to say (and think) in English than a += b

Native English speaker here.

"+=" is pronounced "plus equals", meaning that you've got it backwards, with the margin being by the number of mental syllables taken by a - though if a really is "a", then that's only 1.

However, the above considerations only cover recitation.

Comprehension, especially in real-world scenarios, is another matter entirely. "a" might be some other, longer name (and probably amongst a set of long names with a common prefix/accessor). When I read "a = a + b", it takes me some amount of effort to verify whether the lvalue is in fact the same word as the lhs of the operator. When I read "a += b", that effort is not needed. The difference again is proportional to the length of a.


The more something follows the most intuitive way of doing a thing, the less arbitrariness and change for change's sake there is. Deviation might also be only noise, so that the author can claim it's an original work just because it uses a different set of symbols; a byproduct of individuality and ego.

Most people can detect noise and arbitrariness. But if you are making something really original, given that you are being a pioneer in something, then you can define the direction of things, and that direction should serve as a guide to what follows.

I don't know about others, but my brain tends to be anti-noise (and I see this pattern in a lot of other people too). There's already too much information for us to consume, so having to handle people "cheating" to give an appearance of originality can make anyone who feels that a "noisy, arbitrary sort of change" is being presented dislike the "noise" almost immediately.

Technology should be aiming toward Zen Buddhism: less ego and therefore less dishonesty and less noise.


Which special characters bother you the most? Before 1.0 we removed most of them.


::, & (pass by ref?), ! (macros), {:?}, _ => (), whatever is happening in this example with <T> and [T]: fn largest<T>(list: &[T]) -> T {

To be fair, some of these are in Zig too, such as &.


I'm a rust newbie, but I have to disagree.

Your "horror example" is actually very readable for most programmers familiar with either Java, C# or c++, I think. In java or C# it would be something like

    T largest<T>(T[] list)
The major difference being the reference syntax in rust, but that's the price of explicit ownership/no GC.

I love the fact that it's obvious where macros are being used.

_, => and () are originally from ML, so I know it from my university days, but these days you also find some or all of them in C#, Java, Kotlin, ES6. So a lot of programmers have already been exposed to them.


Yeah, I guess

* :: is the same as in many other languages

* & too

* ! is unique, I guess

* The format string stuff is based on ... C#/Python stuff? I forget exactly.

* _ => () is composing three other bits of syntax that aren't unusual on their own

* <> is generics, like in many languages

* [T] is slices, which are not entirely unique but are also similar to array stuff which uses []s

Thanks for elaborating, it's helpful to understand the perspective.


It might be hard for someone who writes Rust every day to see it, but it has a lot of special characters compared to many languages.

It also has less or equal amounts compared to other languages, of course, but I dislike that aspect of those languages too :)

Compare these two listings:

http://rosettacode.org/wiki/Stack#Simple_implementation

http://rosettacode.org/wiki/Stack#Nim

You can see directly that Rust has a lot more there. Obviously Nim skips out on braces so that's a gimme, but I don't really count block braces when I'm talking about special characters. It's more stuff like Option<&'a Frame<T>> or &'a mut T or ->


> It also has less or equal amounts compared to other languages, of course, but I dislike that aspect of those languages too :)

Yeah, thanks, I think this is at the root of a lot of it.

> It's more stuff like Option<&'a Frame<T>> or &'a mut T or ->

Yeah; some of the stuff isn't needed anymore; you can drop the : 'a s, for example. And we have been talking about allowing some more elision, so

  pub struct Iter<'a, T: 'a> { 
      next: Option<&'a Frame<T>>,
  }
could be

  pub struct Iter<'_, T> { 
      next: Option<&Frame<T>>, 
  }
when all is said and done. The '_ in the signature is important so that you know it's parameterized by a lifetime, so we don't plan on ever getting more concise than that.


:: probably looks unfamiliar to anyone who hasn’t worked in C++, PHP, or Perl.


:: is used in Java too, ever since Java 8.


So 50% of Java devs :)


Steve, I wish some of the ref/deref stuff used pattern matching for cleanliness instead of &.

I want to say I like how you guys develop in the open and how you in particular are out and about in the discussions here.

Also, can you make a post someday about your experience with WSL?


> used pattern matching for cleanliness instead of &.

Could you expand on this, I don't quite know what you mean, but I'm interested!

> I want to say I like...

Thanks!

> your experience with WSL?

My experience was, "This is pretty cool!" but, in the end, I don't use WSL anymore and just use regular old Windows. I can elaborate more if you'd like, but it boils down to "basically everything has windows native versions anyway and it works a bit better that way."


    a::b::c::<T>()?  // Rust
    try a.b.c(T)   // Zig


That’s not what Rust looks like normally, though; usually you’d use “use” and the T is inferred.

The :: and <> are also not particularly unusual syntax for their respective features either.


a::b::c is ugly compared to a.b.c whether you put it in an expression or in a use statement and ::<T>()? is not that rare (ex. rdr.read_u16::<BigEndian>()?).

You asked what bothered us, not what was unusual :)


Oh totally! I mean it as a way of sorting out why. Like, it would bother me more if I had to write it more, so maybe it’s that styles are different or some libraries require it more often? byteorder is one of those crates, it’s true, and I don’t use it super often. So this is helpful!


> C++, Rust, and D have a large number of features and it can be distracting from the actual meaning of the application you are working on.

I've just started working in Rust and found it to have the right number of features compared to C++.


The article states

> Zig has no macros and no metaprogramming

but then in another place about zig[1]:

> Compile-time reflection and compile-time code execution

This does sound arbitrarily more complex than macros (and is what actually got me interested in zig).

[1] https://www.patreon.com/andrewrk


I think the original idea behind compile-time evaluation is that it simplifies metaprogramming (compared to e.g. templates or macros), because there's only one set of evaluation rules - so the language works the same way, it's just that some parts of the program are run at compile time.


Compile-time evaluation is for faster startup (initialization is done at compile time).


AFAIK Nim has the same, but they call it by its name, hehe


Can you link to Nim's userland implementation of printf? Here's zig: https://github.com/ziglang/zig/blob/822d4fa216ea8f598e4a9d53...


I don't think Nim has one in the standard library, but people have built variations on it. Here's a couple:

* https://github.com/bluenote10/nim-stringinterpolation/blob/m... This one does string interpolation and a printf-like syntax with compile time type checking. It does call out to C but only after it's already parsed and validated and transformed the format string.

* https://github.com/kaushalmodi/strfmt/blob/master/strfmt.nim This one implements a more complex string formatting syntax with lots of convenience functions. It's more complex but provides a lot more functionality than the previous one without relying on FFI.

I'm not a Nim programmer but I imagine those two demonstrate how Nim metaprogramming works even if neither exactly duplicates the behavior of printf.


Nim also has this in the stdlib now: https://nim-lang.org/docs/strformat.html


What I meant was "macros written in Nim for Nim" which they called macros and not "we don't have macros, because we have them, but we don't call them macros, so we can say we are simpler because we don't have macros!" XD


This is a tetris clone made in zig https://github.com/andrewrk/tetris

A bit hard to follow without the syntax highlighting, but it seems a good example of what a larger application would look like.



What I want in a systems language is heap allocations that are typed; basically a monad to wrap the heap context.


Heh, the property keyword was my favorite in Delphi/C#, and I really missed it in C++. So Zig is not for me.


I have nothing against Zig, but... am I the only one feeling "new programming language fatigue" lately? Seems like there are several new ones per week these days.


As language geek I love playing around with new languages.

For production code I have learned to choose programming languages that are first-class citizens on the platform or in the libraries that are required for a given project.

While it might seem boring or outdated, it is much more productive and future-proof than picking the language first and then sorting out how to integrate everything.


One of my teachers at university kept repeating that you can't really call yourself a programmer before you've implemented at least one language. I don't know about that, everyone is entitled to an opinion.

What I do know is that it's one of the precious few things in software that I still find interesting enough to bother with [0]. I'm guessing part of the answer is that this is where you end up sooner or later.

And then we have the internet, which is a big part of the equation. It's easy to get numb, but I remember a time when information about implementing interpreters/compilers was VERY hard to come by.

It's all good from my perspective, we're barely scratching the surface of what is possible and there are plenty of good ideas left to rediscover in our history.

But I do wish that more designers would dare to step outside of the box more. Creating a language isn't about cherry picking features from existing languages, it's about finding better ways of solving problems.

[0] https://github.com/codr4life/snabl


What I like about Zig is that its comptime facility makes the compiler a mixed evaluator in Andrei Ershov's sense: the compiler works by, for comptime structures, evaluating them, and for non-comptime structures, emitting a residual program that will evaluate them at run-time. It offers an accessible form of partial evaluation.
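
A small sketch of what that looks like in practice (hypothetical code):

    fn square(x: u32) u32 {
        return x * x;
    }

    // Used where a compile-time constant is required, the call is evaluated
    // by the compiler and folds away entirely...
    var buffer: [square(8)]u32 = undefined;

    // ...while the same function applied to a runtime value is left in the
    // residual program and runs as ordinary machine code.
    fn fill(n: u32) void {
        var i: u32 = 0;
        while (i < n and i < buffer.len) : (i += 1) {
            buffer[i] = square(i);
        }
    }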


I get what you mean, but Zig is a little different because it's not a naive, vanity project. All of its motivations are practical and related to the usability of the language.


It's getting easier to build them.


It's not - it's getting harder to build worthwhile ones.

The barrier to entry is higher, the expectations on the size of the standard library are higher, the expected quality of tools is higher.


Everything in your second sentence is true, but it seems like LLVM has been a huge boost. I think a lot more programmers are comfortable with parsing and ASTs than they are with the x86 instruction set. Not having to write IL->CPU codegen or peephole optimizations significantly lowers barrier for entry to compiled PL development.

(I say this as just an observer, having written interpreters but no compiled languages.)


But it's getting easier to just implement a language at all. That might be all you need to hit HN. No one said anything about "worthwhile" ones.


That says a lot more about HN than about anything else...


It says that we look for more in an idea than pure utility, that cleverness and beauty also count. That's a good thing.


> That's a good thing.

Maybe, maybe not, but luckily that question's not very important. Here's what is:

Where is the cleverness and beauty in Yet Another Boring LLVM Frontend(tm)? Zig and Rust are more than that, I'll readily grant - but that description does fall under the umbrella of "implementing a language".


I usually agree, but at least Zig has something new to bring to the table (allocation safety).


Programming languages has been one of my major subjects at the university and I don't see anything new in Zig regarding memory allocation.


The only thing that's new is the convention to actually handle out of memory correctly, and the convention to accept an allocator as a parameter rather than using a global allocator.


Yep, nothing new unless one only knows C.


I love seeing new programming languages. Obviously one expects most of them will never get anywhere but it's interesting to see the ways in which people want to be able to make programs but are unable to in existing programming languages, expressed in the form of a language they'd want to use.



