
What's the threat model? If you're reviewing untrusted or security-critical code and it's incomprehensible, for any reason, then it's a reject.

Syntax alone can't stop sufficiently determined fools. Lisp has famously simple syntax, but can easily be written in an incomprehensible way. Assembly languages have very restrictive syntax, but that doesn't make them easy to comprehend.

Rust already has a pretty strong type system and tons of lints that stop more bad programs than many other languages.


Many modern languages focus on shaping expressibility rather than providing the maximum possible flexibility, because their designers learned from C, Lisp, and other languages that made mistakes. Example languages are Java, C#, D, Go... some arguably with more success than others. But language design that gave ultimate expressive power to the programmer is a relic of the past.


???

"Expressibility" and "expressive power" are vague and subjective, so it's not clear what you mean.

I suppose you object to orthogonality in the syntax? Golang and Java definitely lack it.

But you also mention C in the context of "maximum possible flexibility"? There's barely any in there. I can only agree it has mistakes for others to learn from.

There's hardly any commonality between the languages you list. C# keeps adding clever syntax sugar, while Go officially gave up on removing its noisiest boilerplate.

D has fun stuff like UFCS, template metaprogramming, string mixins, lambdas — enough to create "incomprehensible" code if you wanted to.

You're talking about modern languages vs relics of the past, but all the languages you mention are older than Rust.


Have you ever seen submissions to the IOCCC or the Underhanded C Code Contest? That is what too much syntactic flexibility looks like (if taken to the extreme).

If you want your code to be secure, you need it to be correct. And in order for it to be correct, it needs to be comprehensible first. And that requires syntax and semantics devoid of weird surprises.


Eh, I'm not sure I agree here. This feels sort of along the lines of, "well C is safe because someone can review your code and reject it if you try to dereference a possibly-NULL pointer".

The point of a language that is "safe" along some axes is that it makes those unsafe things impossible to represent, either by omitting an unsafe feature entirely, or making it a compile-time error to do unsafe/unsound things.

I will admit that this is something of a grey area, since we're talking about logic errors here and not (for example) memory-safety bugs. It's a bit muddier.

In general, though, I do agree that people should write code that is reasonable to read, and if a reviewer thinks some code in a PR is incomprehensible, they should reject it.


I think these situations are very different, because Weird Rust affects only weird code, while unsafety of C affects regular C code.

The difficulty in reviewing pointer dereferences is in reasoning about the program's potential states and necessary preconditions, which C won't do for you. You can have neatly written C using very simple syntax, and still have no idea whether it's safe or not. Solving that lack of clarity requires much more than syntax-level changes.

OTOH the Weird Rust examples are not a problem you get in your own code. It's a local syntax problem, and it doesn't require complex whole-program reasoning. The stakes are also lower, because you still have the same safety checks, type checks, automatic memory management, immutability. The compiler aggressively warns about unreachable code and unused/unread variables, so it's not easy to write undetected Weird code.

Rust tried having an Underhanded Code Contest, but it has been very Underwhelming.


That's how Rust works already.

The problem has been created by Docker, which destroys all of the state. If this were C, you'd also end up losing all of the object files and rebuilding them every time.


Nope, reread the article: Docker wasn't part of the problem, it's part of the 'solution' according to OP.


Zope was cool in that you couldn't generate ill-formed markup, and optionally wrapping something in `<a>` didn't require repeating the same condition for `</a>`.

However, it was a much simpler imperative language with some macros.

XSLT is more like a set of queries competing to run against a document, and it's easy to make something incomprehensibly complex if you're not careful.


No, APNG explodes in size in that case.

In APNG it's either the same 256 colors for the whole animation, or you have to use 24-bit color. That makes the pixel data 3 times larger, which makes zlib's compression window effectively 3 times smaller, hurting compression.
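(Concretely: DEFLATE's window is 32 KiB, which covers 32,768 palette-indexed pixels of history, but only ~10,900 pixels' worth of 24-bit RGB.)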

OTOH GIF can add 256 new colors with each frame, so it can exceed 256 colors without the cost of switching all the way to 16.7 million colors.


It's absolutely possible. Browsers even routinely pause playback when images aren't visible on screen.

They just don't have a proper UI and JS APIs exposed, and there's nothing stopping them from adding that.

IMO browsers are just stuck with tech debt, maintaining a no-longer-relevant distinction between "animations" and "videos". Every supported codec should work wherever GIF/APNG work, and vice versa.

It's not even a performance or complexity issue; e.g. browsers support AVIF "animations" as images, even though they're literally fully-featured AV1 videos, only wrapped in "pretend I'm an image" metadata.


> They just don't have a proper UI and JS APIs exposed, and there's nothing stopping them from adding that.

Browsers should just allow animated GIFs and APNGs in <video>.


More important would be to allow (silent) videos in <img>.


I wish browsers still paused all animations when the user hits the Esc key. It's hard to read when there are distracting animations all over most pages.


AV1 supports YCoCg, which encodes RGB losslessly.

It is a bit-reversible rotation of the RGB cube. It makes the channels look more like the luma and chroma that the codec expects.
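A sketch of the reversible variant (YCoCg-R) in Rust, assuming 8-bit RGB input (the function names are mine; the lifting steps follow the Malvar/Sullivan formulation):

  // Exactly reversible in integer arithmetic, but Co and Cg
  // span -255..=255 for 8-bit RGB, i.e. they need 9 bits.
  fn rgb_to_ycocg_r(r: i32, g: i32, b: i32) -> (i32, i32, i32) {
      let co = r - b;
      let t = b + (co >> 1);
      let cg = g - t;
      let y = t + (cg >> 1);
      (y, co, cg)
  }

  fn ycocg_r_to_rgb(y: i32, co: i32, cg: i32) -> (i32, i32, i32) {
      let t = y - (cg >> 1);
      let g = cg + t;
      let b = t - (co >> 1);
      let r = b + co;
      (r, g, b)
  }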


False.

8-bit YCoCg (even when using the reversible YCoCg-R [1] scheme) cannot represent 8-bit RGB losslessly. The chroma channels would need 9 bits of precision to losslessly recover the original 8-bit RGB values (Co = R - B alone spans -255 to 255, which doesn't fit in 8 bits).

[1] https://www.microsoft.com/en-us/research/wp-content/uploads/...


AVIF supports 10 and 12 bit encoding, which losslessly fits the 9-bit rotation of 8-bit data.

It's also possible to directly encode RGB (channels ordered as GBR) when you set identity matrix coefficients, it's just less efficient.
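The identity-matrix path is just a plane reorder; a sketch of the idea (hypothetical helper, not any particular encoder's API):

  // With identity matrix coefficients (matrix_coefficients = 0 in CICP
  // terms) there is no RGB->YUV conversion: G is stored in the plane
  // where Y would normally go, followed by B and R.
  fn rgb_to_gbr([r, g, b]: [u8; 3]) -> [u8; 3] {
      [g, b, r]
  }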

I've implemented this in my AVIF encoder, so I know what I'm talking about.


Show me any of the popular image conversion tools (avifenc, imagemagick, photoshop, ffmpeg, whatever...) that does the identity matrix hack when asking for lossless AVIF. None of them do it. Many people have been burned by "lossless" AVIF, where they converted their images in the mistaken belief that the result will be bit-identical to the original, only to find out that this wasn't the case, after they've deleted the original files.


That's shifting the goalposts from what the standard supports to the current state of the ecosystem. It's certainly an interesting point though. If common implementations all have bugs regarding lossless encoding that's a pretty bad situation.


I don't get why performance is such a massive issue that they still need to have an artificially low limit on the number of windows.

M-series iPads are more powerful than most of Apple's Mac Pros were. They have 8GB of RAM, but until recently so did Apple's best-selling MacBook models.


  > artificially low limit on the number of windows
AFAIK it's because iOS has swap disabled, so even with 8 GB it's gonna be tight with some apps (or even just browsing some heavy pages).


It's a "unit" in the sense of calling `rustc` once, but it's not a minimal unit of work. It's not directly comparable to what C does.

Rust has incremental compilation within a crate. It also splits optimization work into many parallel codegen units. The compiler front-end is also becoming parallel within crates.

The advantage is that there can be common shared state (equivalent of parsing C headers) in RAM, used for the entire crate. Otherwise it would need to be collected, written out to disk, and reloaded/reparsed by different compiler invocations much more often.
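For illustration, the Cargo-level knobs for the tradeoffs above (a sketch; values illustrative, not recommendations):

  # Cargo.toml
  [profile.dev]
  incremental = true     # reuse per-crate compilation state across builds
  codegen-units = 16     # split optimization into parallel LLVM units

  # the parallel front-end is nightly-only: RUSTFLAGS="-Zthreads=8"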


> Rust has incremental compilation within a crate. It also splits optimization work into many parallel codegen units.

Eh, it does, but it's not currently very good at this in my experience. Nothing unfixable AFAIK, and the parallel frontend can help (though it's currently a significant regression on small crates), but for now splitting things into smaller crates can often lead to much faster compiles.


Yes, the actual implementation is far from what it could be, but the argument was that it's not a language design issue, but an implementation one.


Agreed on that


These kinds of perfectionism complaints keep the status quo of FPTP, which is the worst of them all.


There is a circular dependency — the language strongly influences what libraries/engines can and will be written.

Bevy and Servo wouldn't exist without Rust. Unreal probably wouldn't succeed without C++.

Languages may also matter for other reasons than just their feature set. Node.js got traction specifically because it was JavaScript.

Even though Fortran had state-of-the-art numeric libraries, Python enabled numpy to hit the sweet spot of usability and good-enough speed.

Killer libraries need years of effort to build. That won't happen if users don't want to use the language, or the language isn't good enough for the task.

For Swift to have killer libraries, users must first choose Swift to build them. Catch-22.


They choose Swift if they want to make money in Apple's ecosystem.

