Fuchsia Programming Language Policy (googlesource.com)
308 points by farmerbb on Feb 25, 2020 | 297 comments



Very interesting to see a technology-focused analysis of what Google thinks the strengths and weaknesses of the different languages are.

Usually analyses embed personal biases or are marketing-sponsored posts to spur adoption, but this looks to genuinely weigh the technical merits of each language in the context of using it to develop Fuchsia OS.

From this analysis C++ and Dart are given the green light, Dart for high-level code as "Asynchronous programs can be written using straight-line code" and "People using the language are highly productive", but because of its GC and substantial runtime environment it's "more resource intensive" and not ideal for programs that run indefinitely.

The comparison between the Google-designed and -controlled Go vs the Mozilla-sponsored Rust is very interesting. Since Go is widely used within Google and its implementation could be influenced by Fuchsia, it was initially used, but because of their negative experiences it's been blacklisted, with all code except netstack needing to be migrated to an approved language.

The biggest con of Rust seems to be that it's still new, not widely used, and its unique properties haven't been time-tested enough yet, but it's still approved as it's more performant and requires fewer resources than Go.


NVIDIA has decided to embrace Ada/SPARK after a study of Ada, Frama-C and Rust, and there is a webinar with their decision rationale.

They also had similar reasons; however, the lack of an ISO standard and certified compilers also played a role.


Link to NVIDIA's "Securing the Future of Safety and Security of Embedded Software" webinar about Ada/Spark:

https://www.adacore.com/webinars/securing-future-of-embedded...


They mentioned one of the pros of C that nobody seems to talk about. The stable ABI that allows language interoperability via FFI. Until another language gets this right, C is going to be ever present at the bottom of every stack.


The C ABI doesn't require the presence of C in the stack, it can just be there in spirit like the ghost of Pascal in the Windows ABI.


Yes, but in practice it's often easier to just fire up the C compiler and do some shim files and linking at the bottom.


Easier than what? There are a lot of languages that make exporting/importing the C ABI dead simple. Rust, C++ and C# are some I'm familiar with; I've also been pleasantly surprised at Node.js (for imports) in this regard.
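
For instance, the export side in Rust is one attribute plus one keyword. A minimal sketch (the function is made up for illustration):

    // Give the symbol an unmangled name and the C calling convention,
    // so anything with a C FFI can link against it.
    #[no_mangle]
    pub extern "C" fn add(a: i32, b: i32) -> i32 {
        a + b
    }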


At least, the JVM and .NET have provided stable cross-language ABIs that are much more sophisticated than C's for a long time now. You can even build operating systems with them. The Fuchsia guys chose not to, but it's not like it's technically impossible or anything like that.


>Until another language gets this right

Yes? In the next point they mention that pretty much all languages support the C ABI.


The document is written by a manager and some merits are for the purpose of Google's corporate politics, which aren't really the technical merits a programmer expects. The context and selection criteria are not specified either, although they are fairly obvious, e.g. Go didn't fly because it's entrenched in its niche - network services in data centers - and no amount of control can make that niche intersect with Fuchsia's niche.


> but because of their negative experiences it's been blacklisted with all code except netstack needing to be migrated to an approved language.

Have you got a reference for that? It seems pretty damning if it's true.


Did you read the section for Go?

> Con: The Fuchsia Platform Source Tree has had negative implementation experience using Go.

Decision

> Go is not approved, with the following exceptions: netstack ... In the fullness of time, we should migrate

> All other uses of Go in Fuchsia for production software on the target device must be migrated to an approved language.


Sorry, I mis-interpreted your statement to mean the whole of Google.


Full quote:

> Con: The Fuchsia Platform Source Tree has had negative implementation experience using Go. The system components the Fuchsia project has built in Go have used more memory and kernel resources than their counterparts (or replacements) the Fuchsia project has built using C++ or Rust.

I wonder if they had other negative experiences beside the higher resource usage? Because it's hardly surprising that a garbage-collected language should have higher resource usage than C++/Rust. They could have still supported it for app development however - but it seems they took a "there can only be one" approach and went for Dart instead.


Supporting Go for app development would mean having to design a generics-free API.

Also most likely why Android will never support Go officially.


Generics-free, no exception handling, and shared libraries are hard to support.

That's more cost than benefit.


> also hard to support to shared libraries

why?


Go doesn't really have a standard/stable ABI. The way you include one module inside another is by doing what amounts to a big #include on the source code.


This policy hasn't changed in over a year. I'm on paternity leave now, but my job is Go on Fuchsia, and I work with people doing Rust on Fuchsia. None of us are concerned for our jobs based on this document (which we've collaborated on).

This policy, like most technical decisions, may be amended when things change. We want people to have a consistent and stable platform to develop on, and if a language doesn't officially support our platform, it kind of doesn't make sense to support that language. And there's no commitment to support these languages for production services and end user development until there's a story for the stability of that toolchain on Fuchsia.

This shouldn't be surprising. Make a new system, bootstrap your programming environments. Why bother offering support for environments you've not yet bootstrapped?

As a thought experiment, consider the thousands of languages (including the tens or hundreds of popular ones) not listed on that page, and whether they're supported.

(Edit: I accidentally a word.)


Is there anything (in the plans) to make fidl cross platform?


Not sure what you mean. FIDL as a language and protocol is conceptually inherently cross platform. There are already bindings for multiple language platforms that can be generated from a FIDL specification, and theoretically one could implement a FIDL service anywhere. That said, FIDL services in the system provide a sort of ABI -- the F in FIDL stands for Fuchsia, after all -- and I'm not aware of any actual efforts to implement these on platforms that aren't Fuchsia.


Thanks! I was asking more about the ecosystem - like the support grpc/protobuf have across languages/runtimes/systems.


We use FIDL on host (Linux & macos) platforms via the overnet project. As far as language ecosystems go, as swetland rightly points out below, being runtime agnostic is important to us. FIDL being easy to implement generators for new runtimes is part of that. There is some nascent documentation on porting runtimes here: https://fuchsia.dev/fuchsia-src/development/languages/new?hl...


Thank you so much for this information!!!


For the distribution of languages inside fuchsia, this is the output of "tokei -s lines" in the git checkout of fuchsia: https://gist.githubusercontent.com/est31/5c13979043e760a597a...

According to this, Rust is the language with the most lines in fuchsia. It's important to point out however that of those 2.2 million lines, 1.4 million come from the third_party directory, which includes vendored libraries, mostly Rust ones, and sometimes also multiple versions of a library. The src directory contributes 0.7 million lines of Rust. If you add up the two numbers, you get 2.1 million, so most Rust code resides in those two directories.

This is the tokei output of the src directory: https://gist.githubusercontent.com/est31/5c13979043e760a597a...

To compare those 0.7 million with other big Rust codebases: Servo has 0.39 million lines, the Rust compiler 1.4, and parity-ethereum has 0.18 (using tokei and only looking at the git repos here, without excluding any tests or including dependencies outside of the recursively checked-out repo).


Not sure what these results are meant to be measuring, since most UI apps are written in Dart, which is unlikely to be captured by the 10k LOC across 123 files reported.

Unfortunately, since they've removed their mirror on GitHub we can't easily compare these results vs GitHub project stats, and their https://fuchsia.googlesource.com UI is particularly useless at showing any kind of aggregate analysis.


I cloned the fuchsia repo, this one: https://fuchsia.googlesource.com/fuchsia/

It's the main repo but some components are outside. I presume that the UI apps are among those components.



does the src directory include any vendored software for the other languages? Or is that also only fuchsia code?


Could you clarify your question? The fuchsia subdir contains various user space components of the OS, like a network stack, bluetooth, graphics drivers, etc. Also some tools and test code. Rust is used in most of the components I looked at.


Edited my comment. Didn’t notice that vendored got spelling corrected to censored :(


Ok now your question makes sense, thanks.

> does the src directory include any vendored software for the other languages? Or is that also only fuchsia code?

It doesn't contain vendored software. The third_party dir is responsible for that, at least for Rust and Go. The C libraries used are in separate repos I think.


A more positive take than many of the comments here: this seems like a thoughtful and balanced synthesis of the various tradeoffs between languages for systems development, at least from the perspective of a large project with many developers.


> Go is not approved, [...] All other uses of Go in Fuchsia for production software on the target device must be migrated to an approved language.

Probably makes sense, not what Go was designed for, but I really don't get big-G's choice of "one different language for each niche"...

I mean, ffs, Go and Dart are both garbage-collected and compiled, and even their lists of pros and cons look similar. Couldn't they just blend their features into one language (like, e.g., add generics + some syntactic sugar to Go, to make it more usable for app and GUI code too?) instead of fragmenting the mindspace even more? Why don't people see the advantage of "universal" languages? It's obvious that developers love them and are empowered by them, hence the success of languages like Javascript/Typescript and Kotlin despite their obvious flaws and limitations!


Go has been a minimalist language from inception. It's not "let's try to make the best language, period", it's "let's explore what benefits we can gain from sticking to a limited subset of language designs that hasn't been seriously tried in decades".

Dart might be a bit of a kitchen sink language (what's it with Danes and operator overloading?), but you couldn't port over the minimalism of Go by addition; it would not be minimalism anymore.

Dart innovations like "collection if/for" syntax seem laughably trivial, but when you look at the examples for declarative UI it should become immediately clear that they reduce cognitive load a lot compared to the equivalent flatMaps or imperative state buildup. Those syntax goodies seem really nice in the scope Dart is made for, but I think that they would look seriously out of place in a more general purpose language.


> "collection if/for"

I knew exactly how handy that was the first time I saw it.


Even after having publicly sung its praise, a part of me still considers that feature outrageously bad taste. There could be a nicely orthogonal language feature instead, with universal applicability for god knows what, instead of this highly pragmatic superficial syntax hack! I think it's the part of me that loves Scala. (I do, but I'm also installing the Flutter SDK while writing these lines)


Maybe it’s a syntax hack. But, man, sometimes you just need to get a thing done and... it works. When you see it, it makes sense and if it makes sense and works... I don’t really care how “pure” it is.


> Why don't people see the advantage of "universal" languages? It's obvious that developer love them and they are empowered by them, hence the success of languages like Javascript/Typescript and Kotlin despite their obvious flaws and limitations!

It's curious to argue that we should all be content with "universal" languages and stop developing new ones by citing TypeScript and Kotlin, two brand new languages created specifically to replace the already existing "universal" languages of JavaScript and Java.


They're "universal" languages replacing other "universal" languages. That's orthogonal to parent's point that "universal" languages are better.


I seriously thought Dart was abandoned. Is it actually being used outside of Google?


Dart was the fastest-growing language (growing more than 500% YoY) in the latest GitHub State of the Octoverse report.

https://octoverse.github.com/

It's #23 in the current TIOBE index.

https://www.tiobe.com/tiobe-index/

It's a long, long way from being as widely used as the top 10 languages, but many people outside of Google are using it.

Since Dart is the language of Flutter, which is growing in popularity, it should substantially increase its user base in the years ahead.


I think that Flutter is helping a lot with the adoption of Dart. I learned Dart to work on a Flutter project and ended up being quite impressed with it (and Flutter for that matter). Jetbrains IDE support is quite good too.


Living under a rock?

It was rescued by the AdWords team, and later the Flutter team decided to use it instead of JavaScript; in the process they turned Dart into a strongly typed language with type inference (thus everyone from the dynamic camp left the design team), and nowadays the futures of Flutter and Dart are tied together.

Flutter became enough of a nuisance that the Android team has now come up with Jetpack Compose, to the detriment of the existing Android UI toolkits, because they need to have their Java/Kotlin Flutter.


Quoting HN rules:

> Be kind. Don't be snarky.

Your comment would have been quite helpful if you just removed the first line.


Noted.


No one from the Go community would touch such a "Frankenstein"; it remains to be seen if Go 2.0 will really add generics or if it will be shot down like the improved error-handling proposal.


Will Go 2 really happen?


I still look forward to it; let's see.


Let me get this straight, your solution to not fragmenting the mind space is to make a new language only used in Fuchsia?

Relevant XKCD: https://xkcd.com/927/


> ...and supports for production software on the target device

Assuming I understand this right... why should an operating system limit the programming languages used for creating applications at all? That's a bad trend to follow which unfortunately seems to be quite common on more recent platforms.

Since C seems to be supported (which I assume means: there are C headers for operating system APIs, and which btw is a great thing), wouldn't any language which can call into C APIs work (which is nearly every programming language on the planet)?

E.g. even if the OS doesn't "officially" offer Go bindings, why should a third-party not be able to come up with Go bindings via the C-APIs? Also "Con: The toolchain produces large binaries." is laughable, because from the POV of a C programmer, everything else out there produces "large binaries" ;)


It's ambiguous. "Not supported" might mean "we aren't going to write APIs in that language, and if you manage to hook in via FFI we don't promise stability." Also, since Fuchsia is security oriented, it probably will block various low-level tricks for accessing system resources that don't use official APIs.

Reading between the lines, it seems Dart isn't better than Go from a performance perspective, but Dart is great for UIs so it is worth paying its cost on an embedded device.


Interesting that the only con listed for Rust is "not enough people are using it", compared to the serious criticisms of the others.


The reasoning does sound serious: "The properties of the language are not yet well-understood, having selected an unusual language design point (e.g., borrow checker) and having existed only for a relatively short period of time". It isn't just "it isn't popular thus it is not good" but "it does something weird that no other language does and because few people use it, we haven't yet figured out what potential issues that weird part may have".
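
To make the "unusual design point" concrete: the borrow checker statically rejects aliasing mistakes that other languages surface only at runtime, if ever. A tiny made-up example; the point is that it fails to compile:

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0]; // immutable borrow of `v`
        v.push(4);         // rejected: `v` can't be mutated while
                           // `first` still borrows it
        println!("{}", first);
    }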


Reads to me like a slightly more general way of saying "this is not the time to specify an API in Rust that you intend to maintain in perfect backward compatibility forever". But if they said that directly they would be inviting a wave of counter-arguments that they most likely already went through themselves, repeatedly, considering how much love Rust is getting for in-tree use (where APIs can be refactored at will). I believe that their main issue is that they don't want to end up with the Rust equivalent of a Java API stuck with Vector and Enumeration.


After having 2.2 million lines of code in Rust in their project they should know a bit better, imho; that's more than any other language they use. With Go they knew quite quickly it's not good for them.


Usually language related issues are only understood when a project scales beyond a core team of experts.


Apparently most of those lines are vendored dependencies.


As someone whose project was vendored into Fuchsia, I will say that there were a number of contributions made directly to the project to support its use in Fuchsia.

This means that while the overall project is used outside of Fuchsia, Google is contributing upstream patches to many of those projects specifically to support Fuchsia. At that point it's not a clean separation of what's "in house" vs. not.


0.7 million of them are their own.


Yes, but if you think about it, it doesn't make the argument weaker but arguably stronger - not only do they use all those lines of code, but it also seems that you can do it, i.e. use low-level (I'm guessing) 3rd-party libraries and it will work for you; and they must have a high bar for quality, they operate at a low level, etc.


A relevant factor could be in the other con

> Con: None of our current end-developers use Rust.


I seriously doubt whether any of their “end users” are allowed to use Rust.


End users here means user-space application developers


> does sound serious

Do you mean legitimate, instead of serious? The issue you point out sounds more like technical debt and verification/specification rather than something that can’t or won’t be overcome.


Yes, I meant legitimate, though I'm not sure why you say that it sounds like technical debt. I'd say it is the opposite - trying to avoid technical debt that could happen by betting on something unknown, and instead staying on the safe side.


I meant technical debt for the Rust project, not for Fuchsia. I don't disagree; Fuchsia would end up incurring this debt in the kernel too.

I would say that, as debt goes, this seems like something not egregious, but I can understand not wanting to take it on. But with every passing year and no significant flaws having been discovered (I mean language-destroying, not some of the unsoundness bugs that exist), the empirical evidence is getting stronger and stronger that Rust has an excellent model.


That doesn't seem like a non-serious criticism to me. They're trying to build something huge that's of immense strategic importance looking forward potentially decades. It seems appropriate to adopt the utmost caution about incorporating a language that's promising but for which widespread traction might not materialize as expected. Though to be fair, the same (and more) might be said of Dart...


The difference I think is that Dart is a dependency they have already accepted on the level of "we'll maintain it if necessary" (probably in the form of the intra-Google equivalent of an acqui-hire; I wouldn't even be surprised if Dart/Flutter were already reporting to Fuchsia, while publicly still appearing as equal peers), whereas Rust is a big scary NIH.

And still the decision on Rust reads very much like a "we'd love to extend the scope when the time is right", whereas the dismissal of Go is surprisingly brutal; it almost reads as if there were a cold civil war going on between the two garbage-collected Google languages and the Fuchsia people felt the need to demonstrate loyalty to their Darters. Might even just mirror an equal dismissal regarding server-side Dart.


> Might even just mirror an equal dismissal regarding server side Dart.

That'd be my guess. Given the nature of Flutter and its co-development with Dart it's not surprising that Fuchsia prioritizes it. After all, you need to implement the UI in something. Meanwhile I have literally never heard of a UI implemented in Go, and outside of the UI I can see why they don't want to use garbage collected languages in their OS.


> I have literally never heard of a UI implemented in Go

I expect it has the same issue as UI in Rust: UI is one of the domains where OO inheritance is most convenient and most deeply embedded. So languages which don't do inheritance are hard sells.

Newer “declarative” UI frameworks less so, but they're probably not mature enough conceptually that you'd want to bet your OS's core UI system on them; right now they'd be used as an overlay on the core stateful UI system (see bodil's vgtk for example).


Flutter is a declarative UI framework in Dart, and is the core system UI for Fuchsia. Shallow inheritance is used extensively there


There are a couple of them, including some books, but yeah there are better alternatives out there.


Yeah, but when something is built in-house like Dart is, I imagine there's some level of support the Fuchsia team will get.


The world of paid Rust language contributors is so small that Google could easily get all the advantages of "built in-house" for Rust if they spent a relatively tiny amount of money.

Being pretty conservative overall but betting the house on Dart for the UI seems like a strange combination of decisions to me.


They don't seem to be betting on Dart by itself as much as they are betting on Flutter, which is already reasonably successful and relies on familiar reactive component concepts popular on the web and well tested in Google's own web frameworks (Polymer, Angular).


> They don't seem to be betting on Dart by itself as much as they are betting on Flutter

You don’t get Flutter without Dart. Anyone that’s ever looked at what is behind any Flutter component can see the Dart code building it.

You literally cannot bet on Flutter and not Dart. Don't mistake Flutter for some declarative UI of its own.

EDIT: However, that's not to say the cart can't lead the horse (good analogy). Flutter requirements are definitely driving changes in Dart Lang.


I agree with you; the point I was trying to convey is that Google isn't betting on Dart as a pioneering, untested technology the way Rust is.

Flutter demonstrates Dart is a good choice; it gives you a successful declarative UI framework that effectively builds on Dart as a fairly straightforward upgrade of the most tried and true UI scripting language ever made: Javascript.

To answer the GP comment, betting the house on Dart for the UI doesn't seem like a strange or risky decision in that light.


That you like Dart and that it's a good fit for UI development doesn't make it less risky.

It still seems incongruous that widespread usage is portrayed as an important criterion for Fuchsia PLs, but they bet big on Flutter, which forces them to adopt Dart, a language which has very little uptake outside Flutter.


I'm not sure how you got "I like Dart" from my two comments. I'm clearly saying that Dart & Flutter are based on very popular and well-tested concepts/structures; therefore it is not very risky. Dart & Flutter by themselves may not be very widespread, but they're very familiar and easy to adopt for anyone who has done declarative UI web development in Javascript or a Javascript-like language, which are widespread.

Rust, on the other hand, is treading new ground with the unusual core concept of a borrow checker.


> To answer the GP comment, betting the house on Dart for the UI doesn't seem like a strange or risky decision in that light.

Yea. Before using Flutter, I probably wouldn’t have agreed. After using, I can’t disagree at all.


Is there a conspiratorial side of Mozilla vs Google in this Rust vs Dart story?


The languages are as different as Python and C++; I honestly don't really see a Rust vs Dart story. Note that Rust is approved for use within much of the source tree, but outside it is only not supported. Johnny end-developer, whoever that is, could still use Rust if he insisted, coding against C bindings.


Just like it happens in Android.

If Johnny doesn't like JVM languages, or C++, all they get is a bare bones C API, which requires JNI even for opening files, asking for permissions and so forth.


I don't know what "conspiracy" you might be referring to. There's a natural tendency for organizations to gravitate toward tools, such as programming languages, that originated there. That's just human nature, though, not anything nefarious.


> They're trying to build something huge that's of immense strategic importance looking forward potentially decades.

This seems like an exaggeration. Maybe Fuchsia will turn out to be important, maybe it won't. As of right now, it's another Google vanity project that makes them no money.

It could turn out to be enormously strategically important by allowing Google to drive a stake through the heart of Linux on consumer devices...or just as easily get killed tomorrow.


There isn't a lot of good software written in Rust, open source or not. It's a lot of small and half-finished stuff here and there. The actual software out there is the ultimate, unforgiving, unfraudable review of a language, not what people say.

Developers seem to like Rust (or at least pay lip service). It's understandable. We are all suckers for a golden hammer. Rust promises no data races, no dangling pointers, high performance, and best of all it can run on numerous targets, making it a contender for The Last Language you have to know.

But you don't judge a restaurant's performance by the holiness of the chef's choice of tools in the eyes of other chefs. What is the most profitable piece of Rust software or the piece of Rust software that the most people depend on? If it wants to be taken seriously at the systems programming table, then we should see the Unix coreutils written in Rust (with all the command line flags working exactly the same way). Come on, replace GNU. You say you can do it faster and safer than everyone else. Let's see it.


> What is the most profitable piece of Rust software or the piece of Rust software that the most people depend on?

You've got:

* Every page load of Firefox

* Every page served of Reddit

* Every byte stored in dropbox

* Every user of 1password's browser extension, and every user of their Windows client.

* Every HTTP/3 request served through Cloudflare (okay HTTP/3 is still a baby but, the future here)

* Every DNS request of every user of the 1.1.1.1 mobile app (hundreds of thousands of installs)

* Every invocation of AWS Lambda and Fargate

* 4% of Debian packages

* Every control-F in Visual Studio Code

* Every time a user list changes in Discord, also other things.

... so, yeah. Take your pick.


Interesting. I thought that only find in project (ctrl shift f) was using ripgrep, but I kind of thought that find in current file (ctrl f) was done differently.

Find in project is really awesome in VSCode.


I thought they both did, but I'm also not an expert on Code's internals.


AIUI, VS Code uses Node's regex engine for searching in the current file. But I am also not knowledgeable about Code's internals. :-)


I'll pick using an example from Discord as being a rotten apple in the list. The less this malware is mentioned the better.


I don't think Discord is malware, but we have straight-up 100% malware too! :p https://news.drweb.com/show/?i=10193&lng=en


To be fair the cancerous part of it is the Electron JS frontend, which is not to my knowledge at all implemented in Rust.


> we should see the Unix coreutils written in Rust (with all the command line flags working exactly the same way)

While I'm not convinced that implementing an identical API to coreutils is necessary to be "taken seriously" as a systems programming language, there's a fairly far along implementation that already exists:

https://github.com/uutils/coreutils/blob/master/README.md


> There isn't a lot of good software written in Rust, open source or not. It's a lot of small and half-finished stuff here and there.

Perhaps you and I have very different definitions of "a lot" or of "good", because I don't agree with this at all. There are plenty of high profile Rust projects with excellent production track records. Linkerd, TiKV, and Firecracker (originally crosvm) come to mind immediately, and of course Servo. Facebook also selected it for Libra, Google for Fuchsia.


> half-finished

Literally all of the things you mentioned (except linkerd which is mostly Go) are half-finished incubator projects. Rust has been around for over 10 years. Come on.


AWS blog on Firecracker, November 2018 (source: https://aws.amazon.com/blogs/aws/firecracker-lightweight-vir...)

> Today, Lambda processes trillions of executions for hundreds of thousands of active customers every month. Last year we extended the benefits of serverless to containers with the launch of AWS Fargate, which now runs tens of millions of containers for AWS customers every week.

> Battle-Tested – Firecracker has been battled-tested and is already powering multiple high-volume AWS services including AWS Lambda and AWS Fargate.

That doesn't sound like a half-finished incubator project.


There is literally no Rust in the main repo:

https://github.com/linkerd/linkerd2 https://github.com/linkerd/linkerd

It's confounding how a project that doesn't include Rust is included in Rust's "Greatest Hits."

Citing a cryptocurrency that...for all intents and purposes, doesn't really exist right now, is also a strange choice.


Linkerd consists of a "control plane" and a "data plane." You linked to the control plane, written in Go. You want the data plane, here: https://github.com/linkerd/linkerd2-proxy


Ah, interesting! Thanks for chiming in!


Using language age as a definition of success isn't really being fair. Plenty of very successful languages took a long time to gain traction. For instance, Python was in a very similar position to Rust for almost 15 years before it finally started to shoot up into what it is today. In 2000 you were considered eccentric if you chose to use Python for anything but the most dirty scripts, but now it's being used for complex production systems.

I'm not blind. I can see that Rust has its major downsides, and the zealotry you see on HN and elsewhere is downright annoying. But dismissing it based on age is being disingenuous. There was another list of projects elsewhere in this comment thread which listed a number of things being used in production that can't be considered half-finished by any measure.


Just my two cents: there are lots of Rust zealots on HN right now. Rust is at the top of its hype cycle, so it's a bit difficult to have balanced discussions even here. You're completely right that Rust isn't one of the main languages that drive the industry yet, but everyone will jump on you with tons of partial examples, and will claim it's already flying. Sigh.


It seems to me like the only point of difference between you and the zealots is the definition of success.


Rust sounded like it was close:

> Rust is not supported for end-developers.

> Rust is approved for use throughout the Fuchsia Platform Source Tree, with the following exceptions: kernel. The Zircon kernel is built using a restricted set of technologies that have established industry track records of being used in production operating systems.

That’s better than Go, which is not supported.


The majority of the code in the project is already written in Rust. If we're judging Rust based on its acceptance in Fuchsia for some weird reason, it's doing very well. Doing the best, in fact.


I'm sorry your comment is in the grey here. I think if you'd just stuck to "If we're judging Rust based on its acceptance in Fuchsia for some weird reason, it's doing very well", it'd have been better received.

Calling it the "majority" or "doing the best" is probably coming off as disingenuously implying "most significant". SLOC doesn't meet anyone's idea of that.


That's probably in part demand driven? I.e., there's little need for Rust to be supported officially for end users.


Moreover, C is supported for end users, and if C is supported then Rust is de facto supported through C bindings, which Rust understands natively.


Someone will have the fun of writing those wrappers, because Google won't do it.

Likewise a future Fuchsia Studio won't support templates, debugging, or OS libraries written in Rust.

And the Fuchsia team most likely won't prioritize toolchain bugs related to Rust.

This is the biggest difference between using an official SDK language and guest languages.


Rust doesn't have a stable ABI, so it's sensible to always go through C bindings anyway. The other things you mention are just a matter of what their future Fuchsia Studio chooses to support.
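
The import direction is similarly mechanical; a minimal sketch, assuming a hypothetical C function `os_open` exported by some platform library:

    use std::ffi::CStr;
    use std::os::raw::{c_char, c_int};

    // Declaration of a hypothetical C API; the linker resolves it
    // against whatever library actually provides the symbol.
    extern "C" {
        fn os_open(path: *const c_char, flags: c_int) -> c_int;
    }

    fn open(path: &CStr) -> c_int {
        // Calls through the C ABI are unsafe; a real binding would
        // wrap this in a safe, idiomatic API.
        unsafe { os_open(path.as_ptr(), 0) }
    }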


Pretty much every con in Rust is some variation of (we | others) don't use Rust enough. Interesting to see!

Would have assumed the Kernel would be where Rust truly shines, but that's where it's blocked, which is... interesting!


Given that they already have a microkernel written in C++ (zircon is derived from littlekernel), and they're trying to move as much as possible outside the kernel, it makes sense that (for the time being) adding a new kernel language isn't on the table.


It's worth noting that Go and Dart were characterized as "highly productive" whilst Rust wasn't.


Absolutely, I don't think any sane Rust zealot would argue that Rust can compete with Go/Dart productivity. The argument can be made that Rust code has a lower maintenance cost over time, though, and that productivity may not drop as much as the system becomes more complex.

Even with this relative improvement, though, I question if it's enough to overcome the shorter compile and test loop the other languages have, or the mental overhead of managing lifetimes and ownership.


Rust binary size can be pretty painful, depending on what you're doing. Also, it doesn't look like that document is receiving regular updates, so grain of salt and all.


> it doesn't look like that document is receiving regular updates

It was checked into git yesterday.


Good to know! I'll look for an OS written in Rust elsewhere then. Fuchsia is interesting, but could be much more so.


If you want to learn to write an OS in Rust, check out this series: https://os.phil-opp.com/

For an OS project fully in Rust and targeting end users, check out: https://www.redox-os.org/


I'm not very happy to see C++ infecting more and more system software when it STILL IN 2020 doesn't have a stable FFI/ABI.

This is going to bite us ALL in the future because it will saddle other languages with a useless set of constraints long after C++ gets removed from a project.


With the exception of the change to std::string in C++11, which was very, very carefully worked around so that C++11 code can handle C++03-ABI std::string, there have been no changes to the ABI implementation since the Itanium ABI was adopted by gcc almost two decades ago.

How is that not a stable ABI?


That's not a C++-ABI but a C++-as-compiled-by-gcc-ABI. C++ itself does not define an ABI and different compilers (sometimes even from the same vendors) will use different incompatible ABIs.


It is the Linux standard C++ ABI, as defined by the Linux Standard Base. An ABI for a low-level language is necessarily (OS, architecture) specific, so you can hardly do better than that. There is no ABI that could be usefully defined at the standard level (and even if there somehow were, it would be mostly ignored[1] as compilers wouldn't break compatibility to implement it).

[1] I could see the committee standardizing some intermediate portable representation requiring installation time or even JITing in the future though.


It is not the Linux standard C++ ABI, it's just the de facto standard ABI because of gcc's former dominance and clang imitating the ABI. And I broke things in the past, where I had to recompile stuff, due to different compilers (clang, clang+libc++, gcc in different -std=c++ modes) producing not-100%-compatible outputs.

You can say it's good enough (most of the time), but it isn't really a standard, unless I am mistaken.


The Itanium ABI is not just whatever GCC does; while it is not an ISO standard, it is an intervendor ABI documented independently of any compiler implementation, and changes are agreed among compiler teams. It is continually updated to track the evolution of C++.

The standard library ABI is not covered by the Itanium ABI (outside of some basic functionality), but it is necessarily defined by the platform. For Linux that would be libstdc++.

The LSB references the Itanium ABI and defines libstdc++ as the ABI for the C++ standard library on Linux platforms; it is again not an ISO standard, but it is as close as you can get on Linux.

And of course the C++ ABI is very complex, and both the ABI document itself and compilers have bugs from time to time, especially if you live close to the bleeding edge.


https://uclibc.org/docs/psABI-x86_64.pdf page 106 begs to differ:

> 9.1 C++

> For the C++ ABI we will use the IA-64 C++ ABI and instantiate it appropriately.

The Itanium ABI is the official C++ ABI on Unix systems. (Note that this same document officially documents the C ABI).


Although it might de facto be, at least for AMD64, I wouldn't say it is the official standard ABI of all Unix systems. But it is the standard ABI of Linux-based systems, at least those that claim to conform to the LSB.


There's more to a practical language ABI than stack and vtable layout. If you write idiomatic C++, this means passing objects from the standard library around. If different compilers use different implementations of the standard library that aren't layout-compatible, things break.


On linux, libstdc++ implementation details are part of the official ABI.


The Linux processor-specific ABIs explicitly call out that the Itanium ABI is the C++ ABI for x86 and x86-64; ARM has its own ABI that differs from the Itanium ABI only in the exception handling details.


> How is that not a stable ABI?

Because it's an ABI with several severe constraints, especially around the more OO features and templates, plus the solutions introduce even more abstractions: https://community.kde.org/Policies/Binary_Compatibility_Issu...


All ABIs define a similar set of constraints, especially when the programming languages are a bit more expressive than pseudo-assembly.


> especially when the programming languages are a bit more expressive than pseudo-assembly.

Most higher-level languages (Java, C#, Python) handle most of those much better, albeit with a different set of trade-offs. Things like adding a private field to a class won't break binary compatibility in a C# application. C++ is fairly unique in that it tries to be both high level and low level, but the cost is that it pushes the complexity of this split personality onto the developers.


C++ is moderately safe if one doesn't use it as a C compiler; even the standard library does bounds checking across all major compilers, during debug builds or with specific compiler switches.

There is a stable ABI inside some OSes, more so than for any wannabe C++ replacements.

Speaking of which, Rust is a very nice language, but it still lacks much of the productivity tooling that systems developers have come to expect.

If anything this rationale is great input for the Rust community on how to improve the ecosystem.


FFI in C++ is never gonna happen with the preprocessor and templates being there. By design of the language it basically won't ever work. You would need to recompile the world unless you're artificially restricting yourself to the equivalent of extern "C". C++20 modules won't make it better either.

I don't think Rust, Dart, or Go are any better.

In practice it seems C++'s ABI is called "protocol buffers".


Basically correct, though in Fuchsia it's called FIDL. It's protocol buffers, but not the specific project that Google refers to as "protocol buffers".

https://fuchsia.googlesource.com/fuchsia/+/master/docs/devel...


Which is also used by Android.


It works perfectly fine with COM, especially after the COM 2.0 (aka UWP/WinRT) improvements.


They don't want to add loadable kernel modules (but instead go pretty strict microkernel), so what would an unstable FFI/ABI really matter?


It's much easier to progressively migrate away from a language if you have a stable ABI.


The goal is to get it to the size where it can be replaced all at once. For instance sel4 is only 10kloc.


Notice that they omitted the "highly productive" claim they applied to Dart and Go.


Sorry, but that was not the only con. The others were that the design is unusual and not yet well understood.

And "not enough people are using it" is a devastating critique of a language - every piece of code written will be read by someone eventually, and you want that pool to be as big as possible.

To me it seems Google is cautiously bullish on Rust.


I agree with your summary.

My worry comes from the fact that a computer system of the scale of Fuchsia comes up every couple of decades. I'd like not to waste another one on C++'s shortcomings, and Fuchsia going with Rust was a great chance for a better dominant language in the '20s.


The microkernel is a pretty small component. It's like if 80% of the Linux kernel was Rust and 20% was C++.


> compared to the serious criticisms of the others

Adoption and community are one of our top factors. It's a pretty serious one, IMHO.

EDIT: our as in our company. Not in any way affiliated to Fuchsia.


Seems like Go has more positives and the same negatives as Dart, but Dart is approved while Go is being scrapped in Fuchsia land.

Having said that, I wouldn't want to work on GUI applications with Go, while Dart did have some handy semantics when I tried it.

A rather low level of detail in this comparison though, would have liked something a little more in depth.


The big difference seems to be:

> Con: The Fuchsia Platform Source Tree has had negative implementation experience using Go.

i.e. they tried it, because they're from Google and they basically had to, and it wasn't great.

Not surprising, honestly, given how opinionated (in the "users of this language are stupid" direction) Pike et al. seem to be.


Not "stupid" but "inexperienced".

Rather obviously, people who implement OSes are usually the opposite.


Especially given that even the Dart team managed to implement generics...


> opinionated (in the "users of this language are stupid" direction) Pike, et. al. seem to be

Such a strong statement, wrapped in quotes to make it seem that this is Pike's literal words, should really be substantiated with a reference. Otherwise you're putting words in Pike's mouth that he never said.


“The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.” – Rob Pike


You have supported what I said; they were not his literal words.

Where does he say "users of this language are stupid"?


They did not have a negative experience; that is not what's written.


What do you mean? It's a direct quote from the page.


Fuchsia is meant to be used in a variety of contexts, like IoT and embedded stuff, and Go often doesn't play well in such areas.

People coming from a language like C++ and senior enough to write OSes may be startled at how inexpressive Go is.


Even Java, which started as a blue-collar language, or modern Basic dialects, which started as introductions to programming, are more expressive than Go will ever be.


On the same line they mention memory/CPU usage; I don't read it as "we had a bad experience using the language" but more that Go doesn't suit xyz well because of its memory usage.


Hence the word implementation accompanying experience, which was not omitted in the citation you're responding to.


Why can’t it be both?


You might not agree, but the idea is that you should devote your cognitive resources to the problem you’re solving and not working around complexities in your language (such that brilliant developers such as yourself and “stupid” developers like me can be even more productive than we would be with C++ or Haskell or whathaveyou) or choosing stylistic standards for how contributors are to program (because the language is so expressive that the solution space is astronomical). “Go programmers are stupid” is a deliberately uncharitable interpretation.


You might not agree, but there are programming languages that, unlike Go, don't take a "programmers are stupid" opinion, that are incredibly productive. Here are some other ways you can make developers productive (just off the top of my head):

1 - give programmers access to powerful jedi tricks, but make those just annoying enough that novices aren't terribly tempted to build them and put them into prod (but not so annoying that they aren't tempted to play around with them not-in-prod and learn something about the runtime)

2 - make "doing the right thing" easy, like, tests, documentation, sane package management, cryptographic primitives, comments, tests, etc, also, did I mention tests? Tests should be easy and you should want to write them.

3 - Make tests blazingly fast and parallelizable. That means, you can write two (or more) tests that hit the database (or some other source of state) and it doesn't matter that they are operating on different views of the universe, they shouldn't collide.

4 - be opinionated about deployment, so that those rube goldberg tricks you have to do to put into prod are testable and reproducible.


> Tests should be easy and you should want to write them.

I must be using go wrong then because I've always felt they were easy and wanted to write them. The last big project I built with it had great test coverage.

> be opinionated about deployment, so that those rube goldberg tricks you have to do to put into prod are testable and reproducible.

Of all the languages I feel like go's deployment is probably the simplest, if it's hard for you you're probably doing something pretty wacky.


> Of all the languages I feel like go's deployment is probably the simplest, if it's hard for you you're probably doing something pretty wacky.

100% agree with this. Statically compiled binary is basically the easiest thing to deploy.


…until you update your OS, and your binary breaks because it tries to make raw syscalls.


Considering how infrequently this happens and how utterly trivial it is to recompile, I'd take this every time over complex deployments on the happy path.


Deploy isn't just dropping a binary into place, it's everything else. If you're in ec2, what sorts of vms or security group are you using, how do you talk to AWS, if you're on a bare Linux, are you putting in apparmor and fail2ban? If you're in eks, the reams and reams of yaml. What is your restart strategy for your process when it crashes... How do you connect your process to the cluster and instrument it with secrets...


I mean your mileage may vary on usage but we generally just build a new ec2 image, basically Amazon Linux + Go Binary, set the autoscaler to use that, and if we need the old one cycled out fast kill the older instances. We rarely need to do that, we try to avoid it.

We let the AWS autoscaler health monitoring just kill unhealthy ec2 instances. No need to worry about any sort of restart policy. It is VERY rare we actually get a Go process in an unhealthy state.

We’ve always handled banning in app, so that’s never been a consideration. Our rules around that are very complicated as we sell into institutions that have thousands of people behind a single IP address, so blocking a full IP incorrectly could mean institution wide downtime and possible loss of a client.

Our secrets get set with a little come-online script in the ec2 image. Secrets change? Kill the instances and autoscale more. Instrumentation is via REST api.

It's basically the same way we deploy anything else, but without worrying about library and language versions. It's reasonably simple, and it's been very easy, especially compared to the previous ways we used to deploy apps.


> The Fuchsia project has the opportunity to influence the evolution of the language.

I think this is a big one. Even though they listed this for Go, I'm curious if they actually believe it; I certainly don't.

For Dart they're potentially a significant stakeholder, for Go though I can't imagine them getting any significant changes through that don't benefit server side programming at Google.


Did you know Fuchsia is also a Google project?


It’s easy to underestimate how divided and chaotic a company with 20k full-time engineers can be. No company of that size moves or thinks as a coherent unit.


Go wouldn't have been suitable for Flutter because a major factor was hot reload with the AoT compiler, something that won't exist with Go.

I like Go, but Dart has been a pretty good language to learn.


I’m also disappointed that Go isn’t approved, but the UI framework is implemented in Dart, so it seems like an understandable decision to permit Dart.


It seems they’ll have an abundance of C and C++ developers because of the nature of writing a kernel. These developers can easily fill the niche that Go traditionally fills, probably with slightly better performance.


C and C++ are the only languages used in widely deployed production OS kernels, so Fuchsia decides to only use them in its kernel, too.

Nobody seems to want to stand out and use e.g. Rust in the kernel, so the situation is perpetuated. (How old was C again when it was used to write the Unix kernel? Apparently Google's stakes here are higher.)


I think that it's more that they already have a kernel, and their focus is moving as much as possible outside the kernel rather than adding to it.

And, FWIW, I can see a point where they have a kernel written in C or C++ that's formally verified (like sel4), at which point, what's the point of rewriting it in Rust? sel4's semantics are stronger than what Rust gives you out of the box.

I say this as someone who's written a handful of Rust kernels, and is quite bullish on Rust adoption.


IIRC sel4 was initially specified in Haskell, formally verified, then formally translated to C.


> While the Haskell prototype is an executable model and implementation of the final design, it is not the final production kernel. We manually re-implement the model in the C programming language for several reasons. Firstly, the Haskell runtime is a significant body of code (much bigger than our kernel) which would be hard to verify for correctness. Secondly, the Haskell runtime relies on garbage collection which is unsuitable for real-time environments. Incidentally, the same arguments apply to other systems based on type-safe languages, such as SPIN [7] and Singularity [23]. Additionally, using C enables optimisation of the low-level implementation for performance. While an automated translation from Haskell to C would have simplified verification, we would have lost most opportunities to micro-optimise the kernel, which is required for adequate microkernel performance.

https://www.sigops.org/s/conferences/sosp/2009/papers/klein-...


Neat, I didn't know about seL4. Can you point me to how they define the proof for it?


They've done a really good job documenting it in their papers and online docs, but the general flow is to verify equivalence of the generated binary and the formal specification, then to prove properties like memory safety of the formal spec.

I'd start here if you want to learn more: https://sel4.systems/Info/FAQ/proof.pml

Let me know if you have any other questions.


FOSDEM speech video [1] about the status of seL4 that I really enjoyed.

[1]: https://fosdem.org/2020/schedule/event/uk_sel4/


> what's the point of rewriting it in Rust

The fact that you don't have to write C and also that you can have different API semantics. E.g. you can pass user space code (say callbacks) to the kernel while maintaining safety.


Bad idea in the wake of Spectre and Meltdown. You want user code in a different address space for those reasons these days.

Unless you mean using the user callback more in the way that, say, a POSIX signal handler works, or Microsoft's SEH or APCs, in which case I would say it's already possible with a C ABI, citing those things as examples.


I don't see a world where they allow "run this user code as ring 0".


I do. If it's a callback and all arguments are passed in, it's fine.


It can literally take control of the machine. Even if it's written in Rust, there's no "this binary used an unsafe block" on binaries. If they're signing and proving everything, then what's the purpose of sticking that code in user space to begin with?


There does exist a precedent in the mainframe world of trusted compilers and mechanisms for verifying that a piece of code was produced by the blessed compiler.

I guess the embedded version of this would have to be an offline compiler & code signing based system, and the language would need to be much more sandboxy than Rust.


I know; maybe you are writing a driver or something.


> C and C++ are the only languages used in widely deployed production OS kernels, so Fuchsia decides to only use them in its kernel, too.

Frankly - who didn't see this coming. The widely touted network stack written in Go will be rewritten in something else.

> Nobody seem to want to stand out and use e.g. Rust in the kernel, so the situation is perpetuated

Redox does, but it's still early days there apparently. I tried to boot it in VirtualBox a few minutes ago, following some instructions on StackExchange[1] but it didn't boot.

> How old was C again when it was used to write the Unix kernel?

Very young, as C was developed between 72-73[2], and the first rewrite of UNIX in C from PDP assembly happened in 1973 for V4 (although it wasn't really _portably_ rewritten until 1978[3]). The C rewrite was the first version to support multi-programming; before that it was single-tasking.

(maybe you know all this, but I post the info to clarify for other readers who might not, and to underscore your point).

Also don't forget that UNIX source was reasonably available in those days, and both C and UNIX developed alongside the much simpler hardware of the day, and evolved with them. Hitting modern hardware targets, trying to provide modern OS features out of the gate, and using a newly emerging language seems to be a feat which is an order of magnitude larger IMHO.

[1] https://unix.stackexchange.com/questions/463192/install-redo...

[2] https://en.wikipedia.org/wiki/C_(programming_language)

[3] https://en.wikipedia.org/wiki/Unix#History


> Redox does, but it's still early days there apparently. I tried to boot it in VirtualBox a few minutes ago, following some instructions on StackExchange[1] but it didn't boot.

Worth giving it another go. I've been playing around with it in QEMU lately, and it even booted on my laptop (but I had no working input since it seems USB HID isn't implemented). It's an impressive project.


Remember the kernel here is pretty small. A lot of stuff that's built-in to the kernel in e.g. Linux is implemented outside the kernel in Fuchsia, and a lot of that is implemented in Rust.


Might have more to do with them starting with a stable microkernel as a base. They picked LK.


Why aren't you writing a kernel in Rust?


Mostly because I'm not qualified to write serious close-to-metal code (I majored in embedded systems 25 years ago, but gravitated up the stack), and don't have enough time for that as a hobby.

The day is only 24 hours long, or maybe even less.


Definitely sad to see Go get blacklisted and put on the “eventual replacement” list. The reasons, like Dart’s, make sense, but it’s still gotta be a kick in the teeth for the Go team. I wonder how difficult it was for them to use it, and whether it was during one of Go’s “transitional periods.”

I wonder if Rust/Elixir will be the one to eventually replace it.


One of the problems they've mentioned, large binaries, has been tracked [1] for the past 7 years and is getting worse with every release.

Not to mention that compile times are still much longer than they were pre-1.5, when the compiler was written in C.

[1] https://github.com/golang/go/issues/6853


To be fair to Go on compile times: given that they're using C++ and Rust, compile time clearly wasn't a show stopper for them.


> Pro: The Fuchsia project has the opportunity to influence the evolution of the language. (RUST)

While this may be a Pro for Google, is it also a Pro for the users? Does this mean that Google would hold (if it doesn't already) a chair in some "foundation" and decide which features go into the language (and how they will be implemented)?

If this is the case, I don't like it very much... That would be a Pro for C, or for whatever language is out of their reach (as in, the Fuchsia project doesn't have the opportunity to influence the evolution of the language).


While the other commenters are correct that there's no Rust foundation, it is true that a member of the Fuchsia team is on the Rust language team, which does decide the direction of the language.

It was very much a pro for our users; like any open source project, we need contributors who are willing to help do the work. Fuchsia's experience with async/await was really important to validate that the design worked well; for example, many people think of it as a feature that's useful for web servers only, but Fuchsia demonstrated its validity in other contexts. Beyond semantics, it also helped with syntax; https://github.com/inejge/await-syntax was created during the debate about "prefix await," and was able to show us what a few variants of the syntax would look like in real-world code.
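
For readers unfamiliar with the feature, here is a minimal sketch of that "straight-line" style in Rust. It assumes the `futures` crate for the executor; on Fuchsia itself the platform's own async executor would be used, and the function names are purely illustrative:

    // Straight-line async code: reads top to bottom like synchronous code,
    // with no callback pyramid.
    use futures::executor::block_on;

    async fn read_config() -> String {
        // Stand-in for real asynchronous I/O (a FIDL call, a file read, ...).
        "timeout=30".to_string()
    }

    async fn start() {
        let config = read_config().await; // suspends without blocking a thread
        println!("loaded: {}", config);
    }

    fn main() {
        block_on(start());
    }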

> If this is the case, I don't like it very much... That would be a Pro for C, or whatever language out of their reach (as in The Fuchsia project doesn't have the opportunity to influence the evolution of the language).

C has a standards process that anyone can get involved in, following ISO rules. Google is a major player in the C++ standards process too.


Thank you for the explanation. I wasn't so wrong after all.

Perhaps it was the wording that made me think that way... "To influence". I would prefer it as you wrote it: to collaborate or to contribute.

Time will tell...


There is no Rust foundation. All of the design and development happens in the open on GitHub, a public Discourse instance, and a few chat programs (Zulip, Discord, Matrix). I believe the Fuchsia project was quite involved in the design of the async-await feature, along with the team behind `tokio` and others.


Interesting! Async-await is mentioned as a "pro" for Dart and Rust ("Asynchronous programs can be written using straight-line code"). Seems they really like async-await, and they had more success getting Rust to support it than they had with Go - which might have led to some Google-internal strife?


Sorry, I thought I read somewhere that a foundation might be started eventually.

Edit: here: https://news.ycombinator.com/item?id=22008887 but I think it's not official.


It’s really irritating to see platforms picking the programming languages that can be used for real applications: why should the Fuchsia developers decide that I can’t write my application in Go/Rust/Lisp/whatever? Just provide a sandbox and a platform spec and allow third parties to build whatever tools/languages make sense.


This isn't saying that you are not allowed to use some other language, just that you might be on your own wrt an SDK etc.

FTA: "Supported for end-developers means that the Fuchsia SDK contains tools and libraries that help people use the language to develop software for Fuchsia, including a language-specific backend (and supporting libraries) for FIDL. Support also implies some level of documentation, including tutorials and examples, as well as investment from developer relations."


So when you read something like:

> Go is not approved, with the following exceptions:

>>>> netstack. Migrating netstack to another language would require a significant investment. In the fullness of time, we should migrate netstack to an approved language.

> All other uses of Go in Fuchsia for production software on the target device must be migrated to an approved language.

Does that not imply that okay sure use whatever language you want, but production software (that exists in repositories?) must adhere to approved languages?

I could be misreading


> Does that not imply that okay sure use whatever language you want, but production software (that exists in repositories?) must adhere to approved languages?

> I could be misreading

To me, that reads: platform devs, i.e. people developing Fuchsia itself, must adhere to the approved language list. Which is not uncommon in projects of all sizes from small to large.

I think the wording is a bit weird because Google is expecting 3rd party device manufacturers to make modifications to the OS and expects them to adhere to the approved language list as well.


I don’t read it as requiring specific languages for end-user applications. I think I’m correct in assuming that the OS-level calls, bindings, and built-ins will only be provided with approved and supported APIs for the approved languages. I also think that’s what the GP comment was saying. I think the result will be language-specific shim layers maintained by language/implementation maintainers. But I’m just making a guess.

Edit: had I read the quote from a comment not far below before posting, I would not have made these assumptions. That quote does sound like applications will be language restricted.


> Does that not imply that okay sure use whatever language you want, but production software (that exists in repositories?) must adhere to approved languages?

> I could be misreading

That’s for “production software” (as opposed to tooling, I’d guess) in/of Fuchsia itself. They’re saying that aside from netstack and stuff that only runs on dev machines, everything that’s in Go in the Fuchsia repository must be migrated ASAP to an approved language.

For end-developers it’s the same status as e.g. Rust: you’ll be on your own with no support from the Fuchsia project.


This document is about the use of languages within the project itself.

Fuchsia, as a fundamental principle, supports "bring your own runtime" -- if you are a PIE ELF executable and can dynamically link against libzircon.so (the syscall ABI) and speak the platform's RPC protocols, you are a Fuchsia app.
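
As a concrete sketch of what that implies (the bindings here are hand-written assumptions; check the signatures against zircon/syscalls.h before relying on them), a Rust program that brings its own runtime only needs to reach libzircon:

    // Hand-declared bindings to two zircon vDSO calls; signatures are
    // assumptions to verify against the real Fuchsia headers.
    #[link(name = "zircon")]
    extern "C" {
        fn zx_deadline_after(nanoseconds: i64) -> i64;
        fn zx_nanosleep(deadline: i64) -> i32;
    }

    fn main() {
        // Any runtime that can issue these calls and speak the platform's
        // RPC protocols qualifies as a Fuchsia app.
        unsafe {
            let deadline = zx_deadline_after(1_000_000_000); // one second
            zx_nanosleep(deadline);
        }
    }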

The Fuchsia IDL compiler is designed to support third party backends for other languages beyond the core platform languages.

I've moved on to other things, but I'd be very surprised to learn this had changed.


> This document is about the use of languages within the project itself.

Only in part; they also specifically mention whether or not they will support use of each language by end-developers. (As you say, end-developers can use whatever they want, as long as their language has a C-ABI FFI, but that's not the same as being "supported".)


Isn't this just a list of the languages they're allowed to use when writing Fuchsia itself? I don't see how they would ban users from writing their apps in, say, Go or Clojure.


> This document describes which programming languages the Fuchsia project uses and supports for production software on the target device, both within the Fuchsia Platform Source Tree and for end-developers building for Fuchsia outside the Fuchsia Source Platform Tree. The policy does not apply to (a) developer tooling, either on target or host devices, or (b) software on the target device that is not executed in normal, end-user operation of the device.

As far as I can tell, this is supposed to be “the” languages that can execute on a Fuchsia-powered device.


Support means all APIs have first-class support from Google, just like Kotlin has first-class support on Android.


They have no means to enforce that. They're talking about what they provide support for, not what is possible.


If you control the App Store, you can enforce a policy like this. And the whole security model of Fuchsia seems great for enforcing something like this: use object capabilities to grant access to everything, and don’t provide a stable API for getting those capabilities in unsupported languages.


Unless you have developers submit source code, it's not really possible to determine what language a binary was written in (if the developer doesn't want you to know). For example, if there is a C API, I could write my code in Rust and, with only a little effort, have it compile to something which could also have been compiled from something written in C. Even for languages with a more substantial runtime, you could "transpile".
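
For instance, here is a hedged sketch of that: a Rust function exported behind a plain C ABI. Built as a cdylib, the resulting symbol is indistinguishable from one emitted by a C compiler (the function itself is just an illustration):

    // Exported with an unmangled name and the C calling convention.
    #[no_mangle]
    pub extern "C" fn checksum(data: *const u8, len: usize) -> u32 {
        // Same safety contract as C: the caller must supply a valid
        // pointer/length pair.
        let bytes = unsafe { std::slice::from_raw_parts(data, len) };
        bytes.iter().fold(0u32, |acc, &b| acc.wrapping_add(u32::from(b)))
    }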


The thing is, you don’t need to make it impossible, just difficult and officially say that it’s “unsupported” to make it a non-starter.


That’s sadly exactly what this means, for now at least: https://fuchsia.dev/fuchsia-src/concepts/api/council

“Supported”, however, is a different word than “allowed”.


They support C, so anything with an FFI to C can be used; it's just that Google will not provide tooling or support for it.


As a user and a developer, I have zero interest in Fuchsia. Between Android patents and Oracle lawsuits, the only problems Fuchsia solves are Google's problems.


Do you ever wonder why Android phones don't get updates to new Android releases? It's the kernel and all of the vendor-specific hacks that makers do to the kernels they ship. A microkernel could make a big improvement on that problem.


We should be encouraging vendors to release the source to their binary blobs instead of making it easier for them.


This has been the mantra since the first proprietary drivers for Linux. They still don't do it, so we can keep banging our heads against that wall, or try something else that would actually work.


Are there any major products/services where Rust is being used besides Firefox right now?


There are. Here is a copy of my notes on the subject:

https://gist.github.com/brson/9422a92791062ac52d9e08f0ba7d48...

Sorry it's not more organized and detailed.


Another recent big user is 1Password, both on the Windows desktop and in the browser with Wasm. Great notes!


Just a sample: Dropbox, Cloudflare, Canonical, Sentry, Atlassian, npm, IBM, Xero.

https://www.rust-lang.org/production

https://www.rust-lang.org/production/users


I know Discord is migrating a lot of their stuff from Go to Rust. Not sure how major you consider Discord.


The Mullvad VPN app is written in Rust.


A very small part of ChromeOS is written in Rust, mainly in the Linux app support, aka "crostini" (see https://chromium.googlesource.com/chromiumos/platform2/+/mas... for instance), as well as other bits.


I'm compiling a list of companies using Rust (https://github.com/omarabid/rust-companies). The next step is probably to rank companies by how much they are really using Rust and to link the products that use it.


Some: https://www.jonathanturner.org/2018/07/snapshot-of-rust-popu...

As far as "using it in a substantial portion of their product", dunno.


Hardly major, but our[0] entire stack except the frontend is Rust.

[0] https://dtmf.io/


Let's wait until they actually have devices and OEMs lined up, which may very well be never. I can't imagine e.g. Samsung being very excited about buying into this new Google walled garden, and this our-way-or-the-highway type of policy is not going to help. So that means they are either looking at maintaining both Fuchsia and Android long term, or at a hard fork of the Android ecosystem by one of several third parties. And I doubt Google is going to walk away from huge chunks of mobile device market share any time soon.

In any case, Kotlin is the obviously superior language compared to Dart, and one they already have a lively developer ecosystem for. It also has a native compiler that the aforementioned ecosystem already uses to avoid being stuck with Dart when writing iOS and Android libraries that can be used from a Flutter UI.

I don't see any major technical hurdles to also having bindings for Go, Rust, Swift, or other languages in the LLVM ecosystem. Obviously the whole thing has native bindings (Flutter basically transpiles to C++). Swift support would be a major enabler for bringing in iOS developers. Come to think of it, WASM and Flutter could be a nice marriage as well. And yes, I've heard the arguments why all this is not possible on multiple occasions from Flutter fanboys. IMHO that's a problem that needs fixing, not an inherent design limitation. It's a prioritization problem. It boils down to Google being stubborn and arrogant. It boils down to not-invented-here syndrome.


The con arguments for Dart and Go are quite similar. I cannot really derive the support-status decision from these arguments alone. To me this sounds more like it comes down to personal preference.


Elsewhere they have mentioned the UI is written in Flutter, which is completely Dart based. It seems as if this is what tipped the scale. Both are Google-developed languages, but one seems to be closer to the project.


They're used for pretty different things. Dart with Flutter is used for the UIs, which Go wouldn't really do well at; for system services, that's where the con list really comes into play.


I saw that too, but there is another bullet point about the implementation experience of trying to use Go to implement some system services. Sounds like they had a bad experience.


For Dart:

> Pro: People using the language are highly productive.

Ok you convinced me.


I’m not sure if you’re being sarcastic, but to management in enterprise organisations this is the single most important feature of a programming language because the most expensive resource you have is your programmers.


My interpretation of the comment is that it is criticizing the somewhat baseless claim. I find it hard to believe that even internally at Google the statement is considered an indisputable truth. I agree programmer productivity is a complex management challenge that everyone would be happy to have a silver bullet for.


The interesting bit is at the end: they really don't like Go, a language that was developed at the same company.


It's not that they don't like Go. Go isn't designed for projects like this in the first place. Dart and Go are much more biased than C/C++ and Rust, and, even worse, Go is biased towards building services.


> [...] they really don't like Go

Can you cite the part you are referring to? The only negative points they state are the memory usage and large binaries, which are really not desired on an embedded device but usually totally negligible on any dedicated cloud-computing hardware.


I am inferring this from the language they decided to use.

They know that the Go team are going to read this : "The Fuchsia Platform Source Tree has had negative implementation experience using Go...". They could have sugar-coated a lot more, but chose not to.


IIRC, the last official news I heard from Fuchsia was that it is an experiment to test OS features.

Is this still the official position?


I don't know if there's been any official change, but I think it's pretty obvious it's going to be used in some form or other. Unless the project ends up being an irredeemable failure (e.g., fundamental design decisions make it not performant enough, insecure, etc), there's no reason for them to just drop it. It solves too many of Google's problems with regard to Android and Chrome OS.


I agree; I always expected it to replace Android and Chrome OS with a single OS.


Basically... Dart and C++ only?


That is really far from an accurate summary of the document.


Pretty much; dart:ffi is used for C/C++ interop. Rust is mainly just memory-safe business logic.

Dart is a lovely language and with Flutter you can target any OS/device.


With the growing popularity of Python inside and outside of Google, any idea why it is not in the policy?


Is this doc no longer valid? https://fuchsia.dev/fuchsia-src/development/languages/new

It's worth noting the distinction between the Fuchsia API Surface (consumed by End-developers) and the Fuchsia System Interface here: https://fuchsia.dev/fuchsia-src/concepts/api/council#definit...


> Con: Rust is not a widely used language.

and then:

> Decision: Rust is not supported for end-developers.


I can't help but think that "highly productive" here is not a particularly well-scrutinised reason.


No stated support for Java and Kotlin? This is meaningless and shows that Fuchsia is run by vaporware people.


Really? There's no support for those on iOS either, and it's hardly vaporware.


Well, technically speaking there's Kotlin support https://play.kotlinlang.org/hands-on/Targeting%20iOS%20and%2...


[flagged]


Now the size of an ecosystem might be a valid reason to consider something vaporware, rather than an arbitrary choice of programming language.

Although if anything iOS proves you can build a billion dollar software ecosystem on a niche language: Objective-C.

Also, are you always this rude, or is it just internet comments that you disagree with? Not only does this site have guidelines, the people you are writing to are people, like you, not just pixels on a screen.


> Although if anything iOS proves you can build a billion dollar software ecosystem on a niche language: Objective-C.

Here the key information is that you can build a billion dollar software ecosystem for smartphones with any Turing-complete language IF you are first on the smartphone market. Such a situation will not happen again in human history.

> Also, are you always this rude or is it just internet comments that you disagree with?

Firstly, I must say that I'm pretty tired of low-intellectual-effort comments, especially when they increase or maintain a common erroneous or suboptimal opinion. As such, I consider them information pollution, and I want to protect people from it and from their cognitive biases.

Also, it requires a big chunk of wisdom, but if you really think about it, "Did your brain forget to take this little piece of information into account?" is not rude at all. The statement is true, and I didn't use the imperative mood but the conditional grammatical mood. Also, some truths hurt egos. But being hurt by information showing that our brain has limitations is irrational; we humans make thinking errors as a daily routine, myself included. Such information allows you and other readers to progress (iff they bypass their ego and the backfire effect). Personally, I exclude myself from my past brain and my statements; I visualize them as beings outside of myself, so when someone attacks (without fallacies) what I said, they didn't attack ME, they attacked only what I said: that is, an intellectual product that could have been malformed due to limitations such as lacking knowledge, cognitive biases, or logical fallacies.

My comment could show you there is a path to intellectual progress, and I could give you some useful links (such as lesswrong.com) if you're interested.

Thanks for reading, human not made of pixels.


With the ease with which Google kills anything that isn't Search, it wouldn't surprise me.


Oh it saddens me so much that Rust is not approved for kernel development, oh well.


Remember, Fuchsia is a microkernel, so a lot of the stuff that would be in the kernel in a monolithic kernel is in Rust here.


> Rust is not supported for end-developers.

It's unclear where this restriction comes from. Also, it is quite sad to see that they use and allow C for such a new and innovative project.


I believe it just means they don’t intend to provide a standard toolchain and SDK for Rust. That seems fairly reasonable given that Rust doesn’t have a stable ABI (https://github.com/rust-lang/rfcs/issues/600). Making sure Rust apps are forward-compatible with new OS releases could be difficult. In comparison, C has the best ABI story and can easily be used to enable other languages at the application level.

Apart from that, they seem positive about Rust and they’re willing to use it for internal code. Note that they discourage further use of C for internal code.

I was a little surprised not to see slow compilation times listed as a con, but I guess that’s a trade-off they’re explicitly willing to make for a kernel that runs fast.


Fuchsia exposes services via FIDL, so the lack of stable ABI isn't as big of a deal.


Not supported, but I expect that some enterprising individuals, if there's enough interest, will write some Rust bindings. They'll support a C ABI for end-developers, and Rust's C interop story is just fine.

And there's no telling about the future; the Fuchsia team is certainly allowed to change their minds later and add supported SDKs.


What kind of OS has "approved languages?"

Jeez, this is the sort of crap I'd expect from Google, but it's weird to see it formally declared.


All of them. Linux has one: C.

For clarity, this is talking mostly about what languages they will use when writing the OS itself (that’s what “approved” means in the article). They do also touch on what languages are supported (i.e., they’re building the infrastructure to allow it to happen) in userspace, but they’re not saying you can’t run other code on the OS.


I don’t know that your takeaway is correct. I was under the same assumption, that these specific languages were for OS code, but from the article:

“This document describes which programming languages the Fuchsia project uses and supports for production software on the target device, both within the Fuchsia Platform Source Tree and for end-developers building for Fuchsia outside the Fuchsia Source Platform Tree. The policy does not apply to (a) developer tooling, either on target or host devices, or (b) software on the target device that is not executed in normal, end-user operation of the device.”

This seems to say that software executed by end-users is only going to be ‘supported’ in the approved languages. As was stated earlier in a comment, there is a difference between ‘allowed’ and ‘supported’, but I don’t understand why they are taking such a general hard line about support if this is just kernel-specific.

I’m still going to maintain the assumption that the OS is only going to be natively usable in the approved languages and that all others will require individually maintained shim layers.


You're incorrect. This document is describing languages used to develop the platform itself (and a bit about priority of external support for languages). The Fuchsia platform was designed to be language agnostic, bring-your-own-runtime.

That was true at least as of when I last worked on it, and it has been a core principle from day one that I would be shocked to see abandoned.


> This seems to say that software executed by end-users is only going to be ‘supported’. As was stated earlier in a comment, there is a difference between ‘allowed’ and ‘supported’, but I don’t understand why they are taking such a general hard line about the support of it is just kernel specific.

Support for end-developers means provision of built-in APIs and affordances. They won’t stop you from using something else for your Fuchsia applications, but you should not expect much, if any, help getting it working.


A lot of Linux OSes define protocols and interfaces with matching implementations. The kernel has syscalls, the GUI has a socket protocol etc.

Sure, the individual components pick a language, but there's no single language that applications are "supported" in (except libraries like GTK, which you don't really need); that's how we get nice things like Tcl/Tk and Guile.


In Unix, C is THE API.


This list is about two different things: what languages are allowed within the project itself, and what languages are supported for end-developers.

I imagine that an end-developer could write their Fuchsia program in any language they want, but if it's not on this list, Google won't provide documentation or bindings for it.


All of the current mobile OSes. I also prefer seeing it formalized rather than hidden.


Those are models now??

I'll trade my laptop for an abacus before I trade it for a Pixel.


C:

>Programs written in the language often have security bugs arising from the language’s lack of memory safety.

How often?

“Have things changed now?: an empirical study of bug characteristics in modern open source software” suggests that 8.8-17.2% of security bugs are caused by memory bugs. How many of these can be caught by better testing?

I think the effects of memory bugs on security are often overstated.


> Speaking at the BlueHat security conference in Israel last week, Microsoft security engineer Matt Miller said that over the last 12 years, around 70 percent of all Microsoft patches were fixes for memory safety bugs.

https://www.zdnet.com/article/microsoft-70-percent-of-all-se...


It seems like the Windows ecosystem hasn’t benefited from the improvements made by the research community. For example, Valgrind doesn’t run on Windows. Cross-platform applications (like the ones studied in the referenced paper) don’t have nearly the same level of memory problems. I think that presenting the 70% figure as inherent to developing in C/C++ is misleading because of this. In fact, a brand-new project could probably reach zero (or very close to zero) memory bugs in C++ by following modern testing practices and using the variety of dynamic and static analyzers that exist today.


Microsoft has lots of state-of-the-art dynamic and static analysis tooling for Windows. You don't hear much about it because a lot of it is closed source. E.g. here's some info about some of the static annotations they use in the kernel: https://docs.microsoft.com/en-us/windows-hardware/drivers/de... The links in the sidebar point to a lot of other stuff. If you look at the publications of MSR's software researchers, many of whom are very good, you will see lots of papers about finding bugs in Windows, some of which have been productized.

> In fact, a brand new project could probably reach 0 (or very close to it) memory bugs in C++ by following modern testing practices and using the variety of dynamic and static analyzers that exist today.

A bold claim to offer without evidence. Unfortunately even the best organizations have so far failed to achieve this.


> Microsoft has lots of state-of-the-art dynamic and static analysis tooling for Windows.

Right. If you look at the linked article, the Microsoft Engineer claimed 70% of security bugs in Microsoft products are caused by memory errors. Does Microsoft apply the same tools to all their products or only Windows? Do these tools even exist for other products?

> A bold claim to offer without evidence.

If someone wrote a new C++ program tested with >75% code coverage, tested under Valgrind, passing Coverity checks and Clang static analysis, followed the best practices for hardening the host kernel, and then told me they still had an exploitable memory bug, I would be surprised. Notice that performing all those steps is still less effort than learning Rust and rebuilding the program in it. And you’d still have to harden your kernel and test anyway.

The evidence? NGINX and Linux are written in C. If the situation were so dire, why isn’t every computer in the world compromised right this second?


Look no further than the browser you're likely using: Google Chrome.

There are 91 code executions and 121 RCEs; details here: https://www.cvedetails.com/product/15031/Google-Chrome.html?...

And the project has some of the best testing and practices in the world: constant fuzzing, significant test coverage [0], and no doubt memory sanitizers, etc.

It's increasingly clear that large projects written in memory-unsafe languages will contain memory unsafety.

> The evidence? NGINX and Linux is written in C. If the situation was so dire, why isn’t every computer in the world compromised right this second?

Nice hyperbole. Check the stats [1].

[0] https://analysis.chromium.org/p/chromium/coverage

[1] https://www.cvedetails.com/product/47/Linux-Linux-Kernel.htm...


>Nice hyperbole

Not hyperbole. Most of these bugs are never known to be exploited by attackers.

>Check the stats

In your first link, there was one memory corruption vulnerability in Chrome last year. If we're looking at RCEs, CVE-2019-5762 and CVE-2019-5756 appear to have the same root cause (a memory bug), and CVE-2018-6118, CVE-2018-6111, and CVE-2017-15401 (which is also the memory corruption vulnerability) are also memory bugs. So it looks like Chrome had ~4 serious memory vulnerabilities last year.

Don't have time to dig right now, but it appears similar observations hold for [1].


> Most of these bugs are never known to be exploited by attackers.

You have moved the goalposts. Of course there are lots of reasons why a bug might not be exploited by attackers, e.g. "the attackers exploited some other bug" or "no-one uses that software". That is not reassuring.

> In your first link, there was one memory corruption vulnerability in Chrome last year.

I don't know how you determined that, but it's just wrong. https://www.cvedetails.com/vulnerability-list/vendor_id-1224... Bugs 2, 3, 4, 8, 9, 10, 14 and 15 are obviously memory safety vulnerabilities. Many of the others probably are too, if you dig into them.


Or that the exploit is so difficult it is practically impossible to attack.

>but it’s just wrong

Who’s moving the goal posts now? The parent was talking about vulnerabilities, not bugs.


> Or that the exploit is so difficult it is practically impossible to attack.

"That bug is so difficult to exploit, it is practically impossible to use in an attack" does not have a good track record in the face of determined and ingenious attackers. Worse, once the attackers figure out how to overcome the difficulties, that knowledge spreads and is often packaged into kits that make it easier for the next bug.

> The parent was talking about vulnerabilities, not bugs.

I have no idea what you're talking about. Bugs 2, 3, 4, 8, 9, 10, 14 and 15 in that list are serious memory safety vulnerabilities that were found in Chrome last year, contrary to your assertion that Chrome only had four last year.


> If you look at the linked article, the Microsoft Engineer claimed 70% of security bugs in Microsoft products are caused by memory errors. Does Microsoft apply the same tools to all their products or only Windows? Do these tools even exist for other products?

They recently released an AddressSanitizer port for MSVC, and they've had Valgrind-like functionality for Windows userspace for over a decade (see https://www.usenix.org/legacy/events/vee06/full_papers/p154-...), but I don't know of any public source describing what tools they use across their product range, so I don't know. They're well resourced, well motivated, and not stupid, so it would be surprising if they don't use the technology available.

I know that highly capable organizations, e.g. the Chrome and Firefox teams, do use state-of-the-art tools and practices in their browsers and get similar results to the Microsoft 70% number.

> I would be surprised

Check out Firefox and Chrome, for example, and be surprised.

> learning Rust

This isn't about Rust, but FWIW learning Rust doesn't seem so bad when you compare it to just the learning required to keep up with the ever-growing complexity of C++. (See e.g. Scott Meyers refusing to handle errata for his books because his C++ knowledge is obsolete after a few years out of the game.) Not to mention learning how to use and deploy in CI all the static and dynamic analysis tools you need to keep your C++ code safe(-ish).

> NGINX and Linux is written in C

nginx is better than most but had a serious memory safety vulnerability reported as recently as 2018: http://mailman.nginx.org/pipermail/nginx-announce/2018/00022.... The Linux kernel, of course, has lots.

> If the situation was so dire, why isn’t every computer in the world compromised right this second?

Exploitation mitigations, patching, and ecosystem effects. But the situation is pretty dire.


> I know that highly capable organizations, e.g. the Chrome and Firefox teams, do use state-of-the-art tools and practices in their browsers and get similar results to the Microsoft 70% number.

Unfortunately, the thread's grown too long and it's starting to get difficult to track references and arguments. The paper "Have things changed now? An empirical study of bug characteristics in modern open source software" specifically studies Firefox and finds nowhere near the 70% number (18%).


You're citing a paper from 2006. I'm not even going to read it.

As a former Mozilla distinguished engineer (left Mozilla in 2016), I assure you memory safety bugs are the majority of exploitable Firefox security bugs.


You mean like Linux?

https://www.youtube.com/playlist?list=PLbzoR-pLrL6rF8E5yyknJ...

Plenty of tasty material on how the Linux ecosystem has benefited from the improvements made by the research community.


If I could eliminate 17% of the bugs in a future code base by choosing to use a different language, I would need a strong reason not to.
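
For concreteness, the trade being discussed: in C an out-of-bounds access is silent undefined behavior, while a memory-safe language turns the same mistake into a checked condition. A minimal Rust sketch:

    fn main() {
        let buf = vec![0u8; 16];
        let idx = 32; // imagine this index came from untrusted input

        // The bounds check turns a would-be memory corruption into a
        // recoverable error.
        match buf.get(idx) {
            Some(byte) => println!("byte: {}", byte),
            None => eprintln!("rejected out-of-bounds index {}", idx),
        }
    }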


The strong reason is given in the linked article: C is better supported, more stable, has more developers, and is better understood. Meanwhile, many of these bugs could be caught by testing, which you should be doing anyway. You can also survive the effects of the bugs with other tools like StackGuard at extremely low overhead, while still keeping the vast benefits of C described in the original article.


We have been hearing that mantra for 40 years now; thankfully, companies are finally getting wiser.


Even if it were 17%, that ignores impact. The impact of a bug in C is very often, at best, taking the entire service down, and at worst, full RCE.

I also think that a percentage is a weak indicator in general. With bug bounties we're seeing a massive influx of vulnerability reports for exposed APIs - this is almost always XSS and CSRF. I am not discounting the impact of those vulns, only saying that percentages are very market driven.


>I think the effects of memory bugs on security are often overstated.

How often?



