Once I've gotten a better grasp on C, I plan on moving up to a language with a few more conveniences for day-to-day programming. It's between D, Rust, and C++. I feel like the safe bet is C++, but since I don't plan on getting hired to do it, Rust and D are also in contention.
I like the idea of using D with a garbage collector and having something like Go, and then turning the GC off when I want to do something more performance-oriented.
How is the performance of D compared to other garbage collected languages when the garbage collector is turned on?
The GC could be improved by generating extra code when pointer accesses are made, so the GC can keep track of writes to GC-allocated memory (a write barrier). This makes the GC faster and more effective, at the expense of slower generated code.
It is a worthwhile tradeoff for fully GC languages like Java. But it isn't worthwhile for D, which uses the GC here and there along with a lot of non-GC pointers and allocations. D trades things off in the other direction: faster generated code, at the cost of a slower GC.
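To illustrate, a write barrier amounts to something like this (a hypothetical sketch; D deliberately does not emit it, and the card size and names here are made up):

    __gshared bool[1 << 20] cardTable; // toy card table: 1 dirty bit per 512-byte card

    // every store of a pointer into GC memory would compile into roughly:
    void gcStore(void** slot, void* value)
    {
        *slot = value;  // the actual pointer store
        cardTable[((cast(size_t) slot) >> 9) & (cardTable.length - 1)] = true;
    }

The collector can then rescan only the dirty cards instead of the whole heap, which is where the speedup comes from, paid for by those extra instructions on every store.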
The more pointers you need to scan, the slower it becomes, and it'll stop all threads.
It's fine for a compiler or a CLI tool, but it's very bad for anything that runs forever and needs to scale (servers and games, especially networked games where the world and the number of entities aren't fixed). You then need to think about managing your memory and working around the GC a lot, and at that point you ask yourself: why use a GC at all?
D needs to invest more into non-GC stuff and making the GC truly optional: make it possible to link the GC away if you don't need it, and make the runtime less dependent on the GC (the ability to statically make GC usages such as 'new' assert, reworked associative arrays, and a cleanup of the runtime so it doesn't require the GC, threads, sockets, and so on).
I use D every day, and I know how to deal with that kind of issue, but new people don't, and they might be turned off the day they need to work around GC issues.
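For what it's worth, @nogc already gives a compile-time version of that assert today; what's missing is a runtime that is clean under it. A minimal sketch of what works now:

    @nogc nothrow void hotPath()
    {
        import core.stdc.stdlib : malloc, free;
        auto p = cast(int*) malloc(64 * int.sizeof);
        scope(exit) free(p);
        // auto a = new int[64];  // compile error: can't use 'new' in @nogc code
    }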
> D, which uses GC here and there and a lot of non-GC pointers and allocations
Won't this depend on the programming style used? If the programmer makes heavy use of functional programming, or for that matter Java-style programming, wouldn't that result in a lot of object churn through the GC?
Java does not have stack-allocated objects like struct instances. They get allocated via the GC (though sometimes the Java optimizer can stack-allocate them via escape analysis).
Personally, I make very heavy use of stack allocated objects, even for temporary buffers. Stack allocated objects cost nothing to allocate, nothing to free, and (being on the stack) are already in the hot memory cache.
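A minimal illustration:

    struct Point { double x, y; }

    void work()
    {
        Point p = Point(3, 4);  // on the stack: no allocation, no free, hot in cache
        char[256] buf;          // temporary buffer, also on the stack
        // both vanish for free when work() returns
    }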
Functional code as usually written in D doesn't have this problem because people are aware of it: most compositions are extremely lazy, and allocations are trivial to group together thanks to the allocators in the standard library.
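For instance, a typical composition like this (a sketch) allocates nothing until you actually consume it:

    void demo()
    {
        import std.algorithm : filter, map;
        import std.range : iota, take;

        auto r = iota(1_000_000)           // lazy: no million-element array
                 .filter!(n => n % 2 == 0)
                 .map!(n => n * n)
                 .take(10);                // still lazy: zero GC allocations so far
        foreach (x; r) {}                  // elements are computed on demand here
    }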
To make it effective, you'd only want it to apply to GC pointers. But the type system does not distinguish between GC pointers and other pointers, making such a mode a rather complex implementation problem.
If you have a struct on the stack and call member functions, you've got pointers. If you have an array on the stack and take a slice of it or loop through it with pointers, you've got pointers. If you pass a stack variable by ref to another function, there's a pointer.
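Concretely, a quick sketch of how ordinary code is full of them:

    struct S { void method() {} }

    void bump(ref int v) { ++v; }

    void example()
    {
        int[16] arr;            // stack array
        int[] s = arr[1 .. 5];  // slice: a (pointer, length) pair into the stack
        S t;
        t.method();             // hidden 'this' pointer to the stack
        int x;
        bump(x);                // 'ref' is a pointer under the hood
    }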
The idea of having a language that can work with or without a garbage collector in different parts of the code seems completely impossible to manage to me. How can this even work in practice? Do you have two copies of the stdlib? Two copies of every library?
I'm pretty sure the D guys figured it out, but I'm really curious to see what the end result looks like in the real world, once it reaches the dirty hands of end-product developers :))
In extreme cases this can get tricky, and you then need a mental model of what is traceable and which GC "roots" to add: if something is not GC-reachable, it gets collected.
But in most cases, most people will likely just need custom allocators, or plain malloc. In D you can progressively reduce the garbage you generate, down to arbitrarily little.
Mixed allocation strategies are still nice because whatever is in GC memory doesn't need an owner (it has a global owner). Nowadays the D GC is really unlikely to be a problem, whatever your performance requirements; it isn't a problem for the corporate users in the parent link. Besides, GC performance has improved over the last few years.
> The idea of having a language that can work with or without a garbage collector in different part of a code ... How can this even work in practice ?
"Pluggable" GC-based arenas could make it work quite well. People routinely use ECS (entity-component systems) frameworks in GC-less languages like Rust that could mesh quite well with GC.
You do need some extra glue to harmonize the GC-based and GC-less paradigms (such as adding new "GC roots" for any GC'd objects that are being kept "alive" via links from non-GC ones, and demoting them when the references are dropped) but it's quite doable. Rust will probably get facilities for this once "local allocators" are added to the language in a reasonably stable form.
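In D that kind of glue already exists in core.memory; a sketch (registerWithCSide is a made-up non-GC-side consumer):

    import core.memory : GC;

    extern(C) void registerWithCSide(void* ctx); // hypothetical foreign API

    void handOff(Object obj)
    {
        GC.addRoot(cast(void*) obj);    // keep obj alive while only the non-GC side holds it
        registerWithCSide(cast(void*) obj);
    }

    void handBack(Object obj)
    {
        GC.removeRoot(cast(void*) obj); // demote once that reference is dropped
    }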
How does that work? As I understand it the D GC scans the stack and the GC heap memory for pointers (or values looking like pointers) to determine which objects are reachable. But can the GC scan memory allocated by malloc? How does it know whether a value allocated with malloc has since been freed?
The GC is precise where it matters, but it has no interest in memory you allocate yourself (exceptions to this rule exist, but they're fairly obvious). You almost never need to worry about it.
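For instance, if a malloc'd block is going to hold pointers to GC objects, you tell the GC to scan it. A minimal sketch:

    import core.memory : GC;
    import core.stdc.stdlib : malloc, free;

    void demo()
    {
        enum n = 8;
        auto block = cast(void**) malloc(n * (void*).sizeof);
        GC.addRange(block, n * (void*).sizeof); // GC now scans this block for pointers
        block[0] = cast(void*) new Object;      // safe: the object stays reachable
        // ...
        GC.removeRange(block);
        free(block);
    }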
So you don't want to store pointers to GC objects in memory you allocated yourself, or they may get freed while you still hold them if there are no other pointers? Seems like quite the footgun.
You do have to be legitimately careful with that. I've tripped myself up before while sending objects through a self-pipe. The D GC only runs during an allocation call, so with a bit of care you can defer collections, but here I allocated a new object while another one was still sitting in the kernel's pipe buffer... and the GC assumed it was dead, since it can't see that buffer, and reused its memory for the new object. It took me a while to realize what was going on.
But on the other hand, I write a lot of D code and keep coming back to that same story, because this sort of thing isn't terribly common and is easy to manage once you know how. If you mix techniques without knowing about the potential issue, though, you can run into trouble.
Right, but if you do that, is there any advantage over just allocating that memory directly from the GC instead of calling malloc and then registering the memory?
Rust remains one of the most loved and least used languages in the SO surveys, IIRC. Ignoring the meme aspect of that trend, the job market is still swamped by C++ shops.
Realistically, programming languages take time to be adopted culturally, but they aren't that difficult to pick up.
I just got hired for more than my parents earn off the back of an email to the right people, and I'm 20. D is small, but we have a lot of money per capita as far as I can tell.
Oh, that is more than enough, thank you! What I am curious about is whether I could get a job there without a formal education. Here in Eastern Europe they still want a CS degree in many places; showing projects that are on, say, GitHub is not enough. You simply do not even get to the interviewing stage.
Proving you are worth talking to is the hard bit, and even then the recruiters don't know waffle from insight. It's a fact of life, I think: we are the people who really care about technology, and the people we work for usually don't.
If you are a student: we (the D Foundation) regularly fund students to work on projects for us.
There's more money than you'd think in small communities - don't badger people, but don't be afraid to ask.
I find coding fun, and I have been coding since I was 10. I did not go to university because I had family problems and financial problems, along with mental health problems (depression, anxiety, panic attacks and the like). They usually do not care about this, though.
I wish they would at least check out my projects and let me talk about them or something. I would probably work harder and better than some of their employees. I am currently working on a game, and sometimes I do not sleep for 2-3 days. I am pretty dedicated when I am interested.
C++ is much closer to something like OCaml than to C or Java.
The "complexity" is not accidental, and it only seems complex because people are not used to languages with static typing systems. C++ is not complex if you mastered something like the Haskell type system.
(C and Java aren't really statically typed; they rely on run-time typing for polymorphism.)
C++ is not complex? The C++ complexity is not accidental? C++ is like Haskell or OCaml?
Dude, C++ can still compile C code, right? And how are C++ templates anything like Haskell or OCaml? Aren't they just some fancy text-based code generator thing, at the end of the day?
I'm really curious about this as everything I've seen so far contradicts your perspective.
Or are you talking just about modern C++ (post C++ 11)? That doesn't work as C++ of all kinds is out there in the wild and you'll run into it, at least from external dependencies.
> C++ is not complex? The C++ complexity is not accidental? C++ is like Haskell or OCaml?
C++ doesn't compete with Java or Go; C++ competes in the "polymorphism via static typing" niche along with Haskell, OCaml, etc.
If you look at what Haskell et al. need to do to get static polymorphism right, then C++ doesn't look all that complex or strange.
> Dude, C++ can still compile C code, right?
No, wrong.
> And how are C++ templates anything like Haskell or OCaml?
C++ templates are a purely functional Lisp-like DSL.
> Aren't they just some fancy text-based code generator thing, at the end of the day?
So is Haskell.
> I'm really curious about this as everything I've seen so far contradicts your perspective.
You just don't know C++. People think they do because they learned a bit of C back in college, but they don't.
> Or are you talking just about modern C++ (post C++ 11)?
C++11 was a decade ago; hardly "modern".
> That doesn't work as C++ of all kinds is out there in the wild and you'll run into it, at least from external dependencies.
There's lots of legacy crap out there in the wild, including legacy COBOL and legacy Haskell. I fail to see your point; "modern" C++ isn't some wild departure from the original idea of C++. Ever since the STL it has been on a pretty specific and well-defined trajectory.
The performance is going to be exactly the same for almost all code. D is spectacularly good at turning performance up to 11 - inline assembly is a first-class thing in D.
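For example, with DMD's built-in inline assembler (an x86-64 sketch; a function ending in an asm block returns whatever is left in RAX):

    ulong rdtscNow()
    {
        asm
        {
            rdtsc;        // CPU cycle counter into EDX:EAX
            shl RDX, 32;
            or RAX, RDX;  // combined 64-bit value left in RAX, the return register
        }
    }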
D also makes generating code at compile time extremely easy, which is a performance win at the expense of some compile time (no free lunch), but once again you can slide that scale easily, because the code that does the generating is the same code you'd write at runtime.
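A small taste of that: a CRC-32 lookup table computed entirely at compile time, using the same code you could run at runtime (a sketch):

    // evaluated via CTFE: the table is baked into the binary, zero runtime cost
    enum uint[256] crcTable = () {
        uint[256] t;
        foreach (i, ref e; t)
        {
            uint c = cast(uint) i;
            foreach (_; 0 .. 8)
                c = (c & 1) ? 0xEDB88320 ^ (c >> 1) : (c >> 1);
            e = c;
        }
        return t;
    }();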
There are two real backends one can use for these kinds of comparisons (GCC and LLVM), and only one of them (LLVM) for Rust and friends.
Actually, there are 3 D backends. There's the original Digital Mars C++ backend for DMD, the GNU Compiler Collection backend for GDC, and the LLVM backend for LDC.
Each has its strengths and weaknesses, and some users even switch back and forth depending on what they need at the moment.
I was keeping it simple: the DM backend cannot compile (say) Rust, whereas LLVM (and, sort of, GCC) has frontends for both languages, so you can make a direct comparison.
I myself am not in the "move to LDC" camp; I like having DMD available for its compilation speed.
Take a look at V. It's like the love child of Rust and Go. It's extremely fast (both compiling and executing), has a simple syntax with "one way to do things", has sum types and option/result types with enforced error checking, and it even has a REPL.
Oh, and not to mention it has C interoperability and a native cross-platform GUI library.
There are a few more choices in this space--someone mentioned Zig; there's also Nim, OCaml, Go, and even a super-old language that has a modern and well-maintained variant: Object Pascal. Check out https://castle-engine.io/modern_pascal_introduction.html
Judging from the GitHub links posted, this list looks fairly out of date. Many of these organizations no longer exist or have switched away from using D, and heck, one of them isn't an organization but just a guy's solo project.
I will get round to updating it at some point, but honestly I don't think anyone in the community cares all that much. D has made it this far basically without paid staff until extremely recently, i.e. we don't have a tech company as a sugar daddy.
People can be very generous in their donations, but consider that Google probably spends $10M a year on Go. Of D and Go, regardless of which one you prefer, that gap should make you think.
So what's the five or ten year story here? It's hard to look at D as a serious systems language option when it seems like the handful of people who are actually invested in its development have no interest in expanding the community and ecosystem. With so few people involved, there's too much risk that the capricious desires of a single developer or small faction will make an unexpected change or end support.
D has literally just gained 3 new staff paid for by the community (including myself).
And a single developer can't change the language. There are millions and millions of lines of D in production, and we have test cases and bug tracking going back a decade or two now - you can't break stuff by yourself.
Cool to see ArabiaWeather using D, especially with a link to the talk. As someone from that region (the Levant), I rarely see organizations in the Arab world discussing tech, simply because they usually outsource most of the work to nearby tech hubs like India.
At this point I'm not sure why you would use D when you can use C#. Maybe if you're doing really close-to-the-metal stuff, but I'm not sure why for anything else.
I'm obviously biased as I work for the D Foundation, but I tried C# for a while - it was fairly nice to use, but the lack of const-correctness and purity where I wanted them made me badly miss D.
D is not a simple language, but the solutions it chooses are usually very simple. For example, D has unit testing built into the language; this is maybe a day's work in most compilers, and yet hardly any others do it.
Built-in unit testing sounds like a dumb feature, but it really is transformative in how one writes code. It's so dang convenient to write unit tests that it's hard to justify not doing it.
It's the same with the built-in documentation generator.
Where it really shines is with templates. I can write a unit test inside a templated class or struct and it gets instantiated alongside it, so I know my tests pass for every template instance that I use.
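Roughly like this (a sketch):

    struct Stack(T)
    {
        private T[] data;
        void push(T v) { data ~= v; }
        T pop() { auto v = data[$ - 1]; data.length -= 1; return v; }
        bool empty() const { return data.length == 0; }

        // with -unittest, this runs for every Stack!T the program instantiates
        unittest
        {
            Stack s;        // 'Stack' here means the current instantiation
            s.push(T.init);
            assert(!s.empty);
            cast(void) s.pop();
            assert(s.empty);
        }
    }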
The documentation generator has recently acquired the ability to recognize Markdown, making it even easier. I was initially skeptical of Markdown, but after using it here and there I became a fan.
C# is a pretty good language and for some tasks, probably a better choice. But D is very versatile, so once you get into it, you can expand your use into all kinds of different areas. It can do your regular application or line of business support code reasonably well, but it can then also expand out to new platforms, new paradigms, and tie together niches under one language.
As an individual programmer, it is pretty cool being able to write web apps and homebrew video games with the same language. For a company, you probably aren't going to be that varied, but it can still be good to dive into some special optimizations as needed.
Many of the organizations on this list use D for some combination of its strengths: they have two areas of interest that need to co-exist, and D lets that happen with minimal friction.
Not only that one; there are others to choose from. And .NET Native is reliable enough to build a whole business on top of: the Windows Store, the upcoming Windows 10X, and the late Windows Phone definitely did not fail because of it.
Don't get me wrong, I'm sure native C# is plenty fast and within an acceptable speed range. If it were me, though, native code would be a last-resort strategy.
But perusing https://benchmarksgame-team.pages.debian.net/ it still seems C# takes much more memory? It's strange, as such benchmarks don't show much difference between .NET and .NET AOT.
"Applications that have small amounts of code will likely not experience a significant improvement from enabling ReadyToRun, as the .NET runtime libraries have already been precompiled with ReadyToRun."
You can do that now. Technically the binary contains an embedded runtime, but is that really much different?
Full AOT is possibly coming with the next version of .NET, mainly aimed at non-JIT-capable targets such as WASM and iPhone. Currently those targets only work with Mono, due to the lack of AOT in .NET 5.
As I mentioned it is mostly just packing a runtime with the binary.
But for most use cases, does it matter? It won't work well for microcontrollers, but for most other things a 60MB binary is not a big deal. It takes under 3 seconds to download on an average internet connection.
The main reason I wouldn't use it for system programming is lack of good Linux libraries.
It's an insignificant cost. Nobody cares about it at any scale except embedded development (or occasionally due to artificial restrictions created by app stores).
The money saved by shaving a few MB off your binary is always going to be something like 0.00001% of your costs, and even if you're at such a massive scale that it could justify developer investment, there will be lower-hanging fruit to worry about.
If you're at such a level of optimisation that it's correct to worry about deploying a 60MB binary, you must be doing exceptionally well. Most places (including Amazon and Google) are using 200MB+ docker images and serving poorly optimised image files.
And if it _really_ bothers you, you can always install the runtime separately for a sub-1MB binary, or use trimming for a sub-10MB binary.
And it tells a lot about the mentality of the developers/company.
60MB: now a CDN server needs at least 60MB of storage to cache your file.
Compare this to a developer who cares and will produce the same thing with a native language in 6MB; that's 10x less space needed.
It'll cost the CDN 10x less money to store and serve, since they'll need 10x less storage and less bandwidth too.
For you, a single dev, you apparently don't care, but it then hurts everyone, including the global warming issue.
And in the end it hurts me: now my 128GB SSD isn't enough to store all the Electron shit, and I need to upgrade to install a new update.
The CPU and memory also take longer to load and cache your whole program.
It is the reason why software sucks despite hardware getting better: nobody cares anymore.
And it all starts at the developer's computer. If you don't care, nothing will improve, and every bit matters, whether you like it or not; that's how computers work.
And I don't know why there is this culture in "IT" of not wanting to acknowledge how computers work, and how bloat affects performance, efficiency, cost and global warming.
I came to the conclusion that clueless people managed to get into high positions and wanted to secure their position by hiring the same kind of people: the people who don't give a damn about performance problems.
They'll just waste (again, the word waste) more funds from the company.
After all, if nobody knows how computers work, they'll just agree and sign the big invoice without asking themselves: wait, don't we pay too much to deliver this web page, do we really need it to be 260MB?
Indeed they will. Gaming compiler benchmarks has been going on since the early 1980s.
Datalight C was the first C compiler for DOS to do data flow optimizations. Since the benchmarks of the time simply did some operations and ignored the result, the data flow optimizations would delete the benchmark code.
I was accused of gaming and cheating on the benchmarks. After they published, of course. Sigh. It wasn't long before other compilers did DFA, and the benchmarks (of course) were changed.
To get access to a wealth of libraries, support, documentation and tools that D is sorely lacking in. To be able to use a state-of-the-art garbage collector that provides some degree of guarantee about runtime performance, as opposed to D's very poor garbage collector, which provides basically no guarantees and is known to sometimes block an application for upwards of dozens of seconds at a time [1].
And most importantly, to be able to use a platform that has pretty strong guarantees about backwards compatibility and can run code from 10 years ago without modification, as opposed to D, where even a minor version update can end up breaking D's own standard library.
Certainly D has a niche of users who are very passionate about it, and for those guys, all the power to you; don't bother listening to me. But for most of us, after 20 years it really hasn't established itself as a reliable language to use beyond that niche. It's no longer 2005, a time when C++ was stagnating and there was a lot of opportunity for a new programming language to sit somewhere between Java and C. Instead of being released with a roadmap, some promising libraries and frameworks that establish a clear use case where D could shine, and a license that allowed broader community involvement in the implementation of the language and its standard library, it was released as a proprietary compiler. That quickly resulted in a very fragmented community with a lot of infighting, and in many missteps and much mismanagement on the part of D's leadership, the result of which is that D lost on the order of 5-10 years' worth of progress for nothing.
It's now 2021. C++ has advanced significantly since those days, and many new languages such as Go and Rust have taken the steam out of D and are progressing far more rapidly... and D, which at one point in time was actually innovating on language design, is now constantly trying to play catch-up with other languages and implementing half-assed features. The language is often referred to as a kitchen sink of functionality, and I kid you not when I say D is literally in the process of adding a quasi-borrow-checker to the language just so it can keep up with Rust.
> To get access to a wealth of libraries, support, documentation and tools, that D is sorely lacking in.
Libraries have never been an issue for me. You get all C libraries trivially, so to the extent that C has libraries, so does D. I also embed R inside my D programs, so I get access to every R library. And if you want to object that "R is slow": I'm not talking about only calling R, I'm also talking about calling R's bindings to C++ libraries without having to involve R itself. I suppose there would be some marginal benefit to having everything written in D, but many people seem more than happy to use languages like Python that call into functions written in other languages.
> And most importantly, to be able to use a platform that has pretty strong guarantees about backwards compatibility and that is able to run code from 10 years ago without modification, as opposed to D where even a minor version update can end up breaking D's own standard library.
Breaking changes are not that frequent. You're right, though: if you have a zero-tolerance policy with respect to breaking changes, you can't use D or any other language that has them.
> it was released as a proprietary compiler that ended up quickly resulting a very fragmented community with a lot of infighting, many missteps and mismanagement made on the part of D's leadership the result of which is that D lost on the order of 5-10 years worth of progress for nothing.
You're referring to things that happened before the pyramids of Egypt were built. Why would someone who has no interest in the language spend time writing comments about ancient, ancient history on Hacker News?
This comment is well written and pretty informative. The fact that it's downvoted to gray text shows how much of an over-opinionated echo chamber this site has become.
I think it misses the fact that D enables patterns that are fundamentally not possible in other languages, and that it represents a slightly odd fusion of two extremely intelligent people with almost totally different approaches to software. The result, D, lets you write C++ but also Pythonic code. C++ has definitely caught up, but it still lags massively behind in fairly fundamental things that D stakes its claims on.
For example: D has Ada-style contracts. These are extremely simple, and so blindingly obvious it's criminal they're so uncommon.
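For instance (a minimal sketch, using the newer expression syntax for contracts; the long casts just dodge integer overflow):

    int isqrt(int n)
    in (n >= 0, "isqrt: negative input")
    out (r; cast(long) r * r <= n && cast(long) (r + 1) * (r + 1) > n)
    {
        int r = 0;
        while (cast(long) (r + 1) * (r + 1) <= n) ++r;
        return r;
    }

The in/out clauses are checked unless you compile with -release, and they document the function's obligations right in its signature.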
I also don't like the tone of the last paragraph. Should we be trying new things or not? Named arguments have just been approved, we're hashing out string interpolation as I write this, memory safety is an ambition but we can't rush it, and so on.
These sorts of doom predictions have existed since the beginning of D, and they haven't stopped it from gaining users. It's basically FUD. If you look at the thread link, basically everyone there is happy using D, so maybe contrast this well-written criticism with reality...
> To get access to a wealth of libraries, support, documentation and tools, that D is sorely lacking in.
That's certainly a big factor in the network effect. Although I have to say I'd probably rate this higher for almost any other popular language than for C# and Java, as with the latter two it's quite often the case that your corporate overlords won't just let you use any library willy-nilly.
Then again, if you're in such a situation, it's quite likely that you don't have a free choice of your language, either. (Unless in weird corner case situations where you could deploy a binary but not depend on a current C#/Java runtime, and your company wouldn't shell out for an AOT C#/Java compiler just for this lone project.)
I do miss Design-by-Contract in every language that doesn't have it, though. And using D might be slightly more likely than using Eiffel…
An IDE that works amazingly well, autocompletes everything, and can refactor everything across many files (D will never get there because it makes heavy use of code generation).
A much more stable and battle-tested compiler and surrounding ecosystem. A wealth of libraries for every task, from game to web development.
A lot of work opportunities in C#, not as many in D.
Because of its popularity, it's easier to attract other developers to your project.
At best it is a Java clone that tries to eat Node.js's cake at the same time, but is then eaten alive by Go.
C# is plagued with enterprise bullshit (OOP everywhere, dependency injection, factory bullshit, one-class-per-file bullshit, etc. etc. etc.).
Worse, it only has a JIT, with no AOT native compilation (there is one AOT compiler, but AFAIK Microsoft is focusing on the JIT), so it can't do systems/native development at all.
This is a pretty silly point to make (D also has the other things). D does this in the language, and it's specified as such: not at runtime, not with compiler extensions, but with actual D code.
And referring to web benchmarks is a poor measure of the language, because they are really measuring web framework performance. Weka.IO makes what is apparently the world's fastest file system in D, so there's that.
You can't compare C# AOT solutions with what D or any other native language has to offer.
C# as a language does not work great in an AOT environment, due to the managed and runtime features of the language.
That is the reason they are unable to strip more stuff from the executable; it's a mess.
So having AOT compilation doesn't mean it's automatically good. When it's a bad experience, you have to say it out loud or nothing will improve.
Everyone can add AOT compilation as a bullet point just to pretend the language is capable, because it's obviously not. Prove me wrong if you can, but I doubt you can; I have tested, and the results are not good at all.
Sure it matters. Where are the iOS, PS5, Xbox, and Switch applications and games written in high-performance D?
Unity alone accounts for more than 50% of Switch games being sold, on a console that has already outperformed DS sales.
I bet Unity's IL2CPP not being as good as D isn't that much of a concern to their management, given the ongoing porting efforts from C++ into HPC# and the Burst compiler.
C# has metaprogramming through generics, the Roslyn compiler API, and source generators.
C# supports limited manual memory management through unsafe code and manual allocation methods, but this is primarily aimed at C/C++ interop. Newer performance-oriented features like stackalloc and Span/Memory give most of the benefits of manual memory management without the dangers.
Performance of .NET on Linux is on average better than Java and Go.
Mono has an AOT implementation, and Microsoft is expected to release one soon.
C#'s usual coding style may not be to your preference, but it has become a lot less "enterprisey" in recent years. XML is nowhere to be seen any more, for example.
But yes, indeed, it's not a very good language if you want to directly manipulate registers and hardware addresses.
.NET performance is good once the JIT is fully warmed up, which never happens for most use cases, only in raw and long-running benchmarks.
So no, in real performance scenarios .NET doesn't do better than Go, which is AOT compiled.
Mono performance is actually very bad.
Generics aren't playing on the same level as metaprogramming in D; they're not even close to templates in C++, and even Rust does better here.
Using the Roslyn compiler API is nice, but again, it is not the same as using the language itself; the experience is different. It is, on the other hand, much better than what Go has to offer.