
An OS in C++?

But Linus said...

http://harmful.cat-v.org/software/c++/linus



As others have stated, that is just Linus's opinion, which he is entitled to. It's also understandable given his position managing all of the Linux kernel and git development work, and the large number of people who contribute (or try to, anyway...) If you don't want people to blow their damn leg off, don't give them a shotgun (http://programmers.stackexchange.com/questions/92126/what-di...).

That said, this absolutely doesn't mean that you CAN'T do systems-level programming in C++. I've actually done real RTOS and motor control work in C++ on a small embedded platform that went into a robot! This is in an actual shipping product too, not just a hobby project...

As Linus said, it's definitely easier to come up with something inefficient in C++, and you do have to limit yourself to a sane set of features. Still, I think some of the things C++ brings to the table can make development a lot easier without sacrificing performance, as long as you have a disciplined team and sane coding guidelines. But what makes sense for a personal project or a small team may not make sense for a larger project, and while I think Linus is justified in his opinion, you definitely shouldn't take it to mean that you CAN'T do systems programming in C++.


Totally agreed with your comment until that:

> as long as you have a disciplined team and sane coding guidelines

Is it not possible anymore to just hire people who actually understand what they are doing? Or is cargo cult here to stay?

I am sorry, please don't take this as a personal attack, but telling people that C++ can actually be used in OS development by mindlessly sticking to some rules doesn't seem like a way forward to me... but rather a counter-argument to what you said yourself.


It's more that certain features can be way more expensive than they look.

e.g. say you're working on an embedded system, and you want a string, so you do:

    std::string s = "fnord";
At first it seems to work fine, but your firmware image's RAM requirements have just gone up by 32kB, and a week later there's a crisis when adding another feature causes the system to stop linking because the RAM address space is full.

What happened is that std::string uses the heap to store the string data, so adding the line above caused the linker to pull in all the heap code and allocate a 32kB block of RAM to put the heap in. Previously, the product wasn't using a heap at all: it was using static memory allocation throughout.

That example's contrived, but only a little. I've done the must-avoid-dynamic-memory-allocation dance many times in real life. (I've also discovered that printf() requires a raise() implementation on some platforms.)
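
To make that dance concrete, here's a minimal sketch of the kind of fixed-capacity replacement that usually gets written instead (FixedString is a hypothetical name, not anything standard, and the overflow policy is entirely up to the project):

    #include <cstddef>
    #include <cstring>

    template <std::size_t N>
    class FixedString {
        char buf_[N + 1] = {};              // storage lives in .bss or on the stack, never the heap
        std::size_t len_ = 0;
    public:
        bool assign(const char* s) {
            std::size_t n = std::strlen(s);
            if (n > N) return false;        // caller decides what "too long" means
            std::memcpy(buf_, s, n + 1);    // copies the terminating NUL too
            len_ = n;
            return true;
        }
        const char* c_str() const { return buf_; }
        std::size_t size() const { return len_; }
    };

    bool demo() {
        static FixedString<31> s;           // 32 bytes, budget visible at link time
        return s.assign("fnord");
    }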

A more realistic one is that embedded platforms typically have RTTI and exceptions turned off, which means no throwing exceptions from constructors, which means two-phase construction throughout your program, and you have to be really careful about which bits of the STL you use...


I do of course understand what you mean.

> At first it seems to work fine

It so happens that I have been working in OS and low-level areas for many years, and independently am also a pretty early adopter of C++. I have a reflex of routinely giving a quick glance at the link map, and I frequently dump the assembly for areas of code I have doubts about.

All this to say that reading 'it seems to work fine' in the context of OS development provokes a visceral reaction in me.


In your string example (and in RTOS/C++ programming in general), couldn't you just change the default allocator to not use heap memory? Then continue using std::whatever? Of course you'd have to keep a very close eye on your memory pool, but wouldn't this be one way to solve the problem?
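
Something along these lines is what I had in mind. You can't swap std::allocator itself globally, but every container takes an allocator type parameter. A rough sketch, assuming a never-freeing bump allocator over a static pool is acceptable (the names, the 4 kB figure and the bad_alloc policy are all made up; real firmware would more likely trap or reset):

    #include <cstddef>
    #include <cstdint>
    #include <new>
    #include <string>
    #include <vector>

    namespace pool {
        // One fixed block, sized at compile time.
        alignas(alignof(std::max_align_t)) static std::uint8_t storage[4096];
        static std::size_t used = 0;

        static void* take(std::size_t bytes) {
            const std::size_t a = alignof(std::max_align_t);
            bytes = (bytes + a - 1) / a * a;    // keep every allocation suitably aligned
            if (used + bytes > sizeof(storage))
                throw std::bad_alloc();         // project policy: could trap or reset instead
            void* p = storage + used;
            used += bytes;
            return p;
        }
    }

    template <typename T>
    struct PoolAllocator {
        using value_type = T;
        PoolAllocator() = default;
        template <typename U> PoolAllocator(const PoolAllocator<U>&) {}
        T* allocate(std::size_t n) { return static_cast<T*>(pool::take(n * sizeof(T))); }
        void deallocate(T*, std::size_t) {}     // bump allocator: nothing is ever reclaimed
    };
    template <typename T, typename U>
    bool operator==(const PoolAllocator<T>&, const PoolAllocator<U>&) { return true; }
    template <typename T, typename U>
    bool operator!=(const PoolAllocator<T>&, const PoolAllocator<U>&) { return false; }

    using PoolString = std::basic_string<char, std::char_traits<char>, PoolAllocator<char>>;

    int main() {
        PoolString s = "fnord";                          // string data lands in the pool
        std::vector<int, PoolAllocator<int>> v = {1, 2, 3};
        return static_cast<int>(s.size() + v.size());
    }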


Quite a lot of embedded architects would say that allocation should never happen at all. That rules out the STL and much modern C++ style.


No no no! Don't apologize, it's a great question! And to be fair, I'm absolutely against "cargo cult" programming, and my point was actually that when you are doing C++ in this type of environment, you can't always stick to a hard set of rules or the conventional wisdom. When I say guidelines, I mean just that, and not a rigorous law you MUST adhere to.

As an example: when I was working on a project for a very constrained embedded device, we needed to get some extra manpower on our team for a few sprints to help out with some functionality. One of the pieces of our system was a "debug console" I had written that allowed some interactivity with the system over a serial port. The new guy was a very sharp engineer, but he typically worked on higher-level stuff than we were doing. He wanted to add some functionality to the debug console, and dutifully started writing it using the C++ string handling library. Consequently he blew our stack budget, and we ended up very quickly rewriting part of it together.

Now, the point is he wasn't doing anything using some crazy STL functionality or Boost, and he was doing the "right" thing by handling strings using the Standard Library. What we had to do for our system was actually bend the conventional wisdom ("Don't write a string handling library yourself"), because we knew exactly what we needed, and exactly what resources we had.

So perhaps I could have phrased my point better. When I say "have a disciplined team and sane coding guidelines", I don't mean a team that codes by the book; I mean a team that knows what it is doing, and knows when the rules are meant to be bent. In our case, sane coding guidelines meant we did things that went against the conventional wisdom, but they were sane because they were justified by our engineering analysis. We were certainly open to breaking or changing these guidelines, but it had to be justified. (And in fact our 'guidelines' were less a set of rules about how you needed to do every last detail, and more a set of project-specific "design patterns" plus a large set of lessons learned in a shared wiki page, which described issues we had run into and justified certain design decisions that were made.)

(Edit: Other examples included disabling RTTI, and completely disabling and disallowing the usage of C++ exceptions to write our own error handling. Against the common advice to use what the language gives you, but made sense for our application)

Again, no need to apologize! I could have made my point clearer, and I hope I did, but please feel free to follow up with me! I'm always looking for ways to improve :-)


Ha. You posted this while I was writing my answer, and I see you (very nearly) used the exact same pair of examples that I did. This is possibly a hint as to where the pain points are...

(Your post is better than mine, though.)


Haha, I was just about to reply to your comment with nearly the same thing! I think one day I may need to do a book on C++ for embedded folks. Chapter 1 will be "Please don't use std::string!".


Could hold true for the HLL guys as well. I seem to remember at least a couple of performance analyses of apps in a high-level language where string concatenation was killing performance.

Easy to do. Tough to always remember the impact of what's going on under the hood.
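
In C++ terms, the hidden cost being described looks roughly like this (a toy sketch; the function name is made up). Each += may trigger a reallocation that copies everything accumulated so far, and reserving the final size up front avoids those reallocations entirely:

    #include <cstddef>
    #include <string>
    #include <vector>

    std::string join(const std::vector<std::string>& parts) {
        std::size_t total = 0;
        for (const auto& p : parts) total += p.size();

        std::string out;
        out.reserve(total);                 // one allocation instead of repeated growth
        for (const auto& p : parts) out += p;
        return out;
    }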


HLLs (at least those with a VM) often try interesting techniques to combat this, since string handling is so often a performance problem (it's a problem because they make it relatively painless to manipulate strings, to the point where it's not obvious you may be doing something really inefficient). If you follow a language while it's being developed, it's interesting to watch a few of these techniques come and go: starting with simple string handling, then global shared copy-on-write strings, then ropes, possibly a fourth weird representation, and likely back to one of the prior, simpler models.

At least, that's what I hazily recall from the long, jumbled Perl 6 history, but that includes a few changes to the language, multiple VMs, and sometimes multiple string handling regimes per VM.


Or iostreams. BTW, I sure would lap that book up.


Thank you, I get your point. Re-reading it, and other comments here, reinforces a suspicion I've had for some time: people don't do operating systems in C++ simply because they cannot hire big enough teams of C++ developers skilled enough to code in an OS environment.


What kind of C++ are you writing? That's the real question. My friends and I that work on bare metal talk about "embedded C++". Yes, it is C++, and you can take advantage of some C++ features, but some things are not in the mix. For instance, if you have 8K of physical SRAM and no virtual memory manager, new and delete aren't going to do a lot for you.

C++ has its place in OS development, but you need to know exactly what code is being generated, and you need to understand with excruciating precision how memory is being allocated.


That's Linus's opinion, yes. Not everyone has to share it.


It's a really annoying widespread opinion. I've been looking for people who really do write OSes in modern C++.


IncludeOS? [0]

From the CppCon 2016 presentation description:

"Early in the design process we made a hard choice; no C interfaces and no blocking POSIX calls. We’ve done everything from scratch with modern C++ 11/14 - Including device drivers and the complete network stack all the way through ethernet, IP and ARP, up to and including UDP, TCP and recently also an http / REST API framework. To achieve maximum efficiency we decided to do everything event based and async, so there's plenty of opportunities to use lambdas and delegates." [1]

[0] http://www.includeos.org/

[1] https://cppcon2016.sched.org/event/d30a43dae4a490dce81a3dfc6...


Is that really a "hard" choice? Aren't blocking POSIX calls usually just a case where the kernel blocks for you on what is essentially an asynchronous operation anyway?

Maybe it's meant to be hard in the "we decided we weren't going to be POSIX compliant, which has implications" sense and not the hard-to-implement sense?


Nice :) I didn't know about this project. I'll definitely read up on it.


I'm Alfred from IncludeOS. We're open source, so check us out on GitHub if you want to try IncludeOS or would like to participate: https://github.com/hioa-cs/IncludeOS

We also have a chat if you have any questions: https://gitter.im/hioa-cs/IncludeOS


Hello.

I'm the author of the mentioned project. It's indeed a very annoying opinion. Although there are some real difficulties in writing an operating system, and quite a bit of runtime support to implement, I would say it's worth it just to have a more powerful language. I'd rather use a language that I really like than be forced to use one I don't really care about.

I cannot say that C++ is the best language to develop an operating system in, but I would definitely say that it's possible if you really know the language. And you don't have to use the complete language (I enabled neither exceptions nor RTTI in my OS).


I don't write OSes in modern C++ at the moment (most of my day job is embedded Linux nowadays, go figure...) but I've seen a lot of sane, OS-level C++ code. Even C++ code that I have bad memories about (uh, Symbian) is partly justifiable.

The only real beef I have with C++ is its complexity. As I did less and less high-level programming, I forgot about many of C++'s pitfalls, and nowadays I'm always tiptoeing when I have to write C++ code. I think a lot of it is unwarranted, or at least not needed when you're writing system-level code (but my impression may be distorted by the fact that I'm used to embedded systems and high-reliability applications; my view may be narrow here).

Edit -- oh, by the way: it's worth pointing out, in the context of a thread that mentions Torvalds' opinion on the matter, that the whole thing was written a while ago.

The Internet endlessly recycles some of these arguments, e.g. STL still gets a lot of criticism that hasn't been true in a while. Yes, STL was terrible, terrible fifteen years ago, but that's, like, 50 years in computer years.


The STL might be the best programming success story. It used to be awful, but now I rank it among the best standard libraries of any language.


That's because Stepanov named and shamed standard library and compiler implementors whose standard libraries weren't compliant or fast enough.


BeOS, Symbian, L4, Genode, Mac OS X drivers, and big parts of Windows are all examples of C++ use in OS development.


BeOS is the reason I learned C++. I don't recall exactly, but either I didn't understand how, or there was no other option but C++ if one wanted to hit the BeOS API. Pity Be Inc. got shafted so badly by MS monopoly abuse.

They had zero vendors willing to put their OS on a box and resell it, because MS threatened the vendors with pulling their right to sell Windows if they did. This wasn't particular to Be; it applied to any other OS as well.


Yeah, still have my Be CDs stored somewhere.

However, regarding Microsoft, I actually think that vendors were as guilty as Microsoft.

They could have chosen not to take Microsoft's discount and try to sell alternative OSes, even if it meant having to face a few challenges.

The one that takes is as guilty as the one that gives.


I would have to really dig for the emails the company sent out (I should have them archived someplace), but I thought the issue wasn't discounts, but a revoking of the license to sell Windows if the OEMs didn't comply.

I did a quick search and came up with the Quora answer given here: https://www.quora.com/Why-was-the-BeOS-dropped

but this doesn't jibe with my memory. I will have to see if I can find an old email or maybe dig further on the net. I certainly don't want to be rewriting history.


Even if it was revoking the license, vendors had an option, and they chose the easy way out.

I remember quite a few small shops trying to put up a fight by selling other kinds of computers; they might have lost in the end, but they tried to walk a different path.


I disagree: there is a lot of competition between PC builders, so this 'discount' isn't really optional, and Microsoft should have been heavily punished for offering it.


Funny, this tendency to blame just the side that gives but not the one that accepts.


I think only L4/Fiasco.OC and L4ka::Pistachio are C++; other L4-style kernels (seL4, OKL4, among others), including L4 proper, are written in C.


And there was Chorus back in the 90's.


Genode (essentially a microkernel abstraction layer, and some useful userland, a full Qt implementation and a reasonable UNIX-y layer) is entirely C++, with some C++ microkernels available for it (Fiasco.OC and Nova being the two I'm playing with).


I'm the maintainer of a proprietary C++14 RTOS for work. What do you want to know?


I'm not OP, but I'm interested in the fact that you specified C++14. Are there any features specific to C++14 that you use? I have about two university classes' worth of experience with low-level software, but I can really only see the deprecation attribute and binary literals being useful. All of the other language additions (added support for type deduction, templates, lambdas, and the mixing of all three to various degrees) seem like they would take up too many resources to be useful.


We switched from C++11 to C++14 mainly for constexpr (it existed in 11, but had a lot of restrictions that limited its usefulness).

The advanced template techniques (and to a degree, I'm throwing type deduction in there too), when treated skeptically, do lead to more efficient code. It's easy to go off the deep end though, which is why I said "when treated skeptically". As for lambdas, we have an entirely asynchronous OS, so they're really nice for callback glue.

EDIT: Binary literals actually come up way less than you'd think, even for deeply embedded (I think the smallest thing we ship on currently is 16KB of RAM). Everyone here knows hex like the back of their hand.
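
To make the constexpr point concrete, here's an illustrative sketch (not code from our OS): C++14 allows loops and local variables inside constexpr functions, so whole lookup tables can be computed by the compiler and live in flash as constants, which C++11's single-return-statement constexpr couldn't express.

    #include <cstdint>

    struct CrcTable { std::uint32_t v[256]; };

    // Entire table is evaluated at compile time.
    constexpr CrcTable make_crc_table() {
        CrcTable t{};
        for (std::uint32_t i = 0; i < 256; ++i) {
            std::uint32_t c = i;
            for (int k = 0; k < 8; ++k)                     // loops: legal in C++14 constexpr
                c = (c & 1u) ? (0xEDB88320u ^ (c >> 1)) : (c >> 1);
            t.v[i] = c;
        }
        return t;
    }

    constexpr CrcTable kCrcTable = make_crc_table();        // a constant, placeable in flash
    static_assert(kCrcTable.v[0] == 0, "computed at compile time");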


Yes, but his opinion counts more than some John Doe's, as Linus supports his case with solid arguments.


'It's made more horrible by the fact that a lot of substandard programmers use it, to the point where it's much much easier to generate total and utter crap with it. Quite frankly, even if the choice of C were to do nothing but keep the C++ programmers out, that in itself would be a huge reason to use C'

Solid argument that.

His two other arguments:

- infinite amounts of pain when they don't work (and anybody who tells me that STL and especially Boost are stable and portable is just so full of BS that it's not even funny)

Considering that the Linux kernel is strongly tied to the specific C dialect supported by GCC and is not portable at all, this argument is bullshit.

- inefficient abstracted programming models where two years down the road you notice that some abstraction wasn't very efficient, but now all your code depends on all the nice object models around it, and you cannot fix it without rewriting your app.

You can abstract yourself into a corner in C as well. In C++ you are more likely to pay less for abstractions.

Sorry, but I take it personally when me and my colleagues are called substandard programmers.


>>Sorry, but I take it personally when me and my colleagues are called substandard programmers.

I don't know what kind of programming you do, but he is talking mainly about kernel programming. He is mainly criticizing the people who want to bring C++ into the kernel world. Anyone is welcome to prove him wrong.

But I think C++ is more suited to applications programming, where efficiency is not a prime concern and abstraction costs are justified.

Even in such cases (e.g. git, which is an application program), an excellent programmer like Linus can be perfectly productive with C and does not need the (sloppy or non-sloppy) abstractions provided by C++.

>>You can abstract yourself in a corner in C as well. In C++ you are more likely to pay less for abstractions.

Agreed. But the point you seem to be missing is that C doesn't force any abstractions on you. The STL/Boost abstractions are much more inefficient.

Linus talks about these inefficient abstractions, and that's a solid argument because C++ abstractions come with various sorts of hidden costs. (E.g. even name mangling can be a significant cost factor in the kernel.)

>>Considering that the linux kernel is strongly tied to the specific C dialect supported by GCC and is not portable at all, this argument is bullshit.

The issue of portability to different compilers is not an important concern for kernel programmers. If they find a particular tool (e.g. gcc with some C dialect) perfect for their purpose, why should they bother with other tools?

Remember, for Linux kernel programmers gcc is just a tool to produce their product, the kernel.

edit: added point about portability


>I don't know what kind of programming you do, but he is talking mainly about kernel programming.

This specific rant was about using C++ in git. BTW I work in what could be called soft realtime systems.

> But I think, C++ is more suited for applications programming, where efficiency is not a prime concern and abstraction-costs are justified.

If efficiency is not a prime concern, using C++ is hardly justified.

> [...] the point you seem to be missing is that C doesn't force any abstractions on you.

Nor does C++.

edit:

> Even in such cases, (e.g. git, which is an application program) an excellent programmer like Linus can be very well productive with C and does not need the (sloppy/non-sloppy) abstractions provided by C++.

and that's perfectly fine, if one feels more productive in a certain language, more power to him, but please let's not spread FUD.


>>If efficiency is not a prime concern, using C++ is hardly justified.

Well, yes and no. Yes, I agree with you, as C++ gives you more control than most other high-level languages out there, e.g. more control over how you manage your memory. I can hardly imagine someone writing a kernel in a managed language like Java.

No, I don't agree with you because sometimes, when very low-level aspects of the machine also become a prime efficiency concern, using C++ is hardly justified. See my point about C++ not forcing any abstractions on you, given at the end.

It reminds me of what someone once half-jokingly said about C and assembly: "C gives you all the power of assembly language with the same ease of use".

In the kernel world, it becomes just a joke, as the level of abstraction provided by C is very high compared to the one provided by assembly language (mainly due to struct, union, and cleaner subroutine syntax), and the cost of this abstraction is extremely low.

The benefits of using C are tremendous: e.g. code portability and readability.

>>Nor does C++.

Yes, but when you don't use any non-C abstraction provided by C++, it reduces, almost entirely, to C (barring templates).

Templates are an extremely good mechanism for providing abstraction (especially compared to inheritance), but their cost (e.g. in terms of code bloat, and in terms of the cognitive load if one actually wants to dig deeper and see or tweak the generated code to investigate some performance issue) seems prohibitive, at least in performance-sensitive kernel programming.

The kernel hackers have found a neat-but-not-so-neat way around it: C macros. Macros are in fact C's templates. I am not saying macros lead to cleaner code and so on, but when you compare them to C++ templates, their cost-benefit equation in the kernel programming world seems justifiable.
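
For illustration, the trade-off usually gets drawn with something like this (a sketch, not kernel code; the kernel's actual min()/max() macros are fancier, using typeof and statement expressions to dodge the double-evaluation pitfall shown in the comment):

    #define MAX(a, b) ((a) > (b) ? (a) : (b))   // MAX(x++, y) evaluates x++ twice

    template <typename T>
    inline T max_of(T a, T b) { return a > b ? a : b; }  // type-checked, each argument evaluated once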

>>and that's perfectly fine, if one feels more productive in a certain language, more power to him, but please let's not spread FUD.

I agree with you wholeheartedly about one's choice of language. I personally would have chosen C++ and even Python over C to implement an application like git or parts of it.

Not to play advocate for Linus here (he doesn't need a half-witted advocate like me), but he seemed to be spreading FUD about C++ because, supposing C++ were allowed, he seems to feel that many people would start using its abstractions without being aware of their costs. It's very easy to be tempted to use available abstractions, and if the abstractions start leaking (as he pointed out), then fixing the code that relied on them becomes a difficult issue.


> The STL/Boost abstractions are much more inefficient.

Well, the STL/Boost libraries are designed for certain goals. If you have different goals, you should use something different. But in any case, which specific abstractions are you talking about? Which scenario are you optimizing for, and what common-case and worst-case perf numbers are you looking to hit?

>. (e.g. even name mangling can be a significant cost factor in Kernel.)

What in the world are you talking about?


> Anyone is welcome to prove him wrong.

Apple and Microsoft already did.


Please provide some links.



> You can abstract yourself in a corner in C as well.

It's much, much harder to do than in C++. It's also much more obvious when it happens, and in collaboration others will spot it quickly.

His tone is harsh but I find it completely right. I prefer to use C at work exactly because the same kind of software would be an absolute nightmare to write in C++ (it is very low-level software, extremely similar to kernel code).

C++ is good if you can enforce very strict guidelines and if every single programmer that contributes to the code is very good at C++. Those are pretty big ifs, especially if you work with a partially open source codebase.


As long as you avoid "virtual", I can't even think of any C++ features which would lead you towards worse performance characteristics than C.

Templates may bloat your binary size and increase compile times, but they're plenty fast.


There's also making unnecessary copies of things like strings and vectors passed to functions, and exception overhead (in binary size at least).


Rvalue references and std::move have largely obsoleted the argument about copy overhead. They make a particularly dramatic improvement in container efficiency. C++11/14 really is a different language than C++98.
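
A toy illustration of the point (names are made up): the container's internal buffer is transferred by moving a few pointers instead of deep-copying every element.

    #include <string>
    #include <utility>
    #include <vector>

    std::vector<std::string> make_lines() {
        std::vector<std::string> lines(1000, std::string(100, 'x'));
        return lines;                        // moved (or elided), not deep-copied
    }

    int main() {
        std::vector<std::vector<std::string>> batches;
        auto lines = make_lines();
        batches.push_back(std::move(lines)); // steals the buffer instead of copying it
        return static_cast<int>(batches.size());
    }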


> Considering that the linux kernel is strongly tied to the specific C dialect supported by GCC and is not portable at all, this argument is bullshit.

Linux has been ported to over two dozen architectures so it is extremely portable. Yes, it is tied to some GCC behaviors, but many operating systems are tied to compilers. Some extensions make the code identical across different archs, like __builtin_return_address.

Windows is tied to particular C features, which is why MSVC is resistant to C99. Plan 9 had its own special dialect and compiler of C.


The context of course was portability to different compilers, not architectures.


Oh gotcha... I interpreted Linus's comment as being about archs. Boost is supported on far fewer archs than the Linux kernel is:

http://www.boost.org/doc/libs/master/libs/context/doc/html/c...


Note that those are the supported architectures, i.e. those that are routinely tested. 99% of Boost is architecture/machine independent and will work anywhere that has a standards-compliant compiler.


Sure. Obviously all of this depends on whether the context is git or the kernel. But Linus (in the context of the kernel) cares more about the 1% than the average person does, for things like atomics, mutexes, etc.


I do believe that when programming at "machine level" your language needs to be as simple and straightforward as possible. It needs to obey you like a sword. Yes, you need some mastery, but you know that your every action is effective. No lost effort, no mind tricks, no unnecessary complexity.

I find that every abstraction level makes me more unhappy and somehow lost..

So yes, I do believe Linus knows something and for sure he's not trying to fool himself. When you really try to build something to actually be used, all those abstractions will bite you..


The problem with C++ is more about collaboration than about the language itself. It's harder to misuse C than C++. When you have a large group of people contributing, proper usage matters. Everything Linus rants about is improper usage.


On the other hand, smart pointers (which I think didn't exist when Linus wrote this?) and RAII make C++ harder to misuse than C.
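
For example, a small sketch of the RAII point (the helper names here are made up): cleanup is encoded in the type, so early returns can't leak the handle the way a forgotten fclose() in C can.

    #include <cstdio>
    #include <memory>

    struct FileCloser { void operator()(std::FILE* f) const { if (f) std::fclose(f); } };
    using File = std::unique_ptr<std::FILE, FileCloser>;

    bool first_byte(const char* path, unsigned char& out) {
        File f(std::fopen(path, "rb"));
        if (!f) return false;               // nothing to clean up, nothing to forget
        int c = std::fgetc(f.get());
        if (c == EOF) return false;         // file is still closed automatically
        out = static_cast<unsigned char>(c);
        return true;                        // fclose runs when 'f' goes out of scope
    }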


I think it's more about the feature surface. C++ is a gigantic language while C is fairly limited. I actually like C++ when I'm the only one writing it, but my experience has always been that for complex use cases, large C++ code bases tend to evolve into a mess of complexity, even with expert programmers.

But yeah, smart pointers alone help a lot with writing tidy C++...


There was a time when C++ was considered a gigantic language, just as there was a time when Common Lisp was considered so large that it collapsed under its own weight.

Many languages, including "simple" Java, are about as large as C++ right now ( https://channel9.msdn.com/Events/GoingNative/GoingNative-201... , around the one-hour-and-fifteen-minute mark). To be honest, one thing that makes working in C++ relatively hard is the fact that C++'s standard library is significantly smaller than the competition's. You have to do more yourself.


There is a difference between language complexity and library complexity. The Java language is very simple, but the Java standard libraries are a colossal, complex system.

C++ the language is extremely complex and riddled with pitfalls and minefields. Just try to pin down the formal definition of such a widely-used term as "rvalue". OTOH I find the C++ standard library to be reasonably straightforward and well-designed. The STL in particular is brilliant.


Brainfuck has a limited surface. That doesn't mean it isn't hard or error-prone.


But for a totally different reason. C++ is hard (to me, anyway) because it's gigantic and people tend to make it into a mess. BF is hard because it's intentionally obtuse.


Aren't large parts of Windows written in C++?


I'm pretty sure they are! (I'm also pretty sure that for a subset of the HN crowd, you may have just proven Linus's point :-P)


I think even HN readers must accept that the Windows kernel is rock solid. Maybe even better than Linux - Windows even gracefully handles graphics drivers crashing and can restart them virtually seamlessly. Linux just panics.


Graphics (speaking of modern 3D graphics) on Linux sucks, because NVidia and co. do not really care about it. It has gotten better (e.g. the deep learning crowd is mostly on Linux using CUDA), but it is still a far cry from the stability of other subsystems.

So I would say it is mostly gaming. There are not many games on Linux (again, getting better, but it takes time), so gfx vendors do not allocate big resources to support it, and the developer experience is suboptimal: the classic chicken-and-egg problem.


The kernel is written entirely in C. See various WRK releases you can find online.


Until Windows 8, when they introduced C++ support in the kernel and deemed C89 good enough, with the way forward being C++.

Yes, the latest VC++ does support the C99 library, because it is required by the C++ standard, and the new MSVCRT.dll is actually written in C++ with extern "C" entry points.



Does this actually use classes and other features of C++? Scanning a few files on GitHub, I see namespaces being used but not much else.

Very impressive either way - I was just curious how C++ was leveraged.


I'm using a few classes and some class hierarchies as well. I've reimplemented std::vector, std::string, and a few other features from the STL, and I use them in both kernel space and user space (the STL is not standalone; it needs glibc, and I didn't want to port everything). I'm using quite a few templates in the library part, auto from C++11, and a few constexpr functions. I have disabled exceptions and RTTI. I'm using the RAII principle as much as possible (though this can still be improved a lot), and references wherever I can remove pointers.

On the other hand, a lot of code is clearly very close to C. When you're doing some low-level things (parsing memory structures, paging, ...), there are not a lot of features from C++ that can help. Moreover, there is a lot of code that could profit from some refactoring :P


I agree 100% with Linus, especially the Boost comment. It was heavily used in a project I worked on, and "boost" soon became a curse word.


I don't agree with Linus, but yes, boost is horrible.


I don't agree with you, boost is great.


Rather than continue with unsupported opinions, I'd rather make the specific point that Boost served well as an incubator and proving ground for such constructs as shared_ptr and unique_ptr, which have been subsumed into the C++ standard as undeniably huge improvements. As a result, the unfortunate abortion that was auto_ptr has finally been consigned to a well-deserved resting place in Hell.

Other parts of Boost have been considerably less impressive, and virtually nobody uses them. I struggled at length trying to get Boost::Parameter to work, with zero success. Boost::Format at least works, but ends up cumbersome, and does not approach the usability of {} formatting in Python.


You should not really judge a tool by the bad use people make of it. Hammers are perfectly fine with nails, but they are horrible at cutting bread.


Linus now even uses Qt for its own hobby application.


s/its/his/ ?


Yeah, too late to edit.


Well, when St. Linus Torvalds said those words, (1) g++ was not as good as many other C++ compilers, (2) the world was still full of very bad examples of C++: if you were lucky, back in those days you could have found some projects using C++03, but the majority were stuck on C++98 -- or worse -- and (3) he called on the holy principle that "at my place, I make the rules". I do not think that in 2016 there are good reasons not to use C++ for an operating system. Even without using the STL -- which would require custom allocators at that level -- encapsulation of data inside objects, inheritance, templates and namespaces alone are reasons good enough for me to prefer C++ over C at any time nowadays. In 2016, not using C++ for operating systems implementation is more cargo cult than anything else.



