
I can't speak about the ISO as a whole for engineering fields, but the ISO standardization process has worked out horribly for the C++ community. Not only because of the issues Tim Sweeney points out: the entire C++ standardization process is de facto closed-off and secretive, participation is limited to those who can physically travel from place to place, and it's painfully obvious that the quality of features in C++ is much lower than it could have been otherwise.

A common response from the ISO C++ committee to criticism of the language is that its members are volunteers working in a mostly unpaid capacity, who often have to hit tight deadlines to have any shot at getting a feature into the standard, and that's true precisely because of how arcane the ISO standardization process is. It's this pseudo-antagonistic process where maybe one or two individuals are tasked to "champion" a paper in front of their peers, and then everyone is supposed to pretend that there's no politics involved and that the paper gets approved entirely on its technical merit.

C++ would have been much better served by ditching that and doing what Java, Python, and Rust do: gather broad community feedback and input. It's hard to imagine which beneficial features would be missing from C++ had there been involvement from the broader community of game developers, embedded device developers, desktop software developers, and the host of people who use the language regularly. But it's clear many clumsy and awkward features would have been eliminated: the now 50 ways of initializing variables (a sample is sketched below), broken standard library features like std::variant, the now unusable std::regex, the minefield that is std::random, and the upcoming bloated and error-prone std::ranges. It's no wonder many C++ development teams are skeptical of the utility of the standard library and just roll their own alternatives.
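
To give a flavor of the initialization point, a small and far from exhaustive sample; each form has subtly different rules:

  int main()
  {
   int a = 1;       // copy initialization
   int b(1);        // direct initialization
   int c{1};        // direct list initialization (C++11)
   int d = {1};     // copy list initialization (C++11)
   int e{};         // value initialization to zero
   auto f = 1;      // deduced as int
   auto g{1};       // int under current rules; std::initializer_list<int> under the original C++11 wording
   int h = int(1);  // initialization from a temporary
   return 0;
  }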

I hope no other language goes down the road of using ISO to standardize its language.




C++ at least had the benefit of industry and the OOP trend behind it. It was rather fortunate in this respect.

I've argued elsewhere on HN that Common Lisp died because they were closed and secretive at the exact moment they needed to go the opposite route.

https://www.cs.cmu.edu/Groups/AI/html/faqs/lang/lisp/part4/f...

Around 2004, Lisp was having a bit of a revival of sorts. Lisp was becoming trendy; various blogs and web sites were created. But the only documentation you could find was the HyperSpec. Which, as anyone who has had the misfortune of reading it knows, is awful as a reference. It's both too technical for casual software developers and not official enough for language implementers. There were two free, open source Lisps available (CMUCL, CLISP) and both were rather unloved and clunky at best.

Even Linus had a bit of trouble getting his hands on POSIX standards. Imagine Linux dying because it couldn't follow standards that Linus could not acquire.

By the mid-to-late '90s the writing was already on the wall. Perl, Python, PHP, Ruby followed no standard. It became common for the free implementation to be the standard.

Clojure arrived and largely filled the Lisp void. Racket attempted a similar move by renaming itself from PLT Scheme to a name that removes the emphasis on any particular standard. If you want Lisp today, though, you're probably doing Clojure.


> I've argued elsewhere on HN that Common Lisp died

It's not dead, it just smells funny.

> not official enough for language implementers

The HyperSpec is just a different rendering of the text, in this case as hypertext. The content of the official Common Lisp standard document is in the HyperSpec. As a language implementor it makes no/zero/null difference whether you read the HyperSpec or the official standards document in PDF format.

> Which, as anyone that had the misfortune of reading

I like it and have been using it a lot.

> There were two free, open source Lisps available (CMUCL, CLISP)

GCL, OpenMCL, ECLS, SBCL, ...


I agree with the thrust of your argument as I was learning Common Lisp around then but you make a factual mistake:

> Around 2004... There were two free, open source Lisps available (CMUCL, CLISP) and both were rather unloved and clunky at best.

SBCL and ECL existed and were quite usable.

The documentation was as you say, another story. There was the HyperSpec, copies of CLtL, and a number of out-of-print or hard to find books that described some kind of Lisp. It was hard to be a tyro without a guide.


Honestly, attempts to pin down Lisp's obscurity to a single cause are almost as old as Lisp itself. No, Lisp did not miss out because people could not read the docs; it was simply as far from the XML/Java zeitgeist then as it was from the UNIX/C/C++ one before.

> Perl, Python, PHP, Ruby followed no standard. It became common for the free implementation to be the standard.

And I mean look how well it worked with transitions to Perl 6 and Python 3.


> Which, as anyone that had the misfortune of reading, is awful as a reference. It's both too technical for casual software developers and not official enough for language implementers.

I don't think this is fair. It is a very useful reference, and it doesn't claim to be a tutorial:

" 1.1.1 Scope and Purpose The specification set forth in this document is designed to promote the portability of Common Lisp programs among a variety of data processing systems. It is a language specification aimed at an audience of implementors and knowledgeable programmers. It is neither a tutorial nor an implementation guide."


C++ was pretty horrible prior to C++98 as well (and likely 33 out of 50 ways to initialize a variable already existed by then). It did improve considerably within the past decade under the auspices of the committee.


std::string has been there since C++98; however, you couldn't use it to open a std::ifstream, as the stream only had a const char* constructor.

It took a decade to add a std::string constructor...
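
Roughly, the annoyance looks like this (a minimal sketch; the file name is just illustrative):

  #include <fstream>
  #include <string>

  int main()
  {
   std::string path = "data.txt";
   std::ifstream pre11(path.c_str()); // C++98: the stream constructor only takes const char*
   std::ifstream post11(path);        // C++11 finally added a std::string overload
   return 0;
  }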


And it will probably take another decade to add a std::string_view constructor.


One of the biggest impediments caused by the C++ standards committee is the tight scope of the standard. Anything outside of the language/library spec is entirely outside the realm of standardization.

So tooling improvements (dependency management, the build process) cannot be made part of the standard. Fractured solutions harm adoption; languages with good dependency management and good build processes have one solution they push on everyone.


It's clear that C++ is very popular, and a lot of people choose to use it, for better or for worse. I'd like to hear your thoughts referencing specific issues you have w.r.t. the language (not libraries). As it stands, I'll take your comment as a passionate plea :)


I’m not sure if you meant it, but your comment comes off as being in exceptionally bad faith. Of course C++ is a popular language and people put up with the decisions ISO makes; that doesn't mean it doesn't have issues. The distinction between “the language” and “libraries that ship with the language” is not useful, but even if it were, the comment presents initialization as a wart in the core language.


>The distinction between “the language” and “libraries that ship with the language” is not useful, but even if it were, the comment presents initialization as a wart in the core language.

I think it is extremely useful, as the standard library is easily (and often) replaced. We have a difference of opinion, and that is perfectly fine, but please do realize that this doesn't mean my comment is in bad faith.

Every library has a goal, and the standard library's goal is not to serve as an industry-ready plug-in for high-performance code. Writing high-performance code in C++ is an advanced task that necessitates more control, which makes the standard library a poor fit. The model of (language) + (library) makes it easy for the end user to pick a library of their choice for their application.

>the comment presents initialization as a wart in the core language

Okay, that is one point. When people say "X sucks" or "X is horrible" there is very little a reader can gain from that. If a person's opinion is formed by deep experience with X, then I am interested in knowing the specifics that lead to that opinion.


The distinction is useful in the context of a discussion about alternative standard libraries, but in the context of the ISO working group that does the design for both it is not.

I understand that your intention was to get more information, but I wanted to make sure you're aware that responding to someone who is listing out their complaints with what is essentially "a bunch of people use the language productively, can you please give me information that you just went over, otherwise I am going to disregard your comment" can be interpreted as bad faith, because it's a common troll/asymmetric-effort tactic.


Thanks. I'll try to better phrase my comments so as to avoid misinterpretations :)


> it's painfully obvious that the quality of features in C++ is much lower than it could have been otherwise.

It's not obvious to me at all; in fact I'm more tempted to believe the opposite. C++ has its shortcomings, but when I (say) compare the C++ standard library against third-party libraries, I find the standard library design & implementations to be of much higher quality. They're often far more flexible and handle far more edge cases than open-source libraries do. So, while I would love for the C++ standard to be free, I think this would be more of an argument for not making it so.


Any specific examples come to mind? I usually find Folly [1], Abseil [2] or the EASTL [3] beat the standard library on almost every metric you can imagine, including the under-appreciated compile-time metric.

And then of course there's boost [4], but people have very mixed opinions about it.

The reason a lot of developers use the standard library in C++ is that dependency management in C++ is such a nightmare that many people writing a library are forced to stick to the standard library if they want any hope of adoption, even when far superior options exist. It's literally something people writing C++ libraries will advertise ("Dependency-free, header-only library!") because they know that without it a lot of developers won't bother using their library.

Anyways, I would be interested to know what part of the standard library you find is better than third party options.

[1] https://github.com/facebook/folly

[2] https://abseil.io/

[3] https://github.com/electronicarts/EASTL

[4] https://www.boost.org/


Before I give an example, note a couple of things:

- You've picked some of the best C++ libraries as if they're somehow representative of the ocean of C++ code that's out there, whereas I was talking more about the general landscape.

- It's hard to do an apples-to-apples comparison for a library (like Abseil) that tries to avoid replicating what's already in the standard library, so those aren't necessarily the best examples to discuss here.

That said, OK, here's one trivial example. It is a major time sink to dig these up and write self-contained examples for the sake of argument (it took me 1 hour to write this entire comment), so I hope this example can be sufficient to get my point across.

So folly has FBVector and it supports custom allocators, right? OK, so just try making a vector with a custom pointer type:

  #include <vector>
  
  // #include <folly/FBVector.h>
  
  template<class T>
  struct my_pointer
  {
   typedef my_pointer this_type;
   typedef T element_type, value_type;
   typedef ptrdiff_t difference_type;
   typedef this_type pointer;
   typedef value_type &reference;
   bool operator!=(this_type const &other) const { return !(*this == other); }
   bool operator==(this_type const &other) const { return this->p == other.p; }
   difference_type operator-(this_type const &other) const { return this->p - other.p; }
   explicit operator bool() const { return !!this->p; }
   friend this_type operator +(difference_type n, this_type me) { return me + n; }
   explicit my_pointer(T *p) : p(p) { }
   my_pointer(std::nullptr_t p) : p(p) { }
   my_pointer() : p() { }
   reference operator *() const { return *this->p; }
   template<class U> using rebind = my_pointer<T>;
   this_type &operator++() { ++this->p; return *this; }
   this_type &operator+=(difference_type n) { this->p += n; return *this; }
   this_type &operator--() { --this->p; return *this; }
   this_type &operator-=(difference_type n) { this->p -= n; return *this; }
   this_type operator+(difference_type d) const { return this_type(this->p + d); }
   this_type operator++(int) { this_type copy(*this); ++*this; return copy; }
   this_type operator-(difference_type d) const { return this_type(this->p - d); }
   this_type operator--(int) { this_type copy(*this); --*this; return copy; }
   value_type *operator->() const { return this->p; }
   reference operator[](difference_type d) { return this->p[d]; }
  private:
   T *p;
  };
  
  template<class T>
  struct my_allocator
  {
   std::allocator<T> base;
   typedef T value_type;
   typedef size_t size_type;
   typedef my_pointer<T> pointer;
   template<class U> struct rebind { typedef my_allocator<U> other; };
   pointer allocate(size_type n) { return pointer(this->base.allocate(n)); }
   void deallocate(pointer p, size_type n) { return this->base.deallocate(&*p, n); }
  };
  
  namespace std
  {
   template<class T>
   struct iterator_traits<my_pointer<T> >
   {
    typedef typename my_pointer<T>::pointer pointer;
    typedef typename my_pointer<T>::reference reference;
    typedef typename my_pointer<T>::value_type value_type;
    typedef typename my_pointer<T>::difference_type difference_type;
    typedef std::random_access_iterator_tag iterator_category;
   };
  }
  
  int main()
  {
  #ifdef FOLLY_CPLUSPLUS
   folly::fbvector
  #else
   std::vector
  #endif
    <int, my_allocator<int> > numbers;
   numbers.push_back(1);
   return 0;
  }
This compiles and runs fine with GCC, Clang, and MSVC (on, say, C++17).

But now try to uncomment the #include so that it uses folly and you suddenly get errors like this:

  /usr/include/folly/FBVector.h:148:27: error: no viable conversion from 'folly::fbvector<int, my_allocator<int>>::Impl::pointer' (aka 'my_pointer<int>') to 'int *'
            S_destroy_range(b_, e_);
  /usr/include/folly/FBVector.h:365:34: note: passing argument to parameter 'first' here
    static void S_destroy_range(T* first, T* last) noexcept {
Funny, so S_destroy_range is used internally, and it requires raw pointers. But who says my fancy pointers will even necessarily map 1:1 to a linear (and contiguous!) pointer space?

What we see here is that folly pretends to support custom allocators, but cuts corners internally. (!) Which is awful not only because of the inflexibility, but because it misleads you, too. If they're introducing unfounded assumptions internally just out of sheer convenience, how am I supposed to trust the implementation? Heck, if I had implemented implicit conversions to raw pointers so that the code compiled, I might not have even discovered there's a latent bug in my program.

In contrast, in my experience, actual standard library implementations pay attention to the details and don't tend to cut corners like third party libraries do.

Now, again, keep in mind this is what we get with some of the best libraries, whereas I was talking about the general landscape, so the situation isn't even remotely this good on average.


I do appreciate the effort you went to for this, but you make a lot of very strong claims that depend on very obscure minutiae, and unfortunately they end up being false under careful scrutiny.

>But who says my fancy pointers will even necessarily map 1:1 to a linear (and contiguous!) pointer space?

The standard as of C++11 does. std::vector<T> provides the member function T* data() which is required to return a pointer to the first element of a contiguous memory region spanning the entire vector:

https://en.cppreference.com/w/cpp/container/vector/data
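
To make the guarantee concrete with a plain std::vector<int> (no custom allocator involved), a minimal sketch:

  #include <cassert>
  #include <vector>

  int main()
  {
   std::vector<int> v;
   v.push_back(10);
   v.push_back(20);
   v.push_back(30);
   int *raw = v.data();      // data() yields a raw pointer to the first element
   assert(&v[2] == raw + 2); // elements are guaranteed contiguous in memory
   return 0;
  }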

Note that it specifically returns a T* rather than a my_pointer<T>, or what is referred to in standardese as a "fancy pointer":

https://en.cppreference.com/w/cpp/named_req/Allocator#Fancy_...

>What we see here is folly pretends to support custom allocators, but it cuts corners internally.

No such corners are cut. Folly has full support for custom allocators, including fancy pointers, and in this case it's giving you a compile-time error for what could have been undefined behavior. That's an advantage in my book; I'll take a compile-time error over hard-to-debug memory corruption any day of the week.

Fancy pointers to T are required to be implicitly convertible to T*. While the small snippet of code you pasted doesn't exercise this requirement in GCC and Clang, MSVC does make use of it, and you can see from the link below that your allocator fails to work:

https://godbolt.org/z/7KrE41

You'll notice the error is that std::vector is trying to convert your fancy pointer into a raw pointer in order to call _Delete_plain_internal:

>C:/data/msvc/14.28.29910/include\vector(683): error C2893: Failed to specialize function template 'void std::_Delete_plain_internal(_Alloc &,_Alloc::value_type *const ) noexcept'

Other parts of the standard library that depend on this include the node-based containers such as std::list and std::set; std::allocate_shared also makes use of this requirement.

The fact is that writing a standard-conforming allocator that uses fancy pointers is incredibly difficult and error prone: the standard is not clear about the rules, there are numerous defect reports related to fancy pointers, and as of today no compiler actually has full support for them:

https://quuxplusone.github.io/draft/fancy-pointers.html

You could have picked any example to prove your point about the standard library, yet the one you chose took you a very long time to produce and exercises a fringe corner case of a defective language feature that you do not fully understand. That is not a particularly convincing argument that the standard library provides high-quality implementations and APIs.

That may seem blunt and harsh, but you're in good company since I myself also don't fully understand it, and anyone who's being honest would also admit that fancy pointers in C++ are an area that quite possibly no one really truly understands.

Before jumping to the conclusion that Facebook got it wrong, cut corners, and wrote code that can't be trusted, perhaps it might be best to pause a moment; this issue has nothing to do with Facebook and is merely a reflection of the sheer complexity of C++ as a language.


> The standard as of C++11 does. std::vector<T> provides the member function T* data()

I actually do believe data() is a defect in the standard. It should be returning 'pointer', not T*. In fact I believe the entire contiguity requirement is a defect, as there's no inherent reason for a vector to require physical contiguity in memory to begin with. Contiguity needs to be with respect to the pointer. Getting to that point is not trivial, though, given so many implementations have assumed raw-pointer-like behavior in the past, so that likely plays a role in why they use raw pointers in places like this one.

That said, you do have a point here, in that the standard requires physical contiguity in memory for std::vector, so this isn't a bug as far as memory corruption goes. In my rush to write up an example, I forgot about this with respect to std::vector, and I just assumed data() would return 'pointer' as would be common sense.

> Fancy pointers to T are required to be implicitly convertible to T*

Do you have a link? I'm failing to see this in [1] or [2]... is it cited elsewhere? Its existence would seem to render the addition of std::pointer_traits<Ptr>::to_address() rather redundant.

> Folly is giving you a compile time error for what would have potentially been undefined behavior.

folly is not giving me that error though... my own code is, by not defining an implicit conversion to a raw pointer. If I had added one, as many would, then I wouldn't have gotten the error.

> You'll notice the error is that std::vector is trying to convert your fancy pointer into a raw pointer in order to call _Delete_plain_internal:

We got different behavior because you're using the debug runtime (/MDd) and I wasn't (/MD). I tried it without and didn't get that. So yes, MSVC also gives an error with the debug runtime.

> the one you chose to produce took you a very long time to exercise

No, this is twisting what happened. It took an hour to write, not to "exercise" the 'corner case'. And it takes a long time to write only because C++ is so extremely verbose and it takes almost 100 lines of code just to implement a simple example around a trivial pointer wrapper. Practically everything in the example was basic boilerplate. And even then I later noticed I still missed other boilerplate like the < and > operators.

> The fact that you could have picked any example to prove your point about the standard library [...] to exercise what is a fringe and corner case

Oh come on. I literally said in my comment "they're often (read: not always) far more flexible and handle far more edge cases than open-source libraries do." You cherry-pick some of the absolute best C++ libraries out there as if they're somehow representative of the landscape, then force me to whip up a counterexample for you on the spot as if I have one lying around at my fingertips for every library. And when I nevertheless try to find something to give you an example of like you asked, you complain that it's... a corner case? Didn't I say that it's an edge case to begin with? And isn't this doubly ironic when you yourself cherrypicked libraries that were very much "edge" cases to begin with in terms of their high quality?

> fancy pointers in C++ are an area that quite possibly no one really truly understands.

This is a weird way to put it. This isn't something where C++ is just too complex for mortals to comprehend; it's something where the standard itself has shortcomings (like the ones you yourself linked to). The standard needs to simultaneously (a) provide some kind of generality and usefulness and (b) address past issues in a mostly backwards-compatible way. Which is intrinsically hard, because in the past it made assumptions that, in hindsight, it probably shouldn't have. All of which I'm more than happy to acknowledge; I never claimed the standard is flawless, or that it's somehow easy to improve when improvements might break a ton of old code.

What you need to realize about fancy pointers in particular is that part of the very reason they're under-utilized is their poor support, not that they're somehow a fundamentally "fringe" concept for people to want. (Unless you think nonstandard allocators are weird altogether, in which case I'm talking to the wrong person.) The commonly-cited use cases (like shared memory) are far more obscure than some fairly normal things you can do with them. For example, they're invaluable when you're debugging custom allocators; you can put things like bounds-checking into them to make sure the rest of your allocator is correct (in fact I'm pretty sure I was doing exactly this not too long ago; a rough sketch follows below). But to be able to do anything with them you need a container that is flexible and careful enough not to treat them interchangeably with raw pointers.
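
A rough, hypothetical sketch of the bounds-checking idea (nowhere near a complete fancy pointer; a real one needs all the boilerplate from the earlier example):

  #include <cassert>
  #include <cstddef>

  // Debug-only pointer a custom allocator could hand out while being tested.
  template<class T>
  struct checked_ptr
  {
   T *p;
   std::size_t n; // number of elements in the block this pointer refers to
   T &operator[](std::ptrdiff_t i) const
   {
    assert(i >= 0 && static_cast<std::size_t>(i) < n); // catch out-of-range access
    return p[i];
   }
  };

  int main()
  {
   int block[4] = {};
   checked_ptr<int> q{block, 4};
   q[3] = 42;   // in range: fine
   // q[4] = 0; // would trip the assert in a debug build
   return 0;
  }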

But this is digging too much into fancy pointers and missing what I was trying to say, and it's making me waste hours more on this than I ever intended to. I was saying, in general, with things that require more flexibility than the obvious implementation would imply, I tend to find better support among standard libraries than third-party ones. This statement obviously has nothing to do with Facebook or folly in particular; you just picked 3-4 extremely high-quality libraries and made me fit what I was saying into a mold that's already a cherrypicked outlier, so I tried to come up with something for you on the spot to get my point across. Whether you think it was a good example or not, it's very much missing a point I was making about 3rd-party libraries in general. We can literally even assume those 4 libraries are flawless, and even then it would hardly even matter for a discussion that's about the ISO standard and its impact on the average C++ library.

P.S. There was a typo in my example for 'rebind'; you need to put U instead of T. You'll need to fix that and other uninteresting stuff (like operator<) to address errors with other containers.

[1] https://eel.is/c++draft/allocator.requirements

[2] https://en.cppreference.com/w/cpp/named_req/Allocator


Basically, I asked you to give examples showcasing how the standard library handles flexible edge cases with high quality, and when I point out how the example you gave is fundamentally flawed, your counterargument is that, for your own example (and you could have picked anything), the standard is defective and goes against common sense, it takes 100 lines of code and an hour to implement a trivial example to showcase how flexible it is, it has made assumptions that it probably shouldn't have in hindsight, and a host of other reasons that basically show that the standard library isn't nearly as flexible or high quality as you made it out to be.

It was your example to give, and it turns out that just providing a basic example requires all this complexity, exposes all these defects, isn't standard compliant, and isn't portable across compilers.

You are certainly welcome to your opinion, and I doubt either of us is going to convince the other at this point... but I am fairly confident most sensible people would not look at the example you chose to showcase and think "Wow, what a flexible and powerful API the standard library provides, very high quality." They will come away thinking that your example embodies everything wrong with C++; it's convoluted, error-prone, and incredibly fragile.



