The Convergence of Modern C++ on the Lisp Programming Style (chriskohlhepp.wordpress.com)
97 points by effdee on April 20, 2014 | 75 comments



Saying it's converging on a Lisp is taking a bit of artistic license. It's fair to say C++ is becoming increasingly accommodating to whatever you want to make of it, if you're willing to compromise and roll with it. For sure, it's one of the very best and most practical languages for writing extremely efficient implementations of algorithms.

I highly recommend watching Alex Stepanov's A9 lecture series, "Efficient Programming with Components" [0], where he covers the humble min() function, and variants thereof, for at least ~20 hours. During this time he also takes a minor digression into writing a memory pool for linked-list nodes, while pointing out similarities in the design to a Lisp. It's extremely 'instructive', as he would say, in understanding how he writes C++ code. The sheer amount of code will horrify you, and it may even seem unproductive, but I think his objective was really to get everyone thinking about the beauty of the algorithms and the details rather than the end result. His concern for performance, correctness of code, and composition of algorithms is interesting and something I haven't seen presented in real time, in C++, in such a manner before. One example: the occasions where he insists on writing functions backwards, from the return statement up.

If you want a higher-level overview of his opinions on C++, programming, etc., the less intense series, "Programming Conversations" [1], is worth a more casual watch.

[0] https://www.youtube.com/playlist?list=PLHxtyCq_WDLXryyw91lah...

[1] https://www.youtube.com/playlist?list=PLHxtyCq_WDLXFAEA-lYoR...


> It's fair to say C++ is becoming increasing accommodating to whatever you want to make of it if you're willing to compromise and roll with it.

It's also one of the few languages in common use that allows using a wide range of abstraction levels, whatever is desired - I'm not aware of any other language that lets you write inline Asm, procedural, OOP, and functional-style code, all conceivably in the same source file.
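Just as a toy sketch of all four in one translation unit (my own example, not from the thread; the inline asm uses the GCC/Clang extended-asm extension and is x86-only):

    #include <algorithm>
    #include <vector>

    // Procedural code wrapping a line of inline asm (GCC/Clang extended syntax).
    static int add_one(int x) {
        asm("addl $1, %0" : "+r"(x));  // increment x in a register
        return x;
    }

    struct Counter {  // plain OOP
        int n = 0;
        void bump() { n = add_one(n); }
    };

    int main() {
        std::vector<int> v{1, 2, 3};
        // Functional style: map a lambda over the vector.
        std::transform(v.begin(), v.end(), v.begin(),
                       [](int x) { return add_one(x); });
        Counter c;
        c.bump();
        return v[0] + c.n;  // 2 + 1
    }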


>I'm not aware of any other language that lets you write inline Asm, procedural, OOP, and functional-style code, all conceivably in the same source file.

Gambit-C.


http://www.pvk.ca/Blog/2014/03/15/sbcl-the-ultimate-assembly... SBCL: The Ultimate Assembly Code Breadboard


I think this was on HN earlier too. It's slightly different from what I had in mind, however; generating and then running Asm from Lisp is not quite the same as being able to embed, inside any function, instructions that manipulate the contents of lists directly, which can easily be done in C/C++.


Here is an example of doing exactly that: http://benchmarksgame.alioth.debian.org/u32/program.php?test...


Note that the C++ is half the code and 6 times as fast. This supports my general impression that you can torture decent performance out of a Lisp compiler, but the result will be ugly, and you'll rarely match C.


I see the ratio as 28.39 for SBCL to 10.61 for g++, and closer to 3x on the 64-bit version.

At any rate, I was addressing the issue of generating and then running Asm from Lisp.


Now you are aware of another, Common Lisp (as implemented by SBCL):

http://www.sbcl.org/sbcl-internals/index.html


It's worth underlining that the ASM part is extremely implementation-dependent. There is absolutely no language feature, or even a convention, for inline ASM.

Also, I'd posit that inline ASM is a lot easier to reason about in C++ than it is in Lisp, given the rigid type structure. You're bound to shoot yourself in the foot in Lisp, even more than in C++.


Inline ASM is implementation dependent in C++ too, isn't it?


Yes, but the two big names in C++ compiler technology—GCC and Clang—are largely compatible [1]. This is not the case with Lisp.

[1] http://clang.llvm.org/compatibility.html#inline-asm


> Also I'd posit that inline ASM is a lot easier to reason about in C++ than it is in Lisp, given to the rigid type structure.

I believe it's also a matter of how these languages were designed; Lisp started out as exclusively high-level, while C++ came from C which came from Asm. In other words, the features in C++ almost all have direct mappings to how it could be done in C, and going from C to Asm is also quite direct. In contrast, Lisp -> Asm is a huge conceptual leap.

I think the much-maligned complexity of C++ is because it grew in this step-by-step fashion and exposes all these levels of abstraction so you can easily reason about the tradeoffs involved and understand the code the machine will eventually be running. More importantly, you don't have to use all of them, but this is what makes the language so flexible and accommodating to a wide audience.

(Personally I prefer C/Asm more because I don't operate at such high abstraction levels often, and when I do, the features of C++ are not a good fit for what I want to accomplish. I have used closures in C and done functional-style there, but it doesn't seem like I need to do this often enough to move to C++ or a Lisp.)


Lisp had close ties to ASM early on: car and cdr refer to particular parts of a register on a 1950s-era IBM computer (the 704), well over a decade before C existed.


Those aren't "ties to ASM" in the sense of "easy to embed ASM in this language".

They are merely implementation details of early Lisp leaking into the language's naming.


If that were true it would have been awfully difficult to write operating systems in Lisp, yet OSes were written in Lisp (and the companies that famously did so played a part in the founding of the Free Software Foundation).


Still it is a common feature in Lisp systems to offer inline assembler or inline C.


He did say "languages in common use". CL in spite of its name is still a bit uncommon.


The "no true Scotsman" again?


This is true, and I personally find it can be a tarpit: it's easy to become incapacitated by design choice. There's an extreme temptation, because it's so easy, to attempt to either go deep or build out a set of abstractions. Both can be huge time sinks if attempted foolhardily.


Have you heard of Rust? It has all of the above and more - it has compile-time memory safety/safe concurrency (without garbage collection forced on you), algebraic data types, type inference, pattern matching, etc.


This article seems to put Lisp on a bit of a pedestal when it comes to types, but languages like Haskell blow Lisp out of the water with respect to expressiveness. There's also a lot in this article that plainly implies the wrong things about what Lisp is capable of doing.

If we are to talk about types, I'd hope people would look at languages like Haskell or Standard ML—or maybe even ATS, Agda, or Coq—for future direction, not Lisp.

Unlike Lisp, C++ lets one write efficient, generic algorithms that can operate on several types of data. Lisp cannot, and falls back to dynamic type testing, which makes for slower code[1]. Basically your only option in Lisp is to specialize everything manually, or inline everything. Both approaches are extremely poor. As I hinted, Haskell and Standard ML do an even better job than both C++ or Lisp. This is talked about a good bit in this article [2].
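For concreteness, a minimal sketch of such a generic algorithm in C++ (my example, not the article's):

    #include <string>

    // One definition; the compiler stamps out a separate, fully typed
    // version for each T it's used with, with no runtime type tests.
    template <typename T>
    const T& my_min(const T& a, const T& b) {
        return b < a ? b : a;
    }

    int main() {
        int i = my_min(3, 4);                       // instantiated for int: 3
        double d = my_min(2.5, 1.5);                // instantiated for double: 1.5
        std::string s = my_min(std::string("abc"),
                               std::string("abd")); // and for std::string: "abc"
        return (i == 3 && d == 1.5 && s == "abc") ? 0 : 1;
    }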

(Also, for the record, Lisp has no concept of information hiding, interfaces, or APIs. And no, packages don't count: Lisp's packages are just convenient structures for grouping related symbols together.)

The article talked about declaring function types in Lisp. Many implementations plainly ignore those and they provide no value. It doesn't help that idiomatic Lisp code usually doesn't include such declarations.

Lastly, a lot of the things in this article were SBCL specific, and had little to do with the Lisp language itself.

Like others have said, I don't think any points demonstrate that C++ is converging to Lisp. And if it were, that's probably a bad direction anyway, given the author's initial statements about heading toward an algorithmic language.

[1] No implementation of Lisp has true, flexible compile-time polymorphism or any type-algebraic structure; all inferred types must be concretely realized during the inference.

[2] http://symbo1ics.com/blog/?p=1495


What is up with Haskellers starting these irrelevant Lisp vs Haskell arguments? OP made no mention or implication that Lisp is better than Haskell. It's petty and counterproductive.


This isn't a Lisp versus Haskell argument. It's a matter of fact that Haskell's (and others') type systems are more expressive than Lisp's, practically (and almost theoretically).

Also, I'm not a "Haskeller". I do program in Haskell occasionally, though. I'm actually, at this time, a professional Common Lisp programmer.


And how is that relevant to the OP? The OP made no counterclaim.


My bringing up Haskell and friends wasn't there to counter, but merely to observe that there are better approaches to type systems than what is provided even by Lisp. And I wished to add that if one is to look for inspiration on building expressive type systems, then Haskell should be the place to look, not Lisp.

I also wanted to provide Haskell as an example of a system whose inference engine is much stronger than Lisp's. I don't want anyone to get the idea that Lisp implementations' inference engines are even comparable to what exists elsewhere.


It is relevant to the discussion as an example.


> Also, for the record, Lisp has no concept of information hiding, interfaces, or APIs. And no, Lisp's packages are just convenient structures for grouping somehow related symbols together.

Dude was talking about Common Lisp, which has (IMO) one of the best object-oriented systems in history, from the I-get-so-happy-when-I-can-use-it perspective.


Common Lisp's object system, CLOS, is indeed a rich and expressive system. But you'll also notice that CLOS does not provide any of the above points. It is an implicit contract between the programmer who developed the code and the programmer consuming the code to (1) not use information that is supposed to be hidden, (2) understand that exported methods are those that make up the "interface" to a class, and (3) understand that the API is that which is written in documentation.


I rarely need to dig into the implementation details of a class but when I do, if the language gets in my way, that's probably a bad thing. People make mistakes in API design and strict API contracts as a feature of the language can be annoying. FWIW, I remember only one case of having to do #define private public in my 10+ year career when dealing with a third-party library. The point is that the API should be a strong suggestion, but not necessarily strongly enforced.

The arguments you make apply equally to Python, btw.


The latest revision to the C++ standard, including polymorphic lambdas, makes the xplusone definition much terser:

    auto xplusone = [](auto x){ return x + 1; };
While at it, here's how Paul Graham's accumulator generator looks in C++14:

    auto foo = [](auto n){ return [=](auto i) mutable { return n += i; }; };
This does not affect the point of the article in any major way; just a curiosity.
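For what it's worth, exercising the accumulator might look like this (my snippet; expected outputs are in the comments):

    #include <iostream>

    int main() {
        auto foo = [](auto n){ return [=](auto i) mutable { return n += i; }; };
        auto acc = foo(100);
        std::cout << acc(10) << "\n";  // 110: n is captured by value, then mutated
        std::cout << acc(10) << "\n";  // 120: the state persists across calls
    }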


At the risk of getting downvoted for what could be a humongous display of ignorance: although the theoretical grounding of the article seems interesting, I fail to see how any of this can be applied to "real-life" applications, when this is basically ten pages of content just to apply the idea to an addition operation, and to other problems which are, for most day-to-day practice, already solved quite succinctly.

Perhaps it's just me, but it seems like an awful lot of conceptual baggage to do things that can be expressed with much greater simplicity, without resorting to concepts that need multiple years of expert knowledge of the language to achieve this "elegance". And I understand the theoretical elegance; it's just that I have to ask myself if this truly makes an actual difference in code style, simplicity, and clarity of language.


No, you are exactly right. The C community and its progeny have, over the years, dug themselves into a deep, deep syntactic and semantic hole. They are now coming to realize that the Lisp folks had some good ideas after all, but because of the sunk cost fallacy the C folks are unwilling to just back out of the hole. Instead, they keep digging the hole deeper and deeper in the hopes that some day it will emerge into the light. C++11 is interesting in the same way that Brainfuck is interesting: it's remarkable that you can take such a horrible mess and make it do cool things. But it's just a parlor trick, kind of like escaping from a straightjacket while handcuffed. It's challenging, and it requires skill, but at the end of the day you're in the exact same place as if you'd never put the handcuffs and straightjacket on to begin with.


> at the end of the day you're in the exact same place as if you'd never put the handcuffs and straightjacket on to begin with.

Do you have any suggestions on what tools the people doing 'parlor tricks' with C++11 should be using to accomplish them, then? That is, what would you suggest to get to that place without the straightjacket?

Note that at a bare minimum, these supposed tools should give the ability for fine-grained manual resource control, zero-(runtime)-cost abstractions, and performance roughly on-par with C++. As evidence that these hypothetical tools work well for the purpose, we could look for some complex and high-performance software written in them: say, a browser engine, a 3d engine, a kernel, etc., but wait a minute -- these are all things that tend to be written in C++ or plain C.

Instead of glibly dismissing the language that's used to implement, say, every major browser engine, wouldn't it be more productive to ask questions about why people use it? Bonus points if the answer is something more realistic than "They don't know Lisp".

That way, you end up trying to figure out how to make a replacement for it that is an actual replacement -- Rust is a fantastically exciting example of this.

It'll be great when we can all move away from C++, since it's a colossal clusterfuck of counterproductive complexity, but "C++ is a bad language" misses the point in a really uninteresting kind of way.


> Do you have any suggestions on what tools the people doing 'parlor tricks' with C++11 should be using to accomplish them, then? That is, what would you suggest to get to that place without the straightjacket?

Duh, Lisp of course. (Or Haskell.)

> Note that at a bare minimum, these supposed tools should give the ability for fine-grained manual resource control

Check. Lisp provides garbage collection, but you don't have to use it. It's perfectly possible to write Lisp programs that do manual memory management. It isn't often done because it's hardly ever a win, but if you really want to you can.

> zero-(runtime)-cost abstractions

a.k.a. macros

> and performance roughly on-par with C++.

The SBCL compiler is pretty good. But one of the reasons that Lisp code is not generally as fast as C/C++ is that Lisp code is safe by default whereas C/C++ code is not. You can make Lisp code unsafe (and hence faster) but you have to work at it, just as you can make C/C++ code safe, but you have to work at it. I submit that in today's world, being safe and a bit slower by default might not be such a bad place to be in the design space.

> these are all things that tend to be written in C++ or plain C.

There is a world of difference between C++ and plain C. C is actually not a bad language if you want to write fast code with relatively little effort and don't care about reliability or security. The value add of C++ over C is far from clear. (I don't know of any OS written in C++. Linus wrote a famous rant about why Linux is written in C and not C++. There are, however, examples of operating systems written in Lisp.)

> wouldn't it be more productive to ask questions about why people use it?

I know why people use it: it's fast, there is a huge installed base, and it's an excellent platform for studly programmers to display their studliness. That doesn't change the fact that C++ has deep design flaws which result in its being incredibly hard to use and extend. And the existence of coders studly enough to be productive in C++ does not change the fact that it imposes an extremely high cognitive load on its users.


> a.k.a. macros

No, macros are not what I'm talking about here: I mean that C++ provides abstractions that only impose runtime costs if you use them. For instance, the cost of vtable lookup is only paid if you are using virtual functions; otherwise, you don't have any overhead for function calls beyond what's imposed by the hardware.

As noted elsewhere in the thread:

> Unlike Lisp, C++ lets one write efficient, generic algorithms that can operate on several types of data. Lisp cannot, and falls back to dynamic type testing, which makes for slower code[1]. Basically your only option in Lisp is to specialize everything manually, or inline everything. Both approaches are extremely poor. As I hinted, Haskell and Standard ML do an even better job than both C++ or Lisp. This is talked about a good bit in this article [2].

That's what I mean when I say 'zero-cost abstraction'.
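A minimal sketch of that pay-only-for-what-you-use property (the types here are made up for illustration):

    #include <vector>

    struct Circle {
        double r;
        double area() const { return 3.14159 * r * r; }  // non-virtual
    };

    // The template is instantiated per concrete type; the call to
    // area() is resolved at compile time and can be inlined, so the
    // abstraction adds no dispatch cost at run time.
    template <typename T>
    double total_area(const std::vector<T>& xs) {
        double sum = 0;
        for (const auto& x : xs) sum += x.area();
        return sum;
    }

    // Only if you opt into virtual functions do you pay for a vtable:
    struct Shape {
        virtual double area() const = 0;
        virtual ~Shape() = default;
    };

    int main() {
        std::vector<Circle> cs{{1.0}, {2.0}};
        return total_area(cs) > 0 ? 0 : 1;
    }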


Lisp is exactly the same. You only pay the run-time cost of generic functions and dynamic type dispatch if you use them. What many people get hung up on is that in Lisp you get generic type dispatch by default, so to not pay that cost you have to do some work (declare types).


Operating systems: BeOS was written in C++. Genode[0] appears to be written in C++ too. (I'm pretty sure there are quite a few more, but that'll do as an existence proof.)

[0] Granted, that's an "operating system framework", but it's definitely at the same "level" as implementing an OS.


> Check. Lisp provides garbage collection, but you don't have to use it. It's perfectly possible to write Lisp programs that do manual memory management. It isn't often done because it's hardly ever a win, but if you really want to you can.

How does LISP work without a garbage collector? Closures without a garbage collector are pretty awful. Let's keep in mind that Rust gives us a pretty good idea of what a safe system without a GC looks like, and it doesn't look anything like LISP.


> How does LISP work without a garbage collector?

The same way any other language works without one: you allocate the storage you need and manage it yourself. It's not pretty, but it can be done. The resulting code ends up looking an awful lot like the code in any other imperative language. (Math gets a little tricky because you have to be careful not to inadvertently create bignums, but other than that it's pretty straightforward.)

> Closures without a garbage collector are pretty awful.

> Rust gives us a pretty good idea of what a safe system without a GC looks like

And yet, Rust has closures :-) (And indeed, they are pretty awful.)

Writing non-consing code in Lisp is no different from writing non-consing code in any other language. You can produce stack-allocated closures that get cleaned-up on function return, just as in Rust. If you want to write non-consing code in Lisp (or any other language) you just can't use first-class closures.


Right, I guess to me it's just almost not worth using a LISP if I can't use any of the features that make LISP enjoyable to use (LISP without conses, closures or any other interesting features). But I suppose in principle a very carefully written imperative LISP program could be pretty fast, sure :) I'd rather write in a language that supports that style of programming natively in that eventuality, though.

(I'd also add that there are some issues surrounding larger unboxed types, but I won't venture to posit how SBCL handles those).


You can have conses, you just have to allocate them all up-front. Then when you want one, you don't call CONS, you call MY-CONS, which grabs one of the pre-allocated conses off your free list. Then when you're done with it, you push it back onto your free list so it can be reused. It's no different from having MALLOC and FREE, except that you have to write them yourself (but that's not hard).
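In C++ terms, the same free-list idea might look something like this (a rough sketch; names and layout are mine, and there's no exhaustion check):

    #include <cstddef>
    #include <vector>

    struct Cons {
        void* car;
        void* cdr;
        Cons* next;  // intrusive free-list link
    };

    struct ConsPool {
        std::vector<Cons> storage;   // every cons allocated up-front
        Cons* free_list = nullptr;

        explicit ConsPool(std::size_t n) : storage(n) {
            for (auto& c : storage) { c.next = free_list; free_list = &c; }
        }
        Cons* my_cons(void* car, void* cdr) {  // grab one off the free list
            Cons* c = free_list;
            free_list = c->next;
            c->car = car;
            c->cdr = cdr;
            return c;
        }
        void my_free(Cons* c) {                // push it back for reuse
            c->next = free_list;
            free_list = c;
        }
    };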

But yes, it's a lot easier to write code with a GC than without one. That's true in any language.

But let's not forget that the original article was saying, essentially, "Hey, look, we can make C++ do Lispy things!". My point is just that if you want to do Lispy things it's a lot easier just to use Lisp than to try to shoehorn Lisp's features into C++.


It isn't really, though. If you write your own malloc and free like that (by the way, writing a performant, bug-free, concurrent malloc and free is not that easy :)), you're responsible for safety as well (e.g. use-after-free bugs), which LISP will no longer protect you against. That's not to mention that LISP has to interact with C on a regular basis for things like system calls, and comes with a runtime that prevents it from playing nicely as an embedded library (perhaps SBCL has a way of running without one, but I can't find it... and that's not really a product of functional-ness or lack of static compilation either; I have heard from several people who have trouble using libraries built with ghc or Go). And in embedded contexts, you may need hard guarantees that, for example, no dynamic allocation of any sort is done, or that your program doesn't use the stack, etc. To the best of my knowledge, LISP has no facilities for either of these things.

(It does appear that SBCL lets you drop down to assembly, but again, if you do that, all the advantages of using LISP are gone. Anyway, what is the goal here? Do you really want to use a typed LISP with no lists, with large blocks of featureless, statically allocated memory, manually handled concurrency, mutability everywhere, inline assembly, and the inability to use even most of the C++ 'LISPy' features, because LISP has no support for using them without runtime costs? Designing a language that doesn't resort to costly abstractions is hard, and it was explicitly never a goal of LISP to be one. That's not to mention that in LISP it's nonobvious which features are costly and which ones aren't, so the abstraction it provides over the hardware is only theoretical in this context.)

In any LISP in a high-performance context, you are always paying for things you don't use. You could probably argue that some of the above problems could be mitigated if everyone adopted SBCL as the standard, but unfortunately that's just the way it is in the real world. And even if all the technical problems could be resolved (I have my doubts), it would be much more irritating to write such low-level systems code in LISP than in a language that isn't so far removed from the workings of modern computer architecture.

I'm actually a big fan of Lisp and I've found it quite useful for a number of projects, but when you really need to do low-level programming, it is significantly easier in (modern) C++.


> If you write your own malloc and free like that ... you're responsible for safety as well (e.g. use after free bugs) which LISP will no longer protect you against.

That's right. There's no such thing as a free (no pun intended) lunch.

> (by the way, writing a performant, bugfree, concurrent malloc and free is not that easy :)),

It's pretty easy, actually:

    (defvar free-list nil)  ; the pool, initially empty

    (defun initial-malloc (n)  ; pre-allocate n conses
      (dotimes (i n) (push (cons nil nil) free-list)))

    (defun my-cons (car cdr)  ; grab a cons off the free list
      (setf (caar free-list) car
            (cdar free-list) cdr)
      (pop free-list))

    (defun free (cons)  ; return it for reuse
      (push cons free-list))

The reason it's hard to write a malloc for C is that it has to manage variable-length blocks.

> That's not to mention that LISP has to interact with C on a regular basis for things like system calls, and comes with a runtime that prevents it from playing nicely as an embedded library

No, that's just wrong. There's nothing about Lisp that prevents it from being implemented as an embedded library, e.g.:

http://en.wikipedia.org/wiki/Embeddable_Common_Lisp

> You could probably argue that some of the above problems could be mitigated if everyone adopted SBCL as the standard

No, I'm saying use the right tool for the job. If you really need every last bit of speed and you don't care about safety or engineering cost then by all means use C or C++. But if you want safety, reliability, and the sort of run-time dynamism described in the original article you're better off using Lisp or its progeny.

I'm also saying that if you want performance and you also want to use Lisp, you can. But at the end of the day there are fundamental tradeoffs in computing between speed, safety, dynamism, and engineering cost that no language will save you from.


> It's pretty easy, actually:

There are many contexts in which your malloc won't perform well, and many more where it will fall over in a concurrent environment (unless LISP uses atomic operations and locks by default, in which case you have much bigger performance problems to worry about). Concurrency without garbage collection is nontrivial, though I don't blame you for not thinking about it all that much if you rarely interact with such languages. It's great to learn about some of LISP's better-performing utilities (push and pop for example) but let's not get carried away. Also, allocating fixed-length blocks of memory is a perfectly reasonable allocation strategy in C.

> No, that's just wrong.

From the link, Embeddable Common Lisp comes with a runtime, which makes it inappropriate in many contexts. I didn't say that LISP couldn't interact with C (obviously it can!), only that it's not particularly convenient. From the link, it supports inline C, which is great, but again you're not really using LISP at this point.

> No, I'm saying use the right tool for the job.

Oh, sure, I don't think we're disagreeing on that. Certainly most of the prominent Rust developers will immediately point you to a language like Haskell, Nimrod, or Python if one of those will satisfy your use case. It's just that some of your posts suggested that you think LISP could in principle be used in all the places C++ is used, which I don't think is necessarily true, and it certainly wouldn't be convenient. For people who do have to use C++, I think these LISPy features are a nice way to make the experience more tolerable, and I think that's all the article was getting at.


> it will fall over in a concurrent environment

Good point (but you have that problem in any language). However, PUSH conses, so my code is wrong in that regard (you have to use a pre-allocated free vector, not a free list). So I concede the point: writing your own allocator in Lisp is not trivial. But it can be done.

> I don't think we're disagreeing on that.

Let's just leave it at that for now then.


My best guess about why people are still using C and C++ is this: there is a massive, valuable ecosystem of software written in these languages. It is hard to write software in one language that links with software written in other languages, especially where performance is a concern. The fact that all commonly used commercial OSes are both written in C and expose C APIs has kept C and C++ alive more than anything else.

Performance? Lisp can give you that, as can OCaml, Haskell, and other better languages. Real time system? There is a mountain of research on real-time garbage collection and on using HLLs for real-time systems. Operating system kernel? OSes were once written in Lisp, and OSes could conceivably be written in other HLLs.

Throw in a requirement to interoperate with a C library and suddenly things get ugly. Yeah, sure, you have an FFI, but debugging across a language barrier is difficult (I have had to do it, it is agony). Suddenly you need to worry about pinning objects so that the garbage collector won't move them while some C library expects them to stay still. In some cases your code basically becomes C but with the syntax of an HLL, and you start to wonder why you did not just write that routine in C to begin with (it would have made your life easier). Performance matters but your compiler needs to set up a trampoline so that your code can provide some kind of callback, and now that is a bottleneck that kills all that other optimization work. Then some joker writes some C++ code, and the rest of your week is spent writing wrapper functions because your FFI cannot deal with the name mangler.

At the end of the day there is no particular technical reason for C or C++ to remain so popular, and a big pile of technical reasons to stay away from such languages. C made a bit of sense in the 1970s when computers were small and the understanding of compilers and programming languages was less well developed. At this point C and C++ are a liability that we are all stuck with. Maybe some day the expense of sticking with C and C++ will outweigh the expense required to switch to better languages, but I am not holding my breath.


"Lisp can give you that performance" is, however, more an article of faith than actual reality.

Yes, it comes within a 3x-5x factor, on a good day. And for many applications, that's good enough. However, for either heavy-duty computational tasks or very responsive interactive tasks with a strong computational component, it just doesn't work.

If you have evidence to the contrary (for non-trivial examples), please share it. It's not my love for the exquisite language design that keeps me with C++ :)


Non-trivial examples include:

* High-frequency trading [1]

* 3D graphics and CAD systems by Symbolics

* Operating systems by Symbolics [2]

* Computer algebra [3]

* Supercomputing [4]

* Embedded, real-time forensic fingerprint systems (fingerprint analysis, embedded databases) [5]

* High-frequency auctions

* Performant compilers (most Lisp compilers)

* Perl-compatible regular expressions (sometimes 2x the speed of perl) [6]

And I can assure you, there are extremely many other things.

Generally, if you write absolutely correct and robust C++ code (that ensures there will never be buffer overruns, integer overflows, etc.), you'll see your code slow down a lot. Lisp ensures these things don't happen (among many other things); only when you tell Lisp that you are absolutely sure such things cannot happen can your Lisp code be, as it often is, competitive with C or C++.

C++ has also benefitted from corporations funding the research and development of the compilers, whereas Lisp hasn't. So, as a result, the speed is partly an artifact of the implementation, not the language.

Lastly, as my own aside, the supposed "raw speed" of C (and lesser so C++) is no excuse to architect an entire system in it. There are hot paths in code that need speed, and perhaps attention should be given to those.

[1] http://www.hpcplatform.com/

[2] http://en.wikipedia.org/wiki/Genera_(operating_system)

[3] http://maxima.sourceforge.net/

[4] http://en.wikipedia.org/wiki/Connection_Machine

[5] http://arxiv.org/abs/1209.5625

[6] http://web.archive.org/web/20080624164217/http://weitz.de/cl...


Rust gives you guarantees at compile time that your program is memory-safe, and its theoretical performance is on par with, or above, C/C++. I really hope people write OSes in it. Yes, you'll have to have a lot of inline assembly and "unsafe" code, but those things can be very closely looked at. The common glue code can be safe, without buffer overruns.


For what it's worth, people have said the same thing about C: "As long as (many) people look closely enough at it, it'll be fine."


Responsive interactive stuff could be written in Lisp on much slower machines than what we have today. Naughty Dog wrote PlayStation 1 games (like Crash Bandicoot) in a low-level Scheme inside a Lisp-based IDE.


That may be true, but the article certainly doesn't prove it. The article repeatedly mentions C++11, but as mentioned elsewhere [1], C++11 has lambdas; even if you want to do the partial binding thing, there is no need for a struct, and obviously a static "add one" function can just be declared as a regular function. So what's the point of the struct declaration other than making C++ look bad?

Another sign of the lack of coherence of the article is that it displays unoptimized assembly output for the C++ example, then goes on to praise Lisp for being around as good. The optimal assembly is actually two instructions:

    0000000000000000	leal	0x1(%rdi), %eax
    0000000000000003	ret
Not that it really matters, since it will be inlined into any hot loop in both cases.

[1] https://news.ycombinator.com/item?id=7619116


Sorry to quibble about details, but doesn't "leal 0x1(%rdi), %eax" actually add one to the address %rdi and not to the contents of (%rdi)? Its usefulness for trick additions comes from its origins as "Load Effective Address": it doesn't actually access memory. Their stack gymnastics are silly, but I think you may be stuck with "mov (%rdi), %eax; add $1, %eax; ret" or some other 3-instruction equivalent.


You're right, it does add 1 to RDI, but this is exactly what we want. This is the x86-64 calling convention, where arguments arrive in registers, not on the stack. On 32-bit x86 you'd do something like

    mov eax, 1
    add eax, [esp + 4]
    ret


The problem I have with this class of critical analogies is that they don't recognize the underlying goal of all this lipstick on the pig. The parlor tricks being done here combine the benefits of other language concepts with the strict efficiency and deterministic overhead of traditional C. Contrast with other languages that might provide a more elegant interface- but with what kind of indirection? And what sacrifices over full-stack control of that indirection? That's why what gets added to the C++ standard -- specifically the philosophy of that process -- has tangible value and purpose that shouldn't be dismissed by simply saying it's better to throw it all away.


To be fair, the C community doesn't see the relative simplicity of C as a liability, and believes that there's an argument to be made that, in the hands of a competent and careful engineer, C is a good choice of language for certain specific problem domains. There's a related but distinct community that believes that stuffing as many features into their language of choice as possible can only make it better.


Here I was thinking that they finally read the memo.


As a Lisp enthusiast who earns a living coding in C++ I can only say one thing: there's still a long way to go for C++. Very long, indeed.


You're preaching to the choir in my (personal) case, but what specifically do you have in mind?


I'll chime in. Macros and simple syntax.


I last worked with C++ 10 years ago and haven't looked at it since; it looks like some beast going through various stages of evolution. Perhaps in another 5 or 10 years it will emerge as something beautiful, without any vestigial appendages.

What's interesting from a language perspective is that the C++ folks seem to take the approach of "ugly is better" by essentially not breaking backwards compatibility the way Python 3 and Perl 6 did. The "clean break" approach seems to create a new species and hope that the old one goes extinct (it didn't happen with Perl, and probably won't). But C++ just keeps evolving.


This is just embarrassing. Modern C++ is moving toward intricate types like Haskell, not parser and compiler mods like Lisp.


That's because, despite the article, C++ is not actually trying to become Lisp.

And, for all the smug superiority of the Lisp crowd, the niche that C++ is trying to occupy does seem to be significantly bigger than the Lisp niche...


The terminology appears to be somewhat mixed up. xplusone<int> is not a specialization, just an instantiation of the template.

Specialization is when you tell the compiler to use a separate implementation of a template when some or all of the template arguments match something. Template specialization is used in the type traits section of the post.
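A minimal sketch of the difference (the struct shape here is mine, not necessarily the article's):

    #include <cstdio>

    template <typename T>
    struct xplusone {
        static T apply(T x) { return x + 1; }  // primary template
    };

    // An explicit specialization: a separate implementation the compiler
    // uses whenever the template argument matches bool.
    template <>
    struct xplusone<bool> {
        static bool apply(bool) { return true; }
    };

    int main() {
        std::printf("%d\n", xplusone<int>::apply(41));     // instantiation: 42
        std::printf("%d\n", xplusone<bool>::apply(false)); // specialization: 1
    }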


For someone who wants to learn "modern C++", particularly for number-crunching, without a strong C background (say... someone who develops predictive models who mostly writes Python and Java, not that I resemble that or anything...) – where would be a good place to start?


I am not the best person to ask, but probably with Stroustrup's recent book on modern C++. From what I hear, he discusses the philosophy behind C++ and presents things from the perspective of, "Here's how I suggest you use C++ today." (FWIW, I happen to mostly agree with the comments elsewhere in this thread about C++ being a colossal clusterfuck of complexity... but still thought I'd answer your question as best I could.)


Yeah, I have that suspicion too, and I secretly hope a language like Julia makes all of this moot: but the question I'm trying to find an answer for is "what language should I use for large-scale-but-not-distributed matrix factorization" without giving up and resorting to Fortran.


Bjarne Stroustrup has a new book that came out with the C++11 standard, called 'A Tour of C++'. For someone who is just starting, I think it's a great starting point, as it is literally a tour of C++11 (less than 200 pages). It basically gives a very high-level overview of everything C++ has to offer, as well as some best practices for 'Modern C++'.

While it will take a lot more than just this book to get anywhere, I think it's a good starting point for anyone with experience in other programming languages who wants to see what C++ has to offer.


0. Books--one of these: http://isocpp.org/get-started

Personally, if you aren't new to programming per se, I'd go with "C++ Primer" by Lippman/Lajoie/Moo since it smoothly integrates modern C++11 throughout the entire text (instead of sticking it into a separate section, as some of the other books do).

After that, "C++ Concurrency in Action: Practical Multithreading" by Anthony Williams: http://www.manning.com/williams/

...and then the rest of the books from the isocpp list (e.g., Josuttis).

1. Libraries:

The rich ecosystem of available libraries is one of my primary reasons for using C++ for numerics :-)

In fact, it's rich enough that it may be best if you were to specify what kind of number crunching you're interested in -- right now I can only try to give you a very broad/big-picture list of some that I've found useful.

The Standard Library supports (P)RNG with a variety of statistical distributions: http://en.cppreference.com/w/cpp/numeric/random

- Boost.Math Toolkit: http://boost.org/libs/math // and more broadly: http://boost.org/doc/libs/?view=category_Math // and even more broadly ;-): http://www.boost.org/doc/libs/?view=categorized

- Eigen: http://eigen.tuxfamily.org/

- GPGPU: http://www.soa-world.de/echelon/2014/04/c-accelerator-librar...

- MLPACK: http://mlpack.org/

- NLopt: http://ab-initio.mit.edu/wiki/index.php/NLopt_C-plus-plus_Re...

- OpenCV: http://opencv.org/

- Odeint: http://www.odeint.com/

- POCO: http://pocoproject.org/ // note: not numerics, but when you need to exchange data over the net/web, these are pretty good for that :-)

- QuantLib: http://quantlib.org/ // note: QuantLib is primarily for quantitative finance, but also has math components: http://quantlib.org/reference/group__math.html

- SOCI: http://soci.sourceforge.net/ // note: not numerics, but for when you need database access, it has a pretty clean API and is easy to use :-)

2. Talks:

* C9 Going Native: http://channel9.msdn.com/Shows/C9-GoingNative

In particular:

+ "Bjarne Stroustrup - The Essence of C++: With Examples in C++84, C++98, C++11, and C++14" - http://channel9.msdn.com/Events/GoingNative/2013

+ "Sean Parent - C++ Seasoning" - http://channel9.msdn.com/Events/GoingNative/2013/Cpp-Seasoni...

* BoostCon / C++Now!: https://github.com/boostcon/

There's _lots_ of interesting talks, so explore yourself :-)

For instance, 2013 Keynote: "Dan Quinlan: C++ Use in High Performance Computing Within DOE: Past and Future" // http://2013.cppnow.org/session/keynote/

// IMHO, it's worth watching these for staying up to date with the broader developments in the field -- e.g., according to the speaker (given who he is I'd assume credibility) most national labs, including Lawrence Livermore National Laboratory in particular, are quite actively adopting C++ (not C) and have been turning away from Fortran for some time now.

HTH! :-)


Thank you so much!

(To answer your question: linear algebra and optimization. I do a lot of Numpy right now, and I see Eigen in my future...)


I wouldn't even bother. Most people just use C for number crunching.


Well, most of the number crunching code running on the biggest supercomputers today is actually Fortran code - in the meteorology field, for example, virtually all atmospheric models are implemented in Fortran.


My former field is solid-state physics, and it's the same deal there.


Feels more like Python because now we have lambdas and I can write auto everywhere.



