Hacker News

> at the end of the day you're in the exact same place as if you'd never put the handcuffs and straightjacket on to begin with.

Do you have any suggestions on what tools the people doing 'parlor tricks' with C++11 should be using to accomplish them, then? That is, what would you suggest to get to that place without the straightjacket?

Note that at a bare minimum, these supposed tools should give the ability for fine-grained manual resource control, zero-(runtime)-cost abstractions, and performance roughly on-par with C++. As evidence that these hypothetical tools work well for the purpose, we could look for some complex and high-performance software written in them: say, a browser engine, a 3d engine, a kernel, etc., but wait a minute -- these are all things that tend to be written in C++ or plain C.

Instead of glibly dismissing the language that's used to implement, say, every major browser engine, wouldn't it be more productive to ask questions about why people use it? Bonus points if the answer is something more realistic than "They don't know Lisp".

That way, you end up trying to figure out how to make a replacement for it that is an actual replacement -- Rust is a fantastically exciting example of this.

It'll be great when we can all move away from C++, since it's a colossal clusterfuck of counterproductive complexity, but "C++ is a bad language" misses the point in a really uninteresting kind of way.



> Do you have any suggestions on what tools the people doing 'parlor tricks' with C++11 should be using to accomplish them, then? That is, what would you suggest to get to that place without the straightjacket?

Duh, Lisp of course. (Or Haskell.)

> Note that at a bare minimum, these supposed tools should give the ability for fine-grained manual resource control

Check. Lisp provides garbage collection, but you don't have to use it. It's perfectly possible to write Lisp programs that do manual memory management. It isn't often done because it's hardly ever a win, but if you really want to you can.

> zero-(runtime)-cost abstractions

a.k.a. macros

> and performance roughly on-par with C++.

The SBCL compiler is pretty good. But one of the reasons that Lisp code is not generally as fast as C/C++ is that Lisp code is safe by default whereas C/C++ code is not. You can make Lisp code unsafe (and hence faster) but you have to work at it, just as you can make C/C++ code safe, but you have to work at it. I submit that in today's world, being safe and a bit slower by default might not be such a bad place to be in the design space.

> these are all things that tend to be written in C++ or plain C.

There is a world of difference between C++ and plain C. C is actually not a bad language if you want to write fast code with relatively little effort and don't care about reliability or security. The value add of C++ over C is far from clear. (I don't know of any OS written in C++. Linus wrote a famous rant about why Linux is written in C and not C++. There are, however, examples of operating systems written in Lisp.)

> wouldn't it be more productive to ask questions about why people use it?

I know why people use it: it's fast, there is a huge installed base, and it's an excellent platform for studly programmers to display their studliness. That doesn't change the fact that C++ has deep design flaws which result in its being incredibly hard to use and extend. And the existence of coders studly enough to be productive in C++ does not change the fact that it imposes an extremely high cognitive load on its users.


> a.k.a. macros

No, macros are not what I'm talking about here: I mean that C++ provides abstractions that only impose runtime costs if you use them. For instance, the cost of vtable lookup is only paid if you are using virtual functions; otherwise, you don't have any overhead for function calls beyond what's imposed by the hardware.

As noted elsewhere in the thread:

> Unlike Lisp, C++ lets one write efficient, generic algorithms that can operate on several types of data. Lisp cannot, and falls back to dynamic type testing, which makes for slower code[1]. Basically your only option in Lisp is to specialize everything manually, or inline everything. Both approaches are extremely poor. As I hinted, Haskell and Standard ML do an even better job than both C++ or Lisp. This is talked about a good bit in this article [2].

That's what I mean when I say 'zero-cost abstraction'.


Lisp is exactly the same. You only pay the run-time cost of generic functions and dynamic type dispatch if you use them. What many people get hung up on is that in Lisp you get generic type dispatch by default, so to not pay that cost you have to do some work (declare types).


Operating systems: BeOS was written in C++. Genode[0] appears to be written in C++ too. (I'm pretty sure there are quite a few more, but that'll do as an existence proof.)

[0] Granted, that's an "operating system framework", but it's definitely at the same "level" as implementing an OS.


> Check. Lisp provides garbage collection, but you don't have to use it. It's perfectly possible to write Lisp programs that do manual memory management. It isn't often done because it's hardly ever a win, but if you really want to you can.

How does LISP work without a garbage collector? Closures without a garbage collector are pretty awful. Let's keep in mind that Rust gives us a pretty good idea of what a safe system without a GC looks like, and it doesn't look anything like LISP.


> How does LISP work without a garbage collector?

The same way any other language works without one: you allocate the storage you need and manage it yourself. It's not pretty, but it can be done. The resulting code ends up looking an awful lot like the code in any other imperative language. (Math gets a little tricky because you have to be careful not to inadvertently create bignums, but other than that it's pretty straightforward.)

> Closures without a garbage collector are pretty awful.

> Rust gives us a pretty good idea of what a safe system without a GC looks like

And yet, Rust has closures :-) (And indeed, they are pretty awful.)

Writing non-consing code in Lisp is no different from writing non-consing code in any other language. You can produce stack-allocated closures that get cleaned-up on function return, just as in Rust. If you want to write non-consing code in Lisp (or any other language) you just can't use first-class closures.


Right, I guess to me it's just almost not worth using a LISP if I can't use any of the features that make LISP enjoyable to use (LISP without conses, closures or any other interesting features). But I suppose in principle a very carefully written imperative LISP program could be pretty fast, sure :) I'd rather write in a language that supports that style of programming natively in that eventuality, though.

(I'd also add that there are some issues surrounding larger unboxed types, but I won't venture to posit how SBCL handles those).


You can have conses, you just have to allocate them all up-front. Then when you want one, you don't call CONS, you call MY-CONS, which grabs one of the pre-allocated conses off your free list. Then when you're done with it, you push it back onto your free list so it can be reused. It's no different from having MALLOC and FREE, except that you have to write them yourself (but that's not hard).

But yes, it's a lot easier to write code with a GC than without one. That's true in any language.

But let's not forget that the original article was saying, essentially, "Hey, look, we can make C++ do Lispy things!". My point is just that if you want to do Lispy things it's a lot easier just to use Lisp than to try to shoehorn Lisp's features into C++.


It isn't really, though. If you write your own malloc and free like that (by the way, writing a performant, bugfree, concurrent malloc and free is not that easy :)), you're responsible for safety as well (e.g. use after free bugs), which LISP will no longer protect you against. That's not to mention that LISP has to interact with C on a regular basis for things like system calls, and comes with a runtime that prevents it from playing nicely as an embedded library (perhaps SBCL has a way of running without one, but I can't find it... and that's not really a product of functional-ness or lack of static compilation either; I have heard from several people who have trouble using libraries built with ghc or Go). And in embedded contexts, you may need hard guarantees that, for example, no dynamic allocation of any sort is done, or that your program doesn't use the stack, etc. To the best of my knowledge, LISP has no facilities for either of these things.

(It does appear that SBCL lets you drop down to assembly, but again, if you do that, all the advantages of using LISP are gone. Anyway, what is the goal here? Do you really want to use a typed LISP with no lists, with large blocks of featureless statically allocated memory, manually handled concurrency, mutability everywhere, inline assembly, and the inability to use even most of the C++ LISPy features because LISP has no support for using them without runtime costs? Writing a language without resorting to costly abstractions is hard, and it was explicitly never a goal of LISP to be one. That's not to mention that in LISP it's nonobvious which features are costly and which ones aren't, so the abstraction it provides over hardware is only theoretical in this context.)

In any LISP in a high-performance context, you are always paying for things you don't use. You could probably argue that some of the above problems could be mitigated if everyone adopted SBCL as the standard, but unfortunately that's just the way it is in the real world. And while it is unfortunate, the fact is that even if all the technical problems could be resolved (I have my doubts), it would be much more irritating to write such low-level systems code in LISP than in a language that wasn't so far removed from the workings of modern computer architecture.

I'm actually a big fan of Lisp and I've found it quite useful for a number of projects, but when you really need to do low-level programming, it is significantly easier in (modern) C++.


> If you write your own malloc and free like that ... you're responsible for safety as well (e.g. use after free bugs) which LISP will no longer protect you against.

That's right. There's no such thing as a free (no pun intended) lunch.

> (by the way, writing a performant, bugfree, concurrent malloc and free is not that easy :)),

It's pretty easy, actually:

(defvar free-list nil)

(defun initial-malloc (n) (dotimes (i n) (push (cons nil nil) free-list)))

(defun my-cons (car cdr) (setf (caar free-list) car (cdar free-list) cdr) (pop free-list))

(defun free (cons) (push cons free-list))

The reason it's hard to write a malloc for C is that it has to manage variable-length blocks.

> That's not to mention that LISP has to interact with C on a regular basis for things like system calls, and comes with a runtime that prevents it from playing nicely as an embedded library

No, that's just wrong. There's nothing about Lisp that prevents it from being implemented as an embedded library, e.g.:

http://en.wikipedia.org/wiki/Embeddable_Common_Lisp

> You could probably argue that some of the above problems could be mitigated if everyone adopted SBCL as the standard

No, I'm saying use the right tool for the job. If you really need every last bit of speed and you don't care about safety or engineering cost then by all means use C or C++. But if you want safety, reliability, and the sort of run-time dynamism described in the original article you're better off using Lisp or its progeny.

I'm also saying that if you want performance and you also want to use Lisp, you can. But at the end of the day there are fundamental tradeoffs in computing between speed, safety, dynamism, and engineering cost that no language will save you from.


> It's pretty easy, actually:

There are many contexts in which your malloc won't perform well, and many more where it will fall over in a concurrent environment (unless LISP uses atomic operations and locks by default, in which case you have much bigger performance problems to worry about). Concurrency without garbage collection is nontrivial, though I don't blame you for not thinking about it all that much if you rarely interact with such languages. It's great to learn about some of LISP's better-performing utilities (push and pop for example) but let's not get carried away. Also, allocating fixed-length blocks of memory is a perfectly reasonable allocation strategy in C.

> No, that's just wrong.

From the link, Embeddable Common Lisp comes with a runtime, which makes it inappropriate in many contexts. I didn't say that LISP couldn't interact with C (obviously it can!), only that it's not particularly convenient. From the link, it supports inline C, which is great, but again, you're not really using LISP at this point.

> No, I'm saying use the right tool for the job.

Oh, sure, I don't think we're disagreeing on that. Certainly most of the prominent Rust developers will immediately point you to a language like Haskell, Nimrod or Python if it will satisfy your use case. It's just that some of your posts suggested that you think LISP could in principle be used in all the places C++ is used, which I don't think is necessarily true, and certainly it wouldn't be convenient to do so. For people who do have to use C++, I think these LISPy features are a nice way to make the experience more tolerable, and I think that's all the article was getting at.


> it will fall over in a concurrent environment

Good point (but you have that problem in any language). However, PUSH conses, so my code is wrong in that regard (you have to use a pre-allocated free vector, not a free list). So I concede the point: writing your own allocator in Lisp is not trivial. But it can be done.

> I don't think we're disagreeing on that.

Let's just leave it at that for now then.


My best guess about why people are still using C and C++ is this: there is a massive, valuable ecosystem of software written in these languages. It is hard to write software in one language that links with software written in other languages, especially where performance is a concern. The fact that all commonly used commercial OSes are both written in C and expose C APIs has kept C and C++ alive more than anything else.

Performance? Lisp can give you that, as can OCaml, Haskell, and other better languages. Real time system? There is a mountain of research on real-time garbage collection and on using HLLs for real-time systems. Operating system kernel? OSes were once written in Lisp, and OSes could conceivably be written in other HLLs.

Throw in a requirement to interoperate with a C library and suddenly things get ugly. Yeah, sure, you have an FFI, but debugging across a language barrier is difficult (I have had to do it, it is agony). Suddenly you need to worry about pinning objects so that the garbage collector won't move them while some C library expects them to stay still. In some cases your code basically becomes C but with the syntax of an HLL, and you start to wonder why you did not just write that routine in C to begin with (it would have made your life easier). Performance matters but your compiler needs to set up a trampoline so that your code can provide some kind of callback, and now that is a bottleneck that kills all that other optimization work. Then some joker writes some C++ code, and the rest of your week is spent writing wrapper functions because your FFI cannot deal with the name mangler.

At the end of the day there is no particular technical reason for C or C++ to remain so popular, and a big pile of technical reasons to stay away from such languages. C made a bit of sense in the 1970s when computers were small and the understanding of compilers and programming languages was less well developed. At this point C and C++ are a liability that we are all stuck with. Maybe some day the expense of sticking with C and C++ will outweigh the expense required to switch to better languages, but I am not holding my breath.


"Lisp can give you that performance" is, however, more an article of faith than actual reality.

Yes, it comes within a 3x-5x factor, on a good day. And for many applications, that's good enough. However, for either heavy-duty computational tasks or very responsive interactive tasks with a strong computational component, it just doesn't work.

If you have evidence to the contrary (for non-trivial examples), please share it. It's not my love for the exquisite language design that keeps me with C++ :)


Non-trivial examples include:

* High-frequency trading [1]

* 3D graphics and CAD systems by Symbolics

* Operating systems by Symbolics [2]

* Computer algebra [3]

* Supercomputing [4]

* Embedded, real-time forensic fingerprint systems (fingerprint analysis, embedded databases) [5]

* High-frequency auctions

* Performant compilers (most Lisp compilers)

* Perl-compatible regular expressions (sometimes 2x the speed of perl) [6]

And I can assure you, there are extremely many other things.

Generally, if you write absolutely correct and robust C++ code (that ensures there will never be buffer overruns, integer overflow, etc.), you'll see your code will slow down a lot. Lisp ensures these things don't happen (among many other things), and only when you tell Lisp that you are absolutely sure such things cannot happen, then your Lisp code can and often will be competitive with C or C++.

C++ has also benefitted from corporations funding the research and development of the compilers, whereas Lisp hasn't. So, as a result, the speed is partly an artifact of the implementation, not the language.

Lastly, as my own aside, the supposed "raw speed" of C (and, to a lesser degree, C++) is no excuse to architect an entire system in it. There are hot paths in code that need speed, and perhaps attention should be given to those.

[1] http://www.hpcplatform.com/

[2] http://en.wikipedia.org/wiki/Genera_(operating_system)

[3] http://maxima.sourceforge.net/

[4] http://en.wikipedia.org/wiki/Connection_Machine

[5] http://arxiv.org/abs/1209.5625

[6] http://web.archive.org/web/20080624164217/http://weitz.de/cl...


Rust gives you guarantees at compile time that your program is correct and theoretical performance is above C/C++. I really hope people write OSes in it. Yes, you'll have to have a lot of inline assembly and "unsafe" code, but those things can be very closely looked at. The common glue code can be safe without buffer overruns.


For what it's worth, people have said the same thing about C: "As long as (many) people look closely enough at it, it'll be fine."


Responsive interactive stuff could be written in Lisp on much slower machines than what we have today. Naughty Dog wrote PlayStation 1 games (like Crash Bandicoot) in a low-level Scheme inside a Lisp-based IDE.



