
I work on a project that does a lot of systems programming in C#. It's awful. Several coworkers and I ask every now and then, "why wasn't this done in C or C++ in the first place?" So much jumping through hoops to keep the GC from kicking in or moving memory around, doing P/Invokes, translating things between managed and native, and so on... It's not fun at all.
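
To give a flavour of the hoop-jumping described above, here is a minimal sketch of one common pattern: pinning a managed buffer so the GC can't relocate it while native code holds a pointer to it. The "nativelib" library and process_buffer function are hypothetical stand-ins, not anything from the project in question.

    using System;
    using System.Runtime.InteropServices;

    static class NativeInterop
    {
        // Hypothetical native entry point; the real library and signature will differ.
        [DllImport("nativelib", CallingConvention = CallingConvention.Cdecl)]
        static extern int process_buffer(IntPtr data, int length);

        static int ProcessWithoutGcMovingThings(byte[] buffer)
        {
            // Pin the managed array so the GC can't relocate it while native code holds the pointer.
            var handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
            try
            {
                return process_buffer(handle.AddrOfPinnedObject(), buffer.Length);
            }
            finally
            {
                handle.Free();   // always release the pin, or pinned objects fragment the heap
            }
        }
    }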



It's definitely not great right now -- that's why we're trying to make it better. :)

I would definitely still advocate using the right tool for the job, though. If the vast majority of your application would be best written in C++ or Rust (something without managed memory), I would just go ahead and do that.

A lot of people, however, have cross-layer applications where a substantial amount of the code has strict performance requirements, but much or most of the rest of the code has looser requirements.


Yup, same experience for us. We went back and forth between workstation and server mode GC. If you have a process that maintains things like leases/heartbeats in a low-latency setting, C# doesn't seem like a good idea. I wonder how well Go works in this scenario, considering it was purpose-built for this.
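
For context, switching between the two modes on the .NET Framework is a config change rather than a code change; a minimal app.config sketch:

    <configuration>
      <runtime>
        <!-- true = server GC (per-core heaps, throughput-oriented);
             false = workstation GC (default, lower per-collection impact) -->
        <gcServer enabled="true" />
      </runtime>
    </configuration>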


Go does some of the same stuff Duffy recommends for C# in this post, like using an AOT compiler and stack-allocatable structs.

Some things about Go may make that style of programming feel more natural. For example, structs are the default and syscalls look like other function calls. The stdlib might be friendly to this style (for example, widespread use of io.Reader/Writer lets a lot of stuff stream data through a reusable buffer rather than allocate big blobs), but I don't know enough to usefully compare it with the .NET libs/BCL.
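
To make the comparison a bit more concrete, the same reusable-buffer idea expressed against .NET's Stream type might look like the sketch below (roughly what Go's io.Copy does over io.Reader/io.Writer); the buffer size and names are illustrative only.

    using System.IO;

    static class Streaming
    {
        // Copy from one stream to another through a single reusable buffer,
        // analogous to Go's io.Copy over io.Reader/io.Writer: no large
        // per-payload allocations, just one scratch buffer reused each pass.
        static void CopyStream(Stream source, Stream destination)
        {
            var buffer = new byte[81920];   // one allocation, reused for the whole copy
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                destination.Write(buffer, 0, read);
            }
        }
    }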

Or C# could be better for you. It has a lot of work behind it, including in the collector. Go's collector is now decent at keeping most work in the background, but it isn't generational or compacting the way the CLR collector can be. And using a new language is always weird; you never start out as good with the new as you were with the old. The CLR's SustainedLowLatency collector mode, which tries to defer compaction as long as it can at the cost of RAM footprint, is the one that sounds most like Go's, FWIW.
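
Opting into that mode is a one-liner on .NET 4.5 and later; a minimal sketch:

    using System.Runtime;

    static class GcLatency
    {
        static void EnterLowLatencyWindow()
        {
            // Ask the CLR to defer blocking/compacting collections as long as it can,
            // trading RAM footprint for fewer pauses.
            GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
        }
    }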

It all depends so much on what kind of deadlines your app has, how much memory pressure, what else you're getting/paying for in C# land. It's always tricky to grok a different ecosystem. The best ideas I can think of are to look for something existing in Go that seems kind of like what you want to do (like if you're implementing some kind of queue, look at NATS or nsq or such), or just build the smallest project that seems like a reasonable test.


Why Go? It has a GC too. I'd look at Rust.



Oberon the OS was cool, but Oberon the language is sort of too 1990s. (But I'd definitely take Modula-2 for low-level stuff instead of C any day, as I did in the early 1990s.)


I agree, which is why, after my initial interest in Go, I eventually switched focus to other languages.

However, it doesn't change the fact that Go allows for lots of low-level stuff in a similar vein to Oberon, which is why I happen to take Go's side, even if I'd rather use other, more expressive programming languages.

And to be fair, Niklaus Wirth's latest revision of the language (Oberon-07) is even more minimalist than Go.


Go is also a GC language.


Its GC has pretty low pause times, though.


Yep, around 2 milliseconds for most programs

https://sli.mg/1RmNsB


Two milliseconds is an eternity in kernel time. That's the wired round-trip time between two GigE endpoints on Linux's mediocre TCP/IP stack.

Now imagine you stacked 2ms GC pauses into that level of the system. That would be a barely serviceable kernel. Forget any real-time facilities.


OSes written in GC-enabled systems programming languages have always allowed for controlling the GC's behaviour.

So you can have a GC-free TCP/IP stack while enjoying the comfort of the GC in areas where 2ms pauses aren't an issue.
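
In C#'s case, one form that control can take is a no-GC region (available since .NET 4.6); a minimal sketch, with an arbitrary heap budget:

    using System;

    static class CriticalSection
    {
        static void RunWithoutCollections(Action criticalWork)
        {
            // Reserve enough heap up front that the critical path can run without a
            // collection. 16 MB is an arbitrary illustrative budget, not a recommendation.
            if (GC.TryStartNoGCRegion(16 * 1024 * 1024))
            {
                try
                {
                    criticalWork();
                }
                finally
                {
                    GC.EndNoGCRegion();   // normal collections resume afterwards
                }
            }
        }
    }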


Or maybe even not completely GC-free [1]. What might be especially helpful is a good JIT that could reoptimize the code on the fly when data patterns change. Maybe the performance of 'data-guided optimization' provided by a (controllable) GC and a state-of-the-art JIT could beat the traditional approach someday.

[1] http://lukego.github.io/blog/2013/01/03/snabb-switchs-luajit...


> Now imagine you stacked 2ms GC pauses into that level of the system. That would be a barely serviceable kernel. Forget any real-time facilities.

Real-time just means bounded latency. If 2ms is a hard upper bound, that's hard real-time. If it's ~90% bounded by 2ms with a small variance, that's soft real-time.


Dlang is a much better fit if you want some high-level-ish conveniences (e.g. opt-out GC, lazy evaluation) in a systems programming language without too much trouble.


Same here. My former boss demanded C# for all projects (he's more of a web guy). I protested, but lost. Now we have a ton of hardware interfaces and image processing/analysis routines that are unnecessarily complicated, difficult to maintain, and often slower than they should be.


Slower I can understand, but I don't see how a C# interface could possibly be inherently more complicated than a C or C++ interface for something like an image library, at least to the extent you're implying.

Even hardware interfacing via memory-mapped addresses would just need a small shim and types that are byte-compatible with C structs, which you can call via P/Invoke; that isn't particularly complicated.
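
A sketch of that kind of shim, where the struct layout, field names, "devicelib" library, and device_read_status function are all hypothetical:

    using System;
    using System.Runtime.InteropServices;

    // Layout must match the native struct byte-for-byte; fields shown are invented.
    [StructLayout(LayoutKind.Sequential)]
    struct DeviceStatus
    {
        public uint Flags;
        public ushort Temperature;
        public ushort Voltage;
    }

    static class DeviceShim
    {
        [DllImport("devicelib", CallingConvention = CallingConvention.Cdecl)]
        static extern int device_read_status(out DeviceStatus status);

        public static DeviceStatus ReadStatus()
        {
            DeviceStatus status;
            if (device_read_status(out status) != 0)
                throw new InvalidOperationException("device_read_status failed");
            return status;
        }
    }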

Can you give a specific example of what you're referring to?


Poor wording on my part. It's not that the interfaces are more complicated, it's the implementation of those interfaces. Some of the hardware pieces come with native SDKs (the ones that don't support e.g. a serial interface), so there's a lot of interop going on.


Right, so the complication is just duplicating the interface in C# for interop, which obviously isn't needed if you just use the SDK language. Still, using that language brings its own complexities, like giving up memory safety and garbage collection, so it seems hard to definitively state that the C# version is more complicated than it otherwise would be.

What sort of performance issues do you see? Do you mean the P/Invoke/marshalling costs?


The complications are around all of the native interop. It's just a lot of P/Invoke and type wrangling scattered about for no good reason.

The performance issues were in the image processing and analysis areas. Image analysis doesn't really lend itself to bounds checking, non-deterministic memory usage, little control over heap-allocated memory, etc. Also, I lose access to some of the most powerful imaging libraries out there.

I can work around a lot of it, but why should I have to? Should have used the right tool from the start.


You can circumvent the bounds checking via unsafe code, and avoid heap allocation by using structs. I'm not sure what non-deterministic memory usage means here.
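
For instance, here is a minimal sketch of that escape hatch (requires compiling with /unsafe; the grayscale-summing loop is purely illustrative):

    static class Pixels
    {
        // Sum a grayscale image without per-element bounds checks.
        static unsafe long SumUnsafe(byte[] pixels)
        {
            long sum = 0;
            fixed (byte* p = pixels)          // pin the array and take a raw pointer
            {
                byte* cur = p;
                byte* end = p + pixels.Length;
                while (cur < end)
                    sum += *cur++;            // no bounds check on the raw pointer
            }
            return sum;
        }
    }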

You haven't specified what the right tool is. I think classifying C/C++ as the right tool is contentious too, for the reasons I outlined. The "type wrangling" isn't there for no good reason; the reasons are quite clear: to maintain memory safety and benefit from automatic memory management. There's also the possibility that you're making it more complicated than it needs to be.



