vodou's comments | Hacker News

C is not inherently unsafe. Sure, it doesn't have memory safety as a feature. But there are loads of applications considered safe written in C. An experienced C programmer (with the help of tooling) can write safe C code. It is not impossible.


That would explain all the vulnerabilities in systemd and Linux. They just aren't experienced enough. Linus needs to get in touch with an expert.


I’m looking forward to your efforts in rewriting it in Rust


So is everyone else! Can't happen soon enough.


SQLite is the most stringently developed C code I'm aware of: the test suite maintains 100% branch coverage, the code is routinely run through all of the sanitizers, and it is regularly fuzzed.

It still accumulates CVEs: https://www.sqlite.org/cves.html.


As I recall, one of the advantages of C over Rust is that the SQLite authors have the tooling to do 100% branch coverage testing of the compiled binaries in C. They tried Rust, but Rust inserts code branches they are unable to test.

The tradeoff, then, is accepting the small number of denial-of-service bugs listed vs. giving up 100% branch coverage, and they chose to keep the coverage.

(The authors also believe Rust isn't portable enough and doesn't handle out-of-memory errors well enough - https://www.sqlite.org/whyc.html#why_isn_t_sqlite_coded_in_a... .)


Are you aware of a way to develop fault-free code? Please share this knowledge, then.


It's easy to develop fault-free code: just redefine all those faults as (undocumented) features!

That's not a helpful answer, but it's basically the same thing you're doing: redefining, as programmer faults, memory-safety vulnerabilities that would be precluded entirely by writing in memory-safe languages.


He's aware of a way to develop memory-corruption-fault free code, obviously.


I guess "experienced C programmers" must be short supply although they have been writing C for years.


The effort is massive and the experience to do so at scale is very rare.


Are Git submodules really that bad? Really? Maybe it's me who hasn't yet discovered their true dark side. They're not perfect, but they work well for mid-sized projects at least.


They're better now than they were years ago, but yes they're still bad. And they're still second-class citizens for git commands.

Git has been adding more config settings to try to make it less painful, such as submodule.recurse, but that slows down most git commands significantly, and the truth is most repos don't actually need to recurse - they just need the top level. So instead people end up writing aliases to try to handle it, which works ok, but it's brittle and people have to know to do it.

And if you have your repo on GitHub, they don't do some simple things that could have made it easier. For example, they could let the repo owner make the git-clone copy-to-clipboard thing have --recurse-submodules in the command you copy, so that cloning will do the right thing. But they don't. I understand why git itself doesn't want to make such things automatic, but GitHub could do it for at least private enterprise account repos.

BTW, I am _not_ advocating whatever this "Git X-Modules" thing is that the link points to. I have no idea what that is, never used it, and don't plan to.


They're fine, so long as you do not have to fork them. Very often we have to work on multiple submodules in parallel; to make that work we use the same name for all feature branches, so that our CI and tools understand that these unrelated branches in unrelated repos actually belong together. I'd prefer not to have to do that. At least it taught me why so many big companies use monorepos.


They are implemented in a maximally safe way, which generally means they leave files, folders, etc. lying around when you least expect it, and that goes on to cause later annoyances, because doing otherwise could mean deleting data you didn't know would be deleted.

Say you're git, and someone has a submodule that goes from having:

    sub/
      .gitignore = a/ignored.txt
      a/
        tracked.txt
        ignored.txt
to this, when they `git checkout` some other newer or older or totally-separate-history branch (possibly several layers above this submodule):

    sub/
      b/
        tracked.txt
what do you do with sub/a/ignored.txt? What if the whole submodule was removed?

Git's answer to every single question like this is to either fail to perform the submodule version change, or leave a now-untracked-and-not-ignored sub/a/ignored.txt file in the directory, which leaves you with an un-clean checkout that it'll warn and complain and possibly conflict on.

It's highly unergonomic, but reliably avoids silently doing anything that might cause you to lose data. The awful ergonomics are what people generally hate about it. They'd rather it made some consistent decisions so it Just Worked™ in nearly all cases. But that's not Git-like.


That's because you haven't had all your code messed up when someone pushed in the wrong order :-)


How can pushing in the wrong order mess up all code? The worst thing I can imagine is pushing the superproject with a subproject update, but not pushing the subproject. That would end up with an unbuildable commit in the superproject, as you can never check out its associated commit from the subproject, but I don't see how that's close to "messing up all code".
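
For what it's worth, a sketch of the ordering that avoids this (the submodule directory name is just a placeholder); git can also check or handle it for you:

    # push the submodule first, then the superproject that references it
    $ cd libfoo && git push && cd ..
    $ git push

    # or let git enforce/do it automatically
    $ git push --recurse-submodules=check       # refuse if a referenced submodule commit isn't pushed
    $ git push --recurse-submodules=on-demand   # push the needed submodule commits first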



Notice the post is 10 years old, and some of the complaints can be solved with the config `submodule.recurse true`: https://stackoverflow.com/a/49427199
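
For reference, a couple of commands along those lines (the URL is a placeholder):

    # make checkout, pull, etc. recurse into submodules by default
    $ git config --global submodule.recurse true

    # clone is not covered by that setting, so ask for it explicitly
    $ git clone --recurse-submodules https://example.com/repo.git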


Good question, I was wondering the same. I have never really had an issue with submodules, and I am not sure what the killer feature is that this adds over submodules.


They're much easier to consume now than they used to be but at the same time they're usually broken by default. Adding them to a project is not seamless or invisible. They do not "just work" with the same commands your team was previously using.

They're still pretty annoying to actively work in.


Maybe that is the problem - you should not use them for stuff that needs to be actively changed all the time.

We keep only DTOs in submodules, so you really only update one if you change a property on some communication object.

This way you also don't want them to be invisible: you really want people to see and notice that this was updated and actively choose what to do with it.


They have a reputation for being bad, which is enough for a company like the one in the OP to use as a marketing hook.


The most annoying problem is CI: I cannot even use a private git submodule in GitHub Actions without creating a personal access token, passing it in as a secret, and using some git commands.


Which seems more a problem with Github tooling, and less with git submodules.


For unimplemented stuff / placeholders I highly recommend exceptions instead of TODOs in any form (if suitable and your language of choice supports that). E.g. in Python:

  def my_fancy_function():
      raise NotImplementedError


But why only one or the other? The TODO is for your IDE to tap into so you can quickly and easily find outstanding work before the code is ready to be submitted as an MR/PR (most IDEs and even many simpler code editors have TODO/FIXME listing built in).

The exception is for runtime behaviour until the TODO is resolved: it's something you do mostly for yourself, not others. By the time others see your code, your TODOs have either been addressed, or they're acceptance criteria for follow-up work in your project tracker.

And that brings us to the third part that we _also_ need: the issue that tracks the work the TODO talks about, either as a checkbox task in a larger piece of work or, if it's big enough, taking up the entire issue. (Good project management means knowing what tasks are required and which work is outstanding without opening a single file.)
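
For example, a minimal sketch combining all three in Python (the issue number is just a placeholder):

    def my_fancy_function():
        # TODO(#123): implement once the data model is settled (tracked in issue #123)
        raise NotImplementedError("my_fancy_function: see issue #123")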


>... CAN is one of the least secure BUS's imaginable.

Can you elaborate on this?


No security or authentication. IIRC, any attached device has full control over the physical layer by design. I don't really see why this is a problem in a car with isolated buses for safety-critical components.


Yeah... that's like saying a car has a security risk because someone can go in the wheel well and slash the brake lines...


Yeah, that is a security risk

We choose to ignore it because it's rarely exploited, but by that logic, we should just ignore all zero-days.


Many newer vehicles contain systems, OEM and/or after-market, that are permanently connected to the internet via cellular modem. Other systems with insecure RF tech used for various gimmicks. Other systems that communicate with external and potentially malicious devices like chargers. Etc. Most of these systems have enough access to (in)directly destroy or booby trap the car. My car is able to receive ECU(!), firmware, software updates OTA from the manufacturer. These critical systems are just as "closed" or "isolated" as cloud-enabled "CCTV." Scary stuff.


A very simple tool for transferring files between two computers. Linux <-> Windows should just work, UPnP handling, etc. - just a crazy simple way to transfer a file without setting up Dropbox, FTP servers or whatnot!


For files that are not private I use: https://btorrent.xyz/

People usually recommend https://file.pizza but it just doesn't work for me.

Now that I think about this it would be beautiful if there was something like stunnel but for NAT hole punching and what not. You would do:

  $ send-file FILE
  qk11c14pmq7maeod
  $
You would send the unique id (which could also be something that gives a QR code to scan, or something like random words) over a separate channel. Then you would run "receive-file qk11c14pmq7maeod" on the other end. It would be a simple binary that would be easy to embed and call through some pretty UI. Generation and scanning of those QR codes would probably have to be part of the package. The problem would be who would finance the servers needed for NAT traversal?


You're looking for croc or magic-wormhole.
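
For example, magic-wormhole follows pretty much exactly that pattern (the code shown below is just an illustration; croc works similarly):

    $ wormhole send big-file.bin                # prints a short human-readable code
    $ wormhole receive 7-crossover-clockwork    # run on the other machine, using that code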


It is strange that most of these tools seem to involve sending your data half way around the world. Guess local doesn't pay?

I just use good old SMB/network sharing - it's not much harder than setting up some program on two devices. Works between Windows/Linux/Android, fast and good enough.



Set up SCP on your Windows box; everything else already "just works".


It's not a CLI tool, but https://snapdrop.net/ does local-network file transfer.


transfer.sh seems pretty close to what you’re looking for


Doesn't seem to solve the current problem, but definitely looks like a useful tool since I sometimes need to do timing diagrams as well. Good recommendation!


It's great! Lacks nothing.

On a related note: a friend of mine once said that an architect (the kind that designs buildings) shouldn't be allowed to be older than eight. Still feels like low-hanging fruit to make the world better.


It is not uncommon that older/senior developers hold an undeserved grudge against STL. STL was far from mature in the 90s, sometimes even rather bad (at least in Windows/Visual Studio). But that is simply not the case anymore. Modern STL is well-written and highly optimized IMHO.

Sure, it is not as "complete" as standard libraries in Python, Go, etc. It is rather a different kind of beast than those standard libraries, not as high-level, more building blocks oriented.


The allocation patterns that the STL requires/encourages, as well as the usability side (error messages) and the compile speeds, probably cannot improve unboundedly, given that the API has to stay the same.


I am not an STL fan, but allocation shouldn't be a problem anymore with the advent of polymorphic allocators. At least in greenfield projects.
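
A minimal sketch of what that can look like with C++17's <memory_resource> (the buffer size here is arbitrary):

    #include <cstddef>
    #include <memory_resource>
    #include <vector>

    int main() {
        // Stack buffer used as an arena; the resource falls back to the
        // default upstream allocator if the buffer runs out.
        std::byte buffer[1024];
        std::pmr::monotonic_buffer_resource pool{buffer, sizeof(buffer)};

        // Same std::vector interface, but allocations come from the pool.
        std::pmr::vector<int> v{&pool};
        for (int i = 0; i < 100; ++i) v.push_back(i);
    }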


IME the main problem of the C++ stdlib isn't the implementation quality, but the interface design. It must be everything to everybody, but at the same time doesn't provide much control over the internal behaviour. And the interfaces can't be changed because of source code and binary compatibility requirements.

In many (most?) cases, writing your own stdlib alternatives still makes a lot of sense.


From my point of view the STL still suffers from a major flaw versus the compiler frameworks from the 1990's.

The whole team needs to care about secure code to turn on checked iteration in release builds, or write their own wrappers if portability to compilers without such support is a concern.


I've been working on the engine control unit for trucks. I think the fastest control loops ran at 100 Hz (but only a few of them), at fixed time intervals. Then, of course, you have the fuel injection, which is controlled by its own processor, independent of the ECU scheduling.

Nowadays I work with satellite SW. Most of the control loops are pretty slow. The fastest ones are those controlling gyros and reaction wheels, which run at 5 or 10 Hz.


PID control loops on brushless motor controllers typically operate in the 5kHz-50kHz range.

