
"Get your head examined. And get the fuck out of here with this shit."

-- Kent Overstreet, bcachefs maintainer: https://lore.kernel.org/all/citv2v6f33hoidq75xd2spaqxf7nl5wb...


Sadly, my nonacore phone's soul is too heavy (or too light?) to visit that site.


Worked on my OnePlus 8T running LineageOS with no gapps, using Fennec F-Droid for the browser. Could you have a user agent or an extension[1] it didn't like? I know that on my PC, GNOME's GitLab specifically required me to change my UA to get it to load, while every other Anubis config I've come across has been fine. I do notice LKML's instance is using difficulty 4, though, which I think is higher than I usually see.

[1] Due to an unrelated bug in the latest Fennec, I currently have all my extensions disabled, or else pages stop loading entirely. I normally use uBlock Origin, Tampermonkey, etc.



Thanks for that link, I dig this kind of kernel drama


When I was learning Linux back in the day, one of the most beneficial things I did was to go through the Vim tutorial and learn to use it properly. I'm no master at it, but oh boy, that time spent has paid dividends down the line.


The Neovim Tutor is more comprehensive than the old school Vim Tutor. I recommend that people who want to get fast with the key commands go through it repeatedly until most of it becomes part of their muscle memory. When this happens, the learning curve starts looking a lot more approachable and less daunting.


OSs need to stop letting applications have free rein over all the files on the file system by default. Some apps come with AppArmor/SELinux profiles, and Firejail is also a solution. But the UX needs to change.
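
For example, a minimal sketch with Firejail (real flags; the app name is a placeholder):

    # Give the app a throwaway home directory instead of your real one
    firejail --private ./untrusted-app
    # Or confine its writes to one dedicated directory that persists
    firejail --private=~/jails/untrusted-app ./untrusted-app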


This is a huge issue and it's the result of many legacy decisions on the desktop that were made 30+ years ago. Newer operating systems for mobile like iOS really get this right by sandboxing each app and requiring explicit permission from the user for various privileges.

There are solutions on the desktop like Qubes (but it uses virtualization and is slow, also very complex for the average user). There are also user-space solutions like Firejail, bubblewrap, AppArmor, which all have their own quirks and varying levels of compatibility and support. You also have things like OpenSnitch which are helpful only for isolating networking capabilities of programs. One problem is that most users don't want to spend days configuring the capabilities for each program on their system. So any such solution needs profiles for common apps which are constantly maintained and updated.

I'm somewhat surprised that the current state of the world on the desktop is just _so_ bad, but I think the problem at its core is very hard and the financial incentives to solve it are not there.


If you are on Linux, I'm writing a little tool to securely isolate projects from each other with podman: https://github.com/evertheylen/probox. The UX is an important aspect which I've spent quite some time on.

I use it all the time, but I'm still looking for people to review its security.
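
For context, the rough shape of the idea (a generic rootless-podman sketch, not probox's actual interface):

    # Mount a single project into a rootless container; the rest of $HOME stays invisible
    podman run --rm -it --userns=keep-id \
        -v "$HOME/projects/foo:/work:Z" -w /work \
        docker.io/library/fedora:latest bash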


Containers should not be used as a security mechanism.


I agree with you that VMs would provide better isolation. But I do think containers (or other kernel techniques like SELinux) can still provide quite decent isolation with a very limited performance/ease-of-use cost. Much better than nothing I'd say?


I would kinda disagree with this. The whole "better than nothing" attitude is what gave a huge chunk of people a false sense of security wrt containers to begin with. The reality is that there is no singular create_container(2). Much of the "security" is left up to the runtime of choice and the various flags it chooses or doesn't choose to enable. Others in this thread have already mentioned both bubblewrap and podman. The fact that the underlying functionality is exposed very differently through different "runtimes", with numerous optional flags and such, is what leads to all sorts of issues, because there simply was no thought given to designing these things with security in mind. (We just saw CVE-2025-9074 last week.) This is very different from something like the V8 sandbox or gVisor, which were designed from the start with specific security properties.
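
To make that concrete (both invocations are valid podman; the point is how much the flags matter):

    # Same image, very different security posture depending on runtime flags
    podman run --rm -it docker.io/library/alpine sh
    # vs. near-host access:
    podman run --rm -it --privileged --pid=host docker.io/library/alpine sh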


It’s a gradient. An airgapped physical device is better than a VM. A VM is better than podman. Podman is better than nothing.

A locked door is better than an unlocked one, even if it gives its owner a false sense of security. There is still non-zero utility there.


This is also my impression. Containers aren't foolproof; there are ways to escape from them, I guess? But surely it's practically more secure than not using them? Your project looks interesting; I will take a look.


Google did a good job with securing files on Android.


Learn to use bubblewrap with a small chroot.
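
Something like this, for instance (a sketch; the bind paths vary by distro and the app name is a placeholder):

    # Read-only /usr, fresh tmpfs home, no network, dies with the parent shell
    bwrap --ro-bind /usr /usr --symlink usr/bin /bin --symlink usr/lib64 /lib64 \
          --proc /proc --dev /dev --tmpfs /home --unshare-all --die-with-parent \
          /usr/bin/untrusted-app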


Bubblewrap has refused to fix known security issues in its codebase and shouldn't be used.


Which operating system lets an application have "free rein over all the files on the file system by default"? Neither Linux, nor any BSD, nor macOS, nor Windows does. For any of those I'd have to do something deliberately unsafe, such as running it under a privileged account (which is not the "default").


I would argue the distinction between my own user and root is not meaningful when they say "all files by default". As my own user, it can still access everything I can on a daily basis, which is likely everything of importance. Sure, it can't replace the sudo binary or something like that, but that doesn't matter because it's already too late. Why, when I download and run Firefox, can it access every file my user can access, by default? Why couldn't it work a little closer to Android, with an option for the user to open up more access? I think this is what they were getting at.


Flatpak allows you to limit and sandbox applications, including files inside your home directory.

It's much like an Android application, except it can feel a little kludgy because not every application seems to realize it's sandboxed. If you click save and it fails silently because the app didn't have write access there, that isn't very user friendly.
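
Tightening a specific app looks roughly like this (real flatpak override flags; Firefox's app ID as the example):

    # Revoke blanket home access, then grant back a single folder
    flatpak override --user --nofilesystem=home org.mozilla.firefox
    flatpak override --user --filesystem=~/Downloads org.mozilla.firefox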


I'm not saying user files aren't important. What I am saying is the original poster was being hyperbolic and, while you say it's not important for your case, it is a meaningful distinction. In fact, that's why those operating systems do not allow that.


Because it would become impractical. It's like saying your SO shouldn't have access to your bedroom, or the maid should only have access to a single room. Instead, what you do is have trusted people and put everything important in a safe.

In my case, I either use apt (pipx for yt-dlp), or use a VM.


I don't agree that the only options are "give it almost everything" or "give it nothing and now it's a huge pain in the arse", which seems to be what you implied. I do think there are better middle grounds where an app almost always works out of the box but also can't access almost everything on the system. There are also UI changes that can help deal with this, like Android's permission prompts.


How many software installation instructions require "sudo"? It seems to me that it's many more than should be necessary. And then the installer can do anything.

As an administrator, I'm constantly being asked by developers for sudo permission so they can "install dependencies", and my first answer is "install it in your home directory". Sure, it's a bit more complexity to set up your PATH and LD_LIBRARY_PATH, but you're earning a six-figure salary; figure it out.
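
The usual recipe is something like this (autotools shown as an assumed example; adjust per project):

    # Install into ~/.local instead of /usr (no sudo needed)
    ./configure --prefix="$HOME/.local" && make && make install
    # Then, in ~/.bashrc or equivalent:
    export PATH="$HOME/.local/bin:$PATH"
    export LD_LIBRARY_PATH="$HOME/.local/lib:$LD_LIBRARY_PATH"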


Even with sudo, macOS blocks access to some user-accessible locations:

    % sudo ls ~/Pictures/Photos\ Library.photoslibrary
    Password:
    ls: /Users/n1503463/Pictures/Photos Library.photoslibrary: Operation not permitted


Even just having access to all the files that the user has access to is really too much.


https://www.xkcd.com/1200/

All except macOS let anything running as your uid read and write all of your user’s files.

This is how ransomware works.


You forgot the actually secure option: https://qubes-os.org


The multi-user security paradigm of Unix just isn't enough anymore in today's world of single-user machines running untrusted apps.


I was going to recommend that exact podcast episode but you beat me to it. Totally worth listening, especially if you're interested in software bugs.

Another interesting fact mentioned in the podcast is that the earlier (manually operated) version of the machine did have the same fault. But it also had a failsafe fuse that blew so the fault never materialized. Excellent demonstration of the Swiss Cheese Model: https://en.wikipedia.org/wiki/Swiss_cheese_model


>> the real failure in the story of the Therac-25 from my understanding, is that it took far too long for incidents to be reported, investigated and fixed.

> the earlier (manually operated) version of the machine did have the same fault. But it also had a failsafe fuse that blew so the fault never materialized.

#1 virtue of electromechanical failsafes is that their conception, design, implementation, and failure modes tend to be orthogonal to those of the software. One of the biggest shortcomings of Swiss Cheese safety thinking is that you too often end up using "neighbor slices from the same wheel of cheese".

#2 virtue of electromechanical failsafes is that running into them (the fuse blew, or whatever) is usually more difficult for humans to ignore. Or at least it's easier to create processes and do training that actually get the errors reported up the chain. (Compared to software, where the worker bees all know you gotta "ignore, click 'OK', retry, reboot" all the time if you actually want to get anything done.)

But, sadly, electromechanical failsafes are far more expensive than "we'll just add some code to check that" optimism. And PHBs all know that picking up nickels in front of the steamroller is how you get to the C-suite.


When I worked at an industrial integrator, we had a hard requirement for hard-wired e-stop circuits run by safety relays separate from the PLC. Sometimes we had to deal with dangerous OEM equipment that had software interlocks, and the solution was usually just to power the entire offending device down when someone hit an e-stop or opened a guarding panel.

About a decade ago a rep from Videojet straight up lied to us about their 30W CO2 marking laser having a hardware interlock. We found out when - in true Therac-25 fashion - the laser kept triggering despite the external e-stop being active due to a bug in their HMI touch panel. No one noticed until it eventually burned through the lens cap. In reality the interlock was a separate kit, and they left it out to reduce the cost for their bid to the customer. That whole incident really soured my opinion of them and reminded me of just how bad software "safety" can get.


To be fair, reps don't really know anything deep about their product. They just parrot what they are told (or they wing it, which, I guess, can amount to lying). They are pushed to sell, and they will say anything to sell.


Their competitor (and our preferred vendor at the time) was always forthright with us about the capabilities of their lasers.

> And PHBs all know that picking up nickels in front of the steamroller is how you get to the C-suite.

Blaming it on PHBs is a mistake. There were no engineering classes in my degree program about failsafe design. I've known too many engineers who were insulted by my insinuations that their design had unacceptable failure modes. They thought they could write software that couldn't possibly fail. They'd also tell me that they could safely recover and continue executing a crashed program.

This is why I never, ever trust software switches to disable a microphone, software switches that disable disk writes, etc. The world is full of software bugs that enable overriding of their soft protections.

BTW, this is why airliners, despite their advanced computerized cockpit, still have an old fashioned turn-and-bank indicator that is independent of all that software.


Failsafe design is actually really fun when you start looking at all the scenarios and such.

But one key component is that IF a failsafe is triggered, it needs to be investigated as if it had killed someone, because it should NEVER have triggered.

Without that part of the cycle, eventually the failsafe is removed or bypassed or otherwise ineffective, and the next incident will get you.


Most airplane crashes are due to multiple failures. The accidents are investigated, and each failure is addressed and fixed.

The result is incredible safety.


People know about that; what they forget is that any failure is noted and repaired (or deemed serviceable until repair).

Airplane reliability is from lots of failure analysis and work but also comprehensive maintenance plans and procedures.


Don't worry, we are poised to relearn all these lessons once again with our fancy new agentic generative AI systems.

The mechanical interlock essentially functioned as a limit outside of the control system. So you should build an AI system the same way: enforcing restrictions on the agent from outside the control of the AI itself. Of course, that doesn't happen, and devs naively trust that the AI can make its own security decisions.

Another lesson from that era we are re-learning: in-band signaling. Our 2025 version of the "blue box" is in full swing. Prompt injection is just a side effect of the fact that there is no out-of-band instruction mechanism for LLMs.

The good news is that it's not hard to learn the new technology when it's just a matter of rediscovering the same security issues under a new name!


All Scandinavian countries have stopped shipments to the USA due to this (except gifts valued under $100).


I've acquired so many tools like this, and I don't think I've ever looked at one as though I regretted the purchase. Many have enabled me to fix and make stuff down the line.


A bit like "There used to be a lot of corruption in politics. There still is, but there also used to be."


For me it's a combination of the nature of the task, where I'm at, and the nature and duration of the interruption. But usually an interruption incurs a large number of penalty points.


I apologize for the side question, but what are people using for a Go + SQLite cipher/encryption combination?


We can argue about whether it's feature creep and whether it's the right choice of formatter.

But let's appreciate that this will increase the use of formatters astronomically (or some fraction thereof), which is an excellent thing.

