This is a very strange distribution of projects. There are projects like VLC, Filezilla, and 7-zip, next to often mission-critical pieces of software, like Kafka, Tomcat, and GlibC. I wonder what went into the decision process to include each of these libraries.
I also dislike the 'bug bounty platforms'. Why can't I simply report it upstream and, if accepted, claim my prize? Each of the projects should have CVE protocols and procedures. The idea probably is to curb the zero-day vulnerability leaks, but I assume that if you're able to find a CVE, you're capable of finding a CVE procedure.
It seems like a rather logical distribution of projects if you consider the ratio of (installed base/developer interest). The projects on this list all have massive user bases, but few of them would garner much excitement on HN and they have relatively small developer communities.
Filezilla, Notepad++ and 7-zip aren't in themselves mission-critical, but they're hugely popular products. If you can pwn an office computer or a developer workstation, you've made a crucial step towards pwning something properly sensitive. Think about the IT guy in a typical medium-sized business or a government department - what are the first things he's going to install on his own work computer? After Microsoft Office and his browser, what programs will he most often use to open untrusted files from the internet? What happens to the department if a trojan on his machine starts feeding his passwords to the FSB or the PLA?
Of these, I'm pretty sure VLC is the most common software on end-user systems - and there are enough security advisories where a well-crafted video file can execute code with user privileges (like https://www.videolan.org/security/sa1801.html ). If you can automate that, you have access to many personal computers in the EU.
There's work going on (since 2016) to port the parsers to Rust[1]. I believe that a few already are written in Rust, and it'd be great if some Rust folks would help out with the effort.
GP isn't saying that VLC is unsafe, but rather that C (which VLC is written in) tends to be unsafe. Seeing as MPV is also written in C, it's absolutely the same in that regard.
> I wonder what went into the decision process to include each of these libraries.
The decision-making process was a survey [0]. The two criteria used were (1) usage of the software inside and outside the EU and (2) critical nature of the software for institutions and users.
Most probably these are tools commonly used by EU institutions which have a record of bugs that have caused them problems. The solution is to help fix those bugs by offering money. You are right though, I can't see how VLC can be as mission-critical as Kafka.
Most police forces use VLC to view CCTV recordings and other multimedia evidence. It's an entirely logical choice of software, but it presents an obvious risk in the current climate. I would imagine that many intelligence services use VLC for similar purposes.
A nation-state adversary with a VLC RCE 0day could do some serious damage; if they also have an 0day for a popular model of CCTV DVR, they've got the keys to the kingdom. Those DVRs will never get patched and a nation-state adversary could dream up all sorts of ways to induce a police officer or an intelligence agent to play a media file, but at least we can harden VLC.
A friend of mine spent last Christmas debugging an issue in memcpy in glibc (on Intel 32-bit CPUs). Glibc is less well tested than I expected, and has ASM implementations of many functions for many CPUs, some of which are (obviously) less well supported than others.
IA32 is probably not getting much focus from devs and users these days; still surprising, however... Do you have a link to the issue, out of curiosity?
There was a bit of a scare around a 7-zip vulnerability earlier this year. [0] Turns out 7-zip is embedded inside a lot of other programs making those vulnerable too.
There's a distinction between your examples: the first are user tools, the latter are backend applications or libraries.
My guess is that the main objective is to address user-visible bugs. While a glibc bug is certainly impactful, it is usually solvable before it gets too widespread.
(And as much as it's "not the right way", higher-level apps work around it before it is fixed.)
>This is a very strange distribution of projects. There are projects like VLC, Filezilla, and 7-zip, next to often mission-critical pieces of software, like Kafka, Tomcat, and GlibC. I wonder what went into the decision process to include each of these libraries.
The EU (Brussels offices, etc) actually using them?
Sure, but there's a difference between "yea, we like 7-zip, let's put some money into it" and "yea, we use Tomcat to actually run our apps connected to the DB, might be nice if it got a bit of patching" (and funnily enough, some of the user-centric apps have more funding than some of the backend, mission-critical SW).
My evaluation of the benefit is completely opposite to yours. An exploitable bug in 7-zip has a much higher impact than a bug in Tomcat. Tomcat is running somewhere in the backend so an exploitable bug is not usually usable as a direct attack. A bug in 7-zip can suddenly create a bunch of ransomware attacks just by distributing malicious files.
We have a mountain of C code running in the wild parsing binary formats that's in real need of some fuzzing or ideally replacement by safer languages.
The thing is, "somewhere in the backend" is generally accessible from the internet, and vulnerable to attackers (so you need only a maliciously crafted packet, or something similar); whereas for 7-zip vulnerability, there must be: a) a maliciously crafted zip file, b) a user who wilfully opens it.
What's more, getting into one's backend servers/gaining some kind of access to DB, config files of the machine, etc. is, in my mind, just infinitely worse than gaining access to a computer of a person/uploading some ransomware/something similar.
We're just probably working with different SW, so we both see the thing that touches us the most as the problem... :))
> The thing is, "somewhere in the backend" is generally accessible from the internet
If this is the case you have much bigger problems that a bug bounty won't fix.
> in my mind, just infinitely worse than gaining access to a computer of a person/uploading some ransomware/something similar
That depends heavily on what the backend server is. There are plenty of databases where a hack is irrelevant because the data is public and there are backups. Meanwhile most people have poor backups and a hack can be incredibly damaging.
>we both see the thing that touches us the most as the problem
I think you're heavily discounting the risk that all these code bases in general usage pose. I've fuzzed C++ binary parsing code on just a laptop and was amazed at how many crashing bugs I was able to find in a short amount of time. Many of those were probably easily exploitable.
Gov agencies do accept zip files from the general population. Send them something, they'll have to open it to respond to your request... Bam, you've broken into a PC with a sensitive system inside a gov network.
> We have a mountain of C code running in the wild parsing binary formats that's in real need of some fuzzing or ideally replacement by safer languages.
In an ideal world that's what would happen, but even if there were the will and the money, it would take decades to replace all of this stuff in practice.
Sometimes when I'm feeling pessimistic I don't think we can ever truly secure (to a reasonable standard) anything.
>In an ideal world that's what would happen, but even if there were the will and the money, it would take decades to replace all of this stuff in practice.
In a previous discussion here someone pointed out you could actually compile C with hardening for out of bounds accesses for example. So maybe we need to isolate those input paths in programs and harden them.
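As a rough illustration of what that hardening buys you (assuming gcc with AddressSanitizer support is installed; the filenames and the deliberately buggy program are made up for the demo):

```shell
# A tiny program with a heap out-of-bounds write:
cat > oob.c <<'EOF'
#include <stdlib.h>
#include <string.h>
int main(void) {
    char *buf = malloc(8);
    memcpy(buf, "0123456789", 10);  /* writes 2 bytes past the end */
    free(buf);
    return 0;
}
EOF

# Plain build: the overflow is undefined behaviour and usually silent.
gcc -O2 -o oob oob.c
./oob; echo "plain exit: $?"

# Hardened build: AddressSanitizer aborts at the exact bad write.
gcc -O1 -g -fsanitize=address -o oob_asan oob.c
./oob_asan 2>/dev/null; echo "asan exit: $?"

# For release builds, cheaper options like -D_FORTIFY_SOURCE=2 and
# -fstack-protector-strong catch a subset of these at low runtime cost.
```

ASan is usually considered too slow to ship, which is why isolating the input-parsing paths and hardening just those, as you suggest, is attractive.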
>Sometimes when I'm feeling pessimistic I don't think we can ever truly secure (to a reasonable standard) anything.
I don't think we can either. In part it's just economics, the cost/value of the exploits is just too high for low-value targets. But it's yet another of the reasons I don't see how cryptocurrency ecosystems can really work. The security of the end-points is just way too low for me to trust that kind of thing.
For me, the biggest advantage of big bug bounty programs is the ease of reporting something. Not every piece of software has a documented procedure for security reports.
For those who wish to get credit for their findings, those bug bounty sites help too.
Overall, though, this is great of course.