I think this is a really nice and important project, and at a cursory glance the design looks sane from a crypto perspective. But I question the basic UX design.
The attestation system seems like it's reproducing one of GPG's UX problems: you have these categories of attestation, and it's seemingly pretty sane as long as everyone uses the attestations right (and there are enough players in the ecosystem doing the work of attesting). GPG's trust levels have the same idea, but in reality people often just pull keys from some public keyserver and then assign them Full trust.
I also think the true meaning of attestations is a bit murky. The `spot-check` and `code-review` attestations are about source code, `reproduced` is about the build artifact, and `sec-audit` is somewhere in the middle (ideally both). But it seems like these attestations are always attached to artifacts, not source code? So spot-check and code-review are really only relevant if the build is reproducible (and has been attested as such), right? Since that's rarely-if-ever going to be the case in the real world, it seems like another reason the attestation system will likely be misused in practice.
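For illustration, the way I read the posts, an attestation boils down to roughly the following; the field names are my own, not necessarily Gossamer's actual schema:

```php
<?php
// Hypothetical attestation record (field names invented for illustration;
// not necessarily Gossamer's actual schema). Note that the subject is the
// *artifact* digest, not a source commit, which is why spot-check and
// code-review only tell you much once the build is reproducible.
$attestation = [
    'attestor'  => 'example-security-vendor',  // hypothetical reviewer identity
    'type'      => 'code-review',              // spot-check | code-review | reproduced | sec-audit | vote-against
    'package'   => 'vendor/some-library',
    'version'   => '2.4.1',
    'artifact'  => 'sha384:<digest of the release tarball>',  // placeholder digest
    'issued_at' => '2022-04-01T12:00:00Z',
];
```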
Finally, although I in-theory admire the goal of allowing the user to define their own policy about trusting updates (defining how many attestations of which types are required/sufficient), my experience in software update systems tells me that real people absolutely won't do this. What will happen, if Gossamer sees good adoption, is that (1) there will be a standard trust config that gets distributed and reproduced, (2) everyone will use that config, and (3) it will be very permissive, because users don't want updates to be delayed/denied. The authors seem to envision a world where there is an ecosystem of independent security vendors out there doing reviews and publishing attestations, but don't really provide any compelling reason why that world will spring into existence.
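To be concrete about what I mean by "policy", I'm picturing users being expected to maintain something like the sketch below. The structure is invented, not taken from the Gossamer spec, but it shows why I expect everyone to copy a permissive default rather than a strict one:

```php
<?php
// Hypothetical per-user trust policy (made-up structure, not the actual Gossamer config).
// My prediction is that the widely-copied default ends up looking like the
// "permissive" branch, because nobody wants their updates blocked.
$policy = [
    'strict' => [
        'require'  => ['reproduced' => 1, 'code-review' => 1], // at least one of each
        'block_on' => ['vote-against' => 1],                    // any negative attestation blocks
    ],
    'permissive' => [
        'require'  => [],                                       // any signed release goes through
        'block_on' => ['vote-against' => 3],                    // only block on overwhelming consensus
    ],
];
```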
>I also think the true meaning of attestations is a bit murky. The `spot-check` and `code-review` attestations are about source code, `reproduced` is about the build artifact, and `sec-audit` is somewhere in the middle (ideally both). But it seems like these attestations are always attached to artifacts, not source code? So spot-check and code-review are really only relevant if the build is reproducible (and has been attested as such), right? Since that's rarely-if-ever going to be the case in the real world, it seems like another reason the attestation system will likely be misused in practice.
This probably should be made clearer, but: Reproducible builds are necessary for the security of any such system. It's outlined in earlier blog posts.
Consequently, the inclusion of reproducible build verification is taken as a premise.
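As a rough sketch of what that premise means for a client (an assumed workflow, not Gossamer's actual verifier): rebuild the package from the pinned source, then refuse anything whose digest doesn't match the published artifact.

```php
<?php
// Minimal sketch of reproducible-build verification (assumed workflow, not
// Gossamer's actual code): hash the published artifact and the local rebuild,
// and only accept the update if the digests match.
function verifyReproducible(string $publishedArtifact, string $locallyRebuiltArtifact): bool
{
    $published = hash_file('sha384', $publishedArtifact);
    $rebuilt   = hash_file('sha384', $locallyRebuiltArtifact);
    if ($published === false || $rebuilt === false) {
        throw new RuntimeException('Could not hash one of the artifacts.');
    }
    // hash_equals() compares in constant time; overkill here, but a good habit.
    return hash_equals($published, $rebuilt);
}

// Usage (paths are placeholders):
// verifyReproducible('dist/pkg-1.0.0.tar.gz', 'local-build/pkg-1.0.0.tar.gz');
```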
> The authors seem to envision a world where there is an ecosystem of independent security vendors out there doing reviews and publishing attestations, but don't really provide any compelling reason why that world will spring into existence.
That's true, and should probably be tackled in a future blog post.
>This probably should be made clearer, but: Reproducible builds are necessary for the security of any such system. It's outlined in earlier blog posts.
>Consequently, the inclusion of reproducible build verification is taken as a premise.
That is a bit funny. We have had issues making the PHP package reproducible on Arch Linux for years, and I believe upstream has rejected patches dealing with the uname that gets embedded into built artifacts.
I'm a bit unsure about the premise, considering upstream doesn't seem interested in solving this?
In "What Happens if a Third Party is Compromised?" they mention you cannot revoke an attestation once it's committed. What happens if a security audit is conducted, concludes, but a vuln is found a few years down the line?
Security audits are not perfect. Is there a way to say "new information has come up, and this version is no longer secure?"
Even with perfect security audits, such a feature might be useful, e.g. when you conducted a security audit but new Spectre-like attack vectors are found afterwards.
It's not about "valid" or "no more valid". It's about context.
If you want to distrust a security vendor for greenlighting something that was found to be vulnerable the following week, you'd probably be in the clear.
If it was 6 years ago? Maybe don't count that against them; especially if it's a novel vulnerability that was discovered.
But also, if you're running 6 year old software, maybe update to a newer version of it.
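In pseudo-policy terms, the judgment call looks something like this. Entirely illustrative, not part of the spec:

```php
<?php
// Illustrative only: weighing an old positive attestation against newly
// disclosed vulnerability information, rather than "revoking" the attestation.
function shouldDistrustAttestor(
    DateTimeImmutable $attestedAt,
    DateTimeImmutable $vulnDisclosedAt,
    bool $novelAttackClass  // e.g. a Spectre-like vector nobody could have caught
): bool {
    $daysBetween = $attestedAt->diff($vulnDisclosedAt)->days;
    // Greenlit something that blew up within a month, and it wasn't a novel
    // class of attack? That reflects poorly on the attestor.
    return $daysBetween <= 30 && !$novelAttackClass;
}

// Usage: an audit from 2016 followed by a 2022 disclosure should not count against them.
// shouldDistrustAttestor(new DateTimeImmutable('2016-05-01'), new DateTimeImmutable('2022-05-01'), false); // false
```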
I don't get it, who is going to pay for the time and energy required to audit everything? I presume the big package maintainers already have eyes on their stuff - symfony etc.
In theory we should always diff upstream package changes - in practice hardly anybody does. There's a trade-off between using OSS code and the cost of maintaining it yourself.
That said, I guess this adds an option to the ecosystem that wasn't there, or even possible, before.
I do wonder if companies like snyk could somehow incorporate some sort of assurance model into their service. Hashing and lock files are a pretty common practice and perhaps for the bigger packages they could maintain a list of trusted hashes.
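I'm imagining something like the sketch below, where the client checks a download against that vendor-published list of digests. The names and data are invented; this isn't an existing Snyk or Composer feature:

```php
<?php
// Sketch of checking a downloaded artifact against a vendor-maintained allowlist
// of trusted digests (hypothetical data and workflow).
$trustedHashes = [
    'vendor/some-library:2.4.1' => 'placeholder-sha384-digest-published-by-the-vendor',
];

function isTrusted(array $trustedHashes, string $package, string $version, string $artifactPath): bool
{
    $digest = hash_file('sha384', $artifactPath);
    $key    = $package . ':' . $version;
    return isset($trustedHashes[$key])
        && $digest !== false
        && hash_equals($trustedHashes[$key], $digest);
}
```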
Imagine a scenario where an OSS developer with a package with millions of downloads a week is getting paid a few beers worth of money a month for all their work. And then a company like Snyk is building a verification service on top of this and making bank. Do you not think the money should be going to the OSS community?
These are two separate things, and it's perilous to conflate them.
The OSS developer is providing software that anyone can use under whatever license terms for free. How they monetize this is entirely their responsibility. Choosing a permissive license makes them generally indistinguishable from the developers who don't want to monetize their work at all. Solving the "how do we ensure they get paid?" problem is nontrivial, but certainly out of scope for this discussion.
The verification service is provided by a company to protect their customers from malicious changes to said OSS software. (Yes, even if they were deliberate changes by the original developer!)
In some sense, you could try to frame the verification services as somehow predatory, but that's like saying that safety inspectors are predatory to independent carpenters.
(I'm not happy with that last analogy, but it's the best I could come up with on the spot. Real-world analogies to software problems are always messy, so feel free to suggest a better one if you think of any.)
I'm afraid we'll have to agree to disagree on this one Scott, on a philosophical level.
We have companies like pullrequest.com monetizing code quality (currently only private but I presume they'll enter the OSS market at some point), and then other companies who are monetizing security audits. None of this money trickles down to the distributed teams of developers writing the software in the first place. This makes me uncomfortable - and you are right we do have to find ways to help with this.
I think I understand why it makes you uncomfortable.
I do think they're two separate problems that must be solved independently. Left unsolved, what you're experiencing is mostly bad optics rather than one problem actively feeding off the other. It's a bad look, and it leaves a bad taste in one's mouth.
That being said, we both agree it's worth solving. However, I'm not an economist, by any measure, so I don't have any insight into what a solution looks like.
> I don't get it, who is going to pay for the time and energy required to audit everything?
Not everything has to be audited. That's why there are different levels of attestations.
In terms of economic incentives: If you're a company bit by one of the recent supply chain issues (colors.js, etc.), you might be able to justify hiring a security vendor to audit the code that your company depends on. This would provide a net-positive benefit to the entire ecosystem, even if it's only a small set of audited code.
Maybe one day, we can even make this an expectation of large players. But that's a discussion for down the road.
On the opposite end of things, you have independent security consultants that want to establish their reputation so they can get paid engagements with software companies.
One avenue available to everyone is to review open source software and report vulnerabilities to the maintainers. This can be thankless or even traumatic; e.g. https://github.com/opencart/opencart/pull/1594
Gossamer would open an alternative approach: Hang your shingle out by publishing negative (vote-against) attestations of vulnerable versions of open source software and positive attestations (e.g. code-review) of the versions that mitigated the issues they disclosed. Anti-malware vendors (e.g. WordFence) could even issue weaker positive assertions (spot-check) for WordPress plugin/theme updates after vetting the known-good releases. Security companies depend heavily on their ability to earn trust to thrive, and that's a hard market to break into; this offers another way in.
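Concretely, I'd picture a researcher publishing a pair like this after a disclosure. The structure here is invented for illustration, not Gossamer's wire format:

```php
<?php
// Illustrative pair of attestations a researcher might publish after a disclosure:
// a negative attestation for the vulnerable release, and a positive one for the fix.
$attestations = [
    [
        'type'    => 'vote-against',
        'package' => 'vendor/widget',
        'version' => '1.8.0',
        'note'    => 'SQL injection in the search endpoint (reported to the maintainer)',
    ],
    [
        'type'    => 'code-review',
        'package' => 'vendor/widget',
        'version' => '1.8.1',
        'note'    => 'Reviewed the fix; the injection vector is mitigated',
    ],
];
```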
In short, the economic challenges you're imagining aren't the ones that this project will face. (Although, there will assuredly be challenges.)
Companies acting in their own self-interest can be leveraged to cover the hot paths of the universal dependency graph, and security upstarts can be leveraged to cover their blind spots. Given enough time, the ecosystem will eventually reach some sort of equilibrium, and many new opportunities will be created in the process.
> I presume the big package maintainers already have eyes on their stuff - symfony etc.
Just because they have eyes on their stuff doesn't mean that those eyes have the necessary domain-specific expertise to identify problems. If it weren't for Paragon (paragonie-security on Github) and their associates in the security industry, the issues identified in the earlier versions of the module would likely have persisted and been shipped.
> Hang your shingle out by publishing negative (vote-against) attestations of vulnerable versions of open source software and positive attestations (e.g. code-review) of the versions that mitigated the issues they disclosed.
I don't know. I have sat here rewriting this comment now about 4-5 times. I guess we'll see how it plays out but I don't share your optimism for the economic incentives being there for people to undertake this kind of work.
I do applaud you for this work though, and the efforts. The work you and Paragon have performed as part of the PHP ecosystem has been exemplary.
> Hang your shingle out by publishing negative (vote-against) attestations of vulnerable versions of open source software and positive attestations (e.g. code-review) of the versions that mitigated the issues they disclosed.
So you're imagining that a bunch of people trying to break into security work will do work for free in hopes of gaining potential employers'/clients' trust?
And you're imagining that this ecosystem of attestations will be seeded by a bunch of people looking to gain the community's trust?
So who audits the auditors? And how long do you expect it to take for a critical mass of trusted reviewers to be covering enough packages to solve open source supply chain security?
> So you're imagining that a bunch of people trying to break into security work will do work for free in hopes of gaining potential employers'/clients' trust?
We're talking about open source software. People are already doing this sort of free work. You run into them when you start a bug bounty program, or once you've created at least one open source package with a nontrivial userbase.
The way you're wording this sounds dangerously like I'm creating some barrier to entry to extract free labor out of people. Quite the opposite: I'm suggesting a mechanism for taking the free work people are already doing in the open source security space, and using it to build rapport with the market a security researcher is trying to break into.
If you're wondering how I would know about the motivations of someone trying to build a customer base out of free labor performed for open source software, take a look at... virtually everything publicly shared on paragonie.com. I'm speaking from experience. ;)
> And you're imagining that this ecosystem of attestations will be seeded by a bunch of people looking to gain the community's trust?
Not just people. Companies too! (I think most of us view them as separate things still?)
> So who audits the auditors?
The same people who make these kinds of decisions today, albeit far less formally than what I'm envisioning.
For the PHP ecosystem, you have the big players (WordPress, Drupal, Joomla, Magento) and frameworks (CodeIgniter, Symfony, Laravel, etc.) with dedicated security teams that field vulnerability reports from the larger community.
Beyond them, you have this large, distributed, ad hoc emergent network of security experts that have a loose consensus on whether or not a self-proclaimed security expert is credible. It's messy and uncoordinated and decentralized, and very imperfect.
> And how long do you expect it to take for a critical mass of trusted reviewers to be covering enough packages to solve open source supply chain security?
I don't have a time estimate on hand, due to how this will need to unfold. I don't expect to have "[solved] open source supply chain security" in any immediate sense. Going from "improved state of affairs" to "solved problem" is a long tail.
Marginally improving the security of the open source supply chain is trivial: any effort expended is more than is being done today. That's the dx part of the equation.
What I predict is as follows:
1. Highly impactful codebases (i.e. dependencies in the hot path of many projects' dependency graphs) will end up being covered by third-party reviewers.
2. A lot of niche codebases will be covered because of community interest or due to extant social relationships.
3. A large swath of what's left will remain uncovered by third-party review despite being open source.
Today, the software in category 3 is an unknown unknown. With Gossamer, it will become a known unknown. This is a meaningful step towards "solved problem", even if it doesn't prima facie solve it immediately.
This sounds brilliant and I see no immediate reason why something like this shouldn't be useful for most software ecosystems.
In addition to the pure security perspective, I also have a feeling that it might become a useful piece of the puzzle for solving open source funding.
Generally speaking, Transparency Logs for securing software distribution have been a research topic since around 2015; I also wrote my master's thesis on the subject.
Sigstore is a Transparency Log intended for provenance and software artifacts, with support for a few different kinds of build artifacts. The container ecosystem also appears to be embracing it.
A cool practical example is pacman-bintrans from kpcyrd, which throws Arch Linux packages on sigstore and (optionally) checks that each package is reproducible before installation.
https://defuse.ca/triangle-of-secure-code-delivery.htm was published in July 2014 and included Userbase Consistency Verification as a requirement... so I think that's the earliest recorded proposal to use transparency logs to solve this problem.
But I'm no internet historian, so I may have missed something.
I'm no historian either. I believe there are multiple overlapping efforts that have been cropping up over the years without necessarily being aware of each other.
It would be interesting to collect the published research and blogs and get an overview.
I think this type of attestation gets us part of the way there; however, the solution needs to be a bit more generalized to cover all the threats. At TestifySec we are working on an open source pluggable attestation framework with a Rego policy engine for verification.
A review attestation (as proposed in this article) is pretty interesting and is an attestor I will probably add to our project.