If we're flagkilling stories with workable paywall bypasses, you should ping hn@ycombinator.com (don't assume they see the thread), and I'd expect them to unkill it.
I think you'd have to ask Maxim. My take is he felt experimental features should not get CVEs, which isn't how the program works. But that's just my take. I'm the primary representative for F5 to the CVE program and I'm on the F5 SIRT, which handles our vuln disclosures.
I'm inclined to agree with your decision to create and publish CVEs for these, honestly. You were shipping code with a now-known vulnerability in it, even if it wasn't compiled in by default.
Incorrect. Features available to users still require a minimum, standard level of support. This is like the deceptive misnomer of "staging" and "test" environments that are provided to internal users and are, in all but name, used no differently than production.
If the feature is in the code that's downloaded, regardless of whether or not the build process enables it by default, the code is definitely being shipped.
This is an insane standard and attempting to adhere to it would mean that the CVE database, which is already mostly full of useless, irrelevant garbage, is now just the bug tracker for _every single open source project in the world_.
Why is it insane? The CVE goal was to track vulnerabilities that customers could be exposed to. It is used…in public, released versions. Why wouldn’t it be tracked?
It's in the published source code, as a usable feature, just flagged as experimental and not compiled by default. It's not like this is some random development branch. It's there, to be used en route to being stable. People will have downloaded a release tagged version of the source code, compiled that feature in and used it.
By what definition is that not shipped?
> I am actually completely shocked this needs to be explained. Legitimate insanity.
I've had an optional experimental feature marked with a CVE. It's not a big deal as it just lets folks know that they should upgrade if they are using that experimental feature in the affected versions.
Where did you get this info? It might be that the feature is actively being worked on and the DoS is a known issue which would be fixed before merge. Lots of projects have a contrib folder for random scripts and other things that wouldn't get merged without some review, but users are free to run the scripts if they want to. Experimental compile-time build flags are experimental by definition.
You're all missing the fact that the vuln is also in the NGINX+ commercial product, not just OSS - and NGINX+ has a different release model.
Being the same code it'd be darn strange to have the CVE for one and not the other. We did ask ourselves that question and quickly concluded it made no sense.
"made no sense" from a narrow, CVE announcement perspective, but Maxim disagrees from another perspective:
> [F5] decided to interfere with security policy nginx
> uses for years, ignoring both the policy and developers’ position.
>
> That’s quite understandable: they own the project, and can do
> anything with it, including doing marketing-motivated actions,
> ignoring developers position and community. Still, this
> contradicts our agreement. And, more importantly, I no longer able
> to control which changes are made in nginx within F5, and no longer
> see nginx as a free and open source project developed and
> maintained for the public good.
I'm not sure what "contradicts our agreement" means, but the simple interpretation is that he feels F5 has become too dictatorial toward the open source project.
The whole drama seems very short-sighted from F5's perspective. Maxim was working for you for free for years and you couldn't find some middle ground? I imagine there could have been some page on the free nginx project that listed CVEs that are in the enterprise product but that are not considered CVEs for the open source project given its stated policy of not creating CVEs for experimental features, or something like that.
To nuke the main developer, cause this rift in the community, and create a fork seems like a great microcosm of the general tendency of security leads to wield uncompromising power. I get it. Security is important. But security isn't everything and these little fiefdoms that security leads build up are bureaucratic and annoying.
I hope you understand that these uncompromising policies actually reduce security in the end, because 10X developers like Maxim will tend to avoid the security team and, in the worst case, hide stuff from it. I've seen this play out over and over in large corporations. In that sense, the F5 security team is no different.
But there should be a collaborative, two-way process between security and development. I'm sure security leads will say that they have that, but that's not what I find. Ultimately, if there's an escalation, executives will side with the security lead, so it is a de facto dictatorship even if security leads will tend to avoid the nuclear option. But when you take the nuclear option, as you did in this case, don't be surprised by the consequences.
OK - I need to make very clear that I'm speaking for myself and NOT F5, OK? OK.
Ask yourself why this matters? What is the big deal about having a CVE assigned? A CVE is just a unique identifier for a vulnerability so that everyone can refer to the same thing. It helps get word out to users who might be impacted, and we know there are sites using this feature in production - experimental or not. This wasn't dictating what could or could not go into the code - my understanding was the vuln wasn't even in his code, but from another contributor. So, honestly, how does issuing the CVEs impact his work, at all?
That's what I, personally, don't understand. At a functional level, this really has no impact on his work or him personally. This is just documentation of an existing issue and a fix which had to be made, and was being made, CVE or no CVE. And this is worth a fork?
What you're suggesting is the best thing to do is to allow one developer to dictate what should or should not be disclosed to the user base, based on their personal feelings and not an analysis of the impact of that vulnerability on said user base? And if they're inflexible in their view and no compromise can be reached then that's OK?
Sometimes there's just no good compromise to be reached and you end up with one person on one side, and a lot of other people on the other, and if that one person just refuses to budge then it is what it is. Rational people can agree to disagree. In my career there have been many times when I have disagreed with a decision, and I could either make peace with it or I could polish my resume. To me it seems a drastic step to take over something as frankly innocuous as assigning a CVE to an acknowledged vulnerability. Clearly he felt differently, and strongly, on the matter. Maybe he is just very strongly anti-CVE in general, or maybe he'd been feeling the itch to control his own destiny and this was just the spur it took to make the move.
His reasons are his own, and maybe he'll share more in time. I'm comfortable with my personal stance in the matter and the recommendations I made; they conform with my personal and professional morals and ethics. I'm sorry it came to this, but I would not change my recommendation in hindsight as I still feel we did the right thing.
Only time will tell what the results of that are. I think the world is big enough that it doesn't have to be a zero sum game.
I guess a vulnerability doesn't count unless it's default, lol. Just don't make it default and you never have any responsibility, nor do those who use it or use a vendor version that has added it to their product.
>I guess a vulnerability doesn’t count unless it’s default lol.
It's still being tested. It's not complete. It's not released. It's not in the distribution. The number of people who have this feature in the binary AND enabled is less than the number of people who agree that this should be a CVE.
CVEs are not for tracking bugs in unfinished features.
It IS in the code that anyone can compile to use or integrate into projects, as is the OSS way. Splitting hairs because it's not in the default binary is absurd. Guess all the extra FFmpeg compilation flags and such shouldn't count either.
(not explicitly asking you, MZMegaZone) Does anyone understand why a disagreement about this would be worth the extra work in forking the project?
I'm not very familiar with the implications, so it seems like a relatively fine hair to split - as though the trouble of dealing with these as CVEs would be worse than the extra work of forking.
It probably wasn't. There's likely something else going on. Either Dounin had already decided to fork for other reasons, and the timing was coincidental, or there were a lot of reasons building up, and this was the final straw.
Or he's just a very strange man, and for some reason this pair of CVEs was oddly that important to him.
This is also what I was wondering. The demo shows it recording a web browser, and I'm wondering if that is all it is doing. If so, wouldn't that mean creating a browser plug-in would make this possible on any platform?
I also don't understand the ChatGPT component, and what it is trying to tell him. Though I'm sure if you just threw the URL and the screenshot at ChatGPT, you could ask it questions about that source.
I'm not sure how useful this is tbh, or how I would use it. I'm not saying it isn't useful, just that I'm not sure how I would use it, or why it is useful.
> The demo is showing recording a web-browser
He said it's not recording video but taking a screenshot every 2 seconds, and I assume it's not just for a browser but covers all text on the desktop.
> I also don't understand the chatGPT component
You give it context from the "recording" and it answers questions you give it with that context info.
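For anyone curious what that loop looks like in practice, here's a rough sketch in Python using the mss and pytesseract libraries (hypothetical stand-ins - the actual project is a native macOS app, so this only illustrates the capture-and-OCR idea, not its real implementation):

    import time
    import mss                  # cross-platform screenshot library
    import pytesseract          # wrapper around the Tesseract OCR engine
    from PIL import Image

    INTERVAL_SECONDS = 2        # one capture every 2 seconds (0.5 fps)

    with mss.mss() as screen:
        while True:
            shot = screen.grab(screen.monitors[1])        # primary display
            img = Image.frombytes("RGB", shot.size, shot.rgb)
            text = pytesseract.image_to_string(img)       # OCR the frame
            print(time.strftime("%H:%M:%S"), text[:80])   # index/store this text
            time.sleep(INTERVAL_SECONDS)

The screenshot itself is cheap at that rate; the OCR pass is the expensive part, which is presumably where any battery concern would come from.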
Full disclosure, I haven't tested it on Intel, but I don't think it will be able to keep up with taking screenshots, generating ffmpeg videos, and doing OCR that often, and it will drain your battery very quickly.
But if you / someone can get it to be efficient enough, awesome!
I think you underestimate computers. Taking 2fps screen recordings is a trivial task. Doing OCR may be slightly more work, but at 2fps I doubt it is an issue. Worst case, you could tune the OCR frequency based on the computer's abilities.
You're confusing 2fps with 1-screenshot-every-2-seconds (or 0.5fps, which is what the README actually says).
I wouldn't be surprised if battery life takes a hit - it will likely result in at least some reduction, but perhaps not 30 or 50% at 0.5fps.
I haven't looked into the code, but if you're running ffmpeg, then battery life will likely take a hit depending on what exactly you're doing. Video encoding _can be_ heavy on the CPU/GPU.
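To make the encoding cost concrete, stitching periodic screenshots into a video with ffmpeg could look roughly like this (a hypothetical invocation from Python - the filenames, preset, and codec here are assumptions, not the project's actual settings):

    import subprocess

    # Encode a directory of PNG screenshots into an H.264 video at 0.5 fps
    # (one frame per 2 seconds). -preset ultrafast trades file size for
    # much lower CPU usage, which matters on battery.
    subprocess.run([
        "ffmpeg",
        "-framerate", "0.5",         # input rate: one frame per 2 s
        "-pattern_type", "glob",
        "-i", "screenshots/*.png",
        "-c:v", "libx264",
        "-preset", "ultrafast",
        "-pix_fmt", "yuv420p",       # broad player compatibility
        "screen_log.mp4",
    ], check=True)

At 0.5 fps the encoder does very little work per second of wall-clock time, so the bigger battery cost is probably the capture and OCR rather than the encode itself.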
I have to agree. If you're interested in supporting Intel (x86-64), it's open source, and you sound like you have the hardware to add and test Intel support.
Not supporting? The commenter simply said it may cause battery drain. It is a discussion on the topic (both sides based purely on conjecture), and a relevant one. You disagreeing does not mean others are "gate keeping". Stop trying to weaponize trendy language and white knight this thread.
The original README claimed that it relies on Apple Silicon and that the builds were configured to exclude other Apple platforms. I see it has now been greatly softened to "Only tested on Apple Silicon, and the release is Apple Silicon", which I think is quite reasonable.
I have no problem with not supporting a platform because you have no interest or for any other reason, but previously it seemed quite proud not to support it, which is different.
Apple's suit alleged 20 separate patent infringements relating to the iPhone's user interface, underlying architecture and hardware. Steve Jobs exclaimed, "We can sit by and watch competitors steal our patented inventions, or we can do something about it." The ITC rejected all but one of Apple's claims.
> July 2008 Apple Inc. vs Psystar Corporation
Apple Inc. filed suit against Psystar Corporation alleging Psystar sold Intel-based systems with Mac OS X pre-installed and that, in so doing, violated Apple's copyright and trademark rights and the software licensing terms of Apple's shrink wrap license.
> 2019, Apple v. Corellium
Apple sued security start-up Corellium for creating the first virtual iPhone-simulating software. The product was created with the intent of helping users research security issues in iOS. Apple's lawsuit argued that Corellium's product would be dangerous in the wrong hands, as it would let hackers learn exploits more easily, and claimed that Corellium was selling its product indiscriminately, even to potential competitors of Apple.
> Apple v. Samsung: Android phones and tablets
By August 2011, Apple and Samsung were engaged in 19 ongoing lawsuits in 12 courts in nine countries on four continents; by October, the fight expanded to 10 countries.
We take for granted how important SpaceX and Starlink are to America's political goals, so yes, I consider those entities conspiring against them to be treasonous.
I would love to hear more about how SpaceX and Starlink are important enough to America's political goals to justify prosecuting people in regulatory bodies for doing the jobs that the people we elected appointed them to do.
While that has certainly been true for the majority of the last two years, RPis in pretty much all SKUs have been back in stock for a few months now - to the point that even Digikey has many of the SKUs in bulk stock.
Oh, thanks! I've been watching that religiously, and while you are correct that the other Pis have been available for a couple of months now, the 2Ws have been absent. I haven't checked in a couple of weeks, so I'm happy to see they are finally in stock!
Edit: it's a bit premature. Only Chicago Distributors have them in stock at the moment and they are limiting the orders to 1, which is frustrating because I only need 2.
If you can sign a yearly bandwidth commit (not sure what the minimum requirement is, but 1 PB/year may be in the ballpark), you will get prices that are extremely competitive (maybe 90%+ off base list pricing?).
You said 90% off, which I've seen personally and which amounts to roughly 1c/GB (most list pricing starts at around 8-10c/GB in the US). The sibling says they somehow got 90% off that original 90%, which seems silly and definitely not something I would count on.
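Back-of-the-envelope, using only the figures mentioned in this thread (actual committed-use pricing is negotiated per contract, so these numbers are purely illustrative):

    # Rough annual cost of a 1 PB/year CloudFront commit at the rates cited above
    list_price_per_gb = 0.085        # ~8.5c/GB, typical US list rate
    committed_price_per_gb = 0.01    # ~1c/GB after a ~90% committed-use discount
    annual_transfer_gb = 1_000_000   # 1 PB/year (decimal units)

    print(f"list:      ${list_price_per_gb * annual_transfer_gb:,.0f}/yr")       # ~$85,000
    print(f"committed: ${committed_price_per_gb * annual_transfer_gb:,.0f}/yr")  # ~$10,000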
What would your hypothetical egress arbitrage look like? Keep in mind we are specifically talking about CloudFront bandwidth - so being able to route to an upstream without that upstream also paying for bandwidth is likely not possible.