
If these were faster than AES and as strong as AES, they would be replacing AES, not only being used for "lightweight devices unable to use AES"


They're faster than AES on their target platforms. It really feels like people are just trying to run with this out-of-context Matthew Green quote as if it were an axiom.


Rijndael (now AES) wasn't even the strongest finalist in the 2001 AES evaluation. It partially won by dint of being faster on contemporary x86 processors than Serpent or Twofish. Nowadays, it's faster on x86-64 processors because there's dedicated silicon for running it. Modern small platforms don't have this silicon and have different performance characteristics to consider. Also, without that dedicated silicon, implementations tend to be vulnerable to side-channel attacks that were unknown at the time.


> If these were faster than AES and as strong as AES […]

Not everything needs to be as strong as AES, just "strong enough" for the purpose.

Heck, the IETF has published TLS cipher suites with zero encryption, "TLS 1.3 Authentication and Integrity-Only Cipher Suites":

* https://datatracker.ietf.org/doc/html/rfc9150

Lightweight cryptography could be a step between the above zero and the 'heavyweight' ciphers like AES.


NULL ciphers in TLS are intended to enable downgrade attacks. Nothing else.

Same thing with weaker ciphers. They are a target to downgrade to, if an attacker wishes to break into your connection.


> NULL ciphers in TLS are intended to enable downgrade attacks. Nothing else.

Intended... Do any experts think that? Can you cite a couple? Or direct evidence of course.

Unless I'm missing a joke.


Thought this was common knowledge. When TLS 1.3 was standardized, it explicitly left out all NULL and weak ciphers (such as RC4). It also left out the weaker RSA/static-DH key exchange methods, so that easy decryption of recorded communication became impossible. The enterprises who would like to snoop on their employees, and the secret services who would like to snoop on everyone, reacted negatively to that and tried to reintroduce their usual backdoors, such as NULL ciphers:

https://www.nist.gov/news-events/news/2024/01/new-nccoe-guid... with associated HN discussion https://news.ycombinator.com/item?id=39849754

https://www.rfc-editor.org/rfc/rfc9150.html was the one reintroducing NULL ciphers into TLS 1.3. RFC 9150 was written by Cisco and ODVA, who previously made a fortune with TLS interception/decryption/MitM gear, selling to enterprises as well as (most probably; Cisco has been a long-time bedmate of the US government) spying governments. The RFC weakly claims "IoT" as the intended audience, citing cipher overhead; however, that is extremely hard to believe. They still do SHA-256 for integrity, they still do the whole complicated and expensive TLS dance, but then they skip encryption and break half the protocol along the way (since features like TLS 1.3 0-RTT need confidentiality). So why do the expensive TLS dance at all when you could just slap a cheaper HMAC on each message and be done? The only sensible reason is that you want something in TLS to downgrade to.
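For illustration, here is roughly what "a cheaper HMAC on each message" could look like, as a minimal sketch using Python's standard library (the shared key and its out-of-band distribution are my assumptions, not anything RFC 9150 specifies):

    import hmac
    import hashlib
    import os

    key = os.urandom(32)  # hypothetical pre-shared key, distributed out of band

    def tag(message: bytes) -> bytes:
        # Authenticate (but do not encrypt) a single message.
        return hmac.new(key, message, hashlib.sha256).digest()

    def verify(message: bytes, mac: bytes) -> bool:
        # Constant-time comparison to avoid timing side channels.
        return hmac.compare_digest(tag(message), mac)

    msg = b"valve=42"
    assert verify(msg, tag(msg))

A real deployment would still need sequence numbers or nonces against replay, which is part of what the TLS handshake buys you; the point is only that integrity alone doesn't require the full handshake.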


How exactly do NULL ciphers accomplish enterprise monitoring goals? The point of the TLS 1.3 handshake improvements was to eliminate simple escrowed key passive monitoring. You could have the old PKZip cipher defined as a TLS 1.3 ciphersuite; that doesn't mean a middlebox can get anybody to use it. Can you explain how this would get any enterprise any access it doesn't already have?


> How exactly do NULL ciphers accomplish enterprise monitoring goals?

I don't understand how this isn't obvious. Unencrypted means it is monitorable.


The presence of an insecure ciphersuite in the TLS standard does not in fact imply the ability of a middlebox to force that ciphersuite; that's kind of the whole point of the TLS protocol. So, I ask again.


Your first set of links seems to be about central key logging for monitoring connection contents? If there's stuff about null encryption in there I missed it. And even if there is, it all sounds like explicit device configuration, not something you can trigger with a downgrade attack.


Yes, my first link is about that. It illustrates and explains the push to weaken TLS 1.3 that was later accomplished by the re-inclusion of NULL ciphers.

And all the earlier weaker ciphers were explicit device configuration as well. You could configure your webserver or client not to use them. But the problem is that there are easy accidental misconfigurations like "cipher-suite: ALL", well-intended misconfigurations like "we want to claim IoT support in marketing, so we need to enable the IoT 'ciphers' by default!", and sneaky underhanded versions of both of those accidents. Proper design would simply not create a product that can be mishandled, and early TLS 1.3 had that property (at least with regard to cipher selection). Now it's back to "hope your config is sane" and "hope your vendor didn't screw up", which is exactly what malicious people need to hide their intent and get their decryption backdoors in.
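To make the footgun concrete: OpenSSL's cipher-string language deliberately keeps NULL suites out of "ALL", but a single extra keyword in a config template opts back in. A minimal sketch with Python's ssl module (the printed names depend on the OpenSSL build; many builds compile NULL suites out entirely):

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

    # "ALL" excludes NULL ("eNULL") suites; "ALL:eNULL" re-enables them.
    ctx.set_ciphers("ALL:eNULL")

    print([c["name"] for c in ctx.get_ciphers() if "NULL" in c["name"]])
    # e.g. ['NULL-SHA256', ...] on builds that still ship NULL suites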


The weakening in the first link is about as far from a downgrade attack as you can possibly get. And on top of that, TLS 1.3 has pretty good downgrade prevention as far as I know.

> well-intended misconfigurations like "we want to claim IoT support in marketing, so we need to enable the IoT 'ciphers' by default!", and sneaky underhanded versions of both of those accidents

Maybe... This still feels like a thing that's only going to show up on local networks and you don't need attacks for local monitoring. Removing encryption across the Internet requires very special circumstances and also lets too many people in.


Most modern processors have hardware support for AES; that's why it's fast. ChaCha is significantly faster when the cipher has to run in software on the CPU.
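Easy to check on any given box; a rough benchmark sketch, assuming the third-party pyca/cryptography package is installed (the timings flip depending on whether the CPU has AES instructions):

    import os
    import time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

    data = os.urandom(1 << 20)   # 1 MiB of plaintext
    nonce = os.urandom(12)       # nonce reuse is OK here only because we
                                 # discard the ciphertexts

    ciphers = [
        ("AES-256-GCM", AESGCM(AESGCM.generate_key(bit_length=256))),
        ("ChaCha20-Poly1305", ChaCha20Poly1305(ChaCha20Poly1305.generate_key())),
    ]
    for name, cipher in ciphers:
        start = time.perf_counter()
        for _ in range(20):
            cipher.encrypt(nonce, data, None)
        print(name, time.perf_counter() - start)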


Security standards can move extremely slowly when the security of the incumbent algorithm hasn’t been sufficiently compromised, despite better (faster, smaller) alternatives.

Tech Politics comes into it.


I mean, they are faster and as strong and are gradually replacing it?


Sponges generally? Maybe? LWC constructions not so much?


I thought the “they” being referenced were chacha/salsa.


laws can be repealed when they no longer accomplish their aims.


It has a couple of them. Flag registers, global rounding modes, relatively small page size, strong memory ordering guarantees, complicated decoding (and I'm sure there are a few more). It's more that it was good enough and managed to sufficiently mitigate most of the problems pretty well.


Ubuntu is specifically a noob friendly desktop OS. No reason for them to bother supporting slow buggy embedded CPUs.


A person might suggest that it would be more user friendly to support hardware that people own. There's a parallel to years ago, when Ubuntu was a lot like Debian but it was willing to ship non-free firmware by default because that was the hardware that people actually had, which is part of why it was praised as noob friendly.


By that argument, I can factor a 100000000 bit number on my computer in a second.


It's the opposite: scammers want the people who are gullible enough to go for "free".


This is a narrative I've heard many times, with very little evidence to back it up. An alternative and more accurate view is that, as the world came online, people became exposed to very low-effort scams, representative of criminal elements from around the world, whose child-like naivety befuddled most observers. None of those confused individuals would ever fall for one, but they needed an explanation. Someone came up with the theory that it's actually a stroke of 4D genius, and it stuck.

edit: OK, I bothered to look this up: Microsoft had a guy do a study on Nigerian scams, the guys who wrote Freakonomics did a sequel referencing that study, and they drew absurd, unfounded conclusions which have been repeated over and over. Business as usual for the fig-leaf salesmen.


Right, and the other issue is that if you remove every incorrect statement from the AI "explanation", the answer it would have given is "airplane wings generate lift because they are shaped to generate lift".


> Right, and the other issue is that if you remove every incorrect statement from the AI "explanation", the answer it would have given is "airplane wings generate lift because they are shaped to generate lift".

...only if you omit the parts where it talks about how pressure differentials, caused by airspeed differences, create lift?

Both of these points are true. You have to be motivated to ignore them.

https://www.youtube.com/watch?v=UqBmdZ-BNig


But using pressure differentials is also sort of tautological. Lift IS the integral of the pressure over the surface, so saying that pressure differentials cause lift is... true but unsatisfying. What makes the pressure difference appear is what's truly interesting.
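Spelled out (notation mine, and neglecting the viscous shear contribution), that tautology is just:

    L = -\oint_{S} p \, (\hat{n} \cdot \hat{\jmath}) \, \mathrm{d}S

where S is the airfoil surface, p the surface pressure, n-hat the outward unit normal, and j-hat the unit vector in the lift direction. The equation is exact bookkeeping; it says nothing about why p ends up distributed the way it does.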

Funnily enough, the first explanation for lift you receive as an undergraduate uses Feynman's "dry water" (the Kutta condition for inviscid fluids). In my opinion, this explanation is also unsatisfying, as it's usually presented as a mere mathematical "convenience" imposed upon the flow to make it behave like real physics.

Some recent papers [1] are shedding light on generalizing the Kutta condition to non-sharp airfoils. In my opinion, the linked paper gives a much more mathematically and intuitively satisfying answer, but of course it requires some prior knowledge and would be totally inappropriate as an answer from the AI.

Either way I feel that if the AI is a "pocket PhD" (or "pocket industry expert") it should at least give some pointers to the user on what to read next, using both classical and modern findings.

[1]: https://www.researchgate.net/publication/376503311_A_minimiz...


The Kutta condition is insufficient to describe lift in all regimes (e.g. when the trailing edge of the wing isn't that sharp), but fundamentally you do need to fall back to certain 2nd law / boundary condition rules to describe why an airfoil generates lift, as well as when it doesn't (e.g. stall).

There's nothing in the Navier-Stokes equations that forces an airfoil to generate lift - without boundary conditions the flowing air could theoretically wrap back around at the trailing edge, thus resulting in zero lift.
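For reference, once a boundary condition like Kutta's fixes the circulation, the classical Kutta-Joukowski theorem gives the 2D inviscid lift per unit span (standard result, notation mine):

    L' = \rho_{\infty} V_{\infty} \Gamma

with rho-infinity the freestream density, V-infinity the freestream speed, and Gamma the circulation; with zero circulation (the flow wrapping back around the trailing edge) it correctly predicts zero lift.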


The fact that you have to invoke integrals and the Kutta condition to make your explanation is exactly what is wrong with it.

Is it correct? Yes. Is it intuitive to someone who doesn’t have a background in calculus, physics and fluid dynamics? No.

People here are arguing about a subpoint on a subpoint that would maybe get you a deduction on a first-year physics exam, and acting as if this completely invalidates the response.


How is the Kutta condition ("the fluid gets deflected downwards because the back of the wing is sharp and pointing downwards") less intuitive to someone without a physics background than wrongly invoking the Bernoulli principle?


One is common knowledge, taught in every elementary school. The other is not.


Every elementary school teaches the Bernoulli equation?


The problem is that the "real" explanation is "solve Navier-Stokes on the wing". Everything else is just trying to build semi-reliable intuition.
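For reference, that "real" explanation means solving the incompressible Navier-Stokes equations (standard notation) with a no-slip condition u = 0 on the wing surface:

    \rho \left( \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} \right) = -\nabla p + \mu \nabla^{2} \mathbf{u}, \qquad \nabla \cdot \mathbf{u} = 0

Everything from Bernoulli to the Kutta condition is a shortcut through some limit of these equations.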


I think humans generally use some form of bucket/radix sorting (or selection sort for small collections).
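A toy sketch of that strategy in Python (my own illustration, assuming plain lowercase strings):

    def human_sort(names):
        # Bucket by first letter, the way a person deals a pile into stacks.
        buckets = {}
        for name in names:
            buckets.setdefault(name[0], []).append(name)
        # Then finish each small stack; a person might use insertion or
        # selection sort here, and sorted() just stands in for that step.
        return [name for letter in sorted(buckets)
                     for name in sorted(buckets[letter])]

    print(human_sort(["carol", "alice", "bob", "anna"]))
    # ['alice', 'anna', 'bob', 'carol']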


A human is different from “humans”. A single human with a stack may sort it into four stacks and then sort within them, yes.

But a room of five clerks all taking tasks off a pile and then sorting their own piles is merge sort at the end of the day. Literally, and figuratively.
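Sketched with the standard library (my illustration; heapq.merge plays the role of the clerks' final k-way merge):

    import heapq

    pile = ["pear", "fig", "apple", "yam", "kiwi",
            "lime", "date", "plum", "oat", "bean"]

    # Five "clerks" each take a share of the pile and sort it alone...
    shares = [sorted(pile[i::5]) for i in range(5)]

    # ...then the sorted piles are combined: the tail end of a merge sort.
    print(list(heapq.merge(*shares)))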


In the real world, Dijkstra will definitely be faster.


It’s not often that you see O(E + V log V) Dijkstra with Fibonacci heaps, either, the O((E + V) log V) version with plain binary heaps is much more popular. I don’t know if that’s because the constants for a Fibonacci heap are worse or just because the data structure is so specialized.
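The binary-heap version also needs only a few lines on top of the standard library, which surely helps. A sketch using lazy deletion in place of the decrease-key operation that Fibonacci heaps optimize (the graph encoding is my assumption):

    import heapq

    def dijkstra(graph, source):
        # graph: {node: [(neighbor, weight), ...]} with non-negative weights
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale entry; skipping it replaces decrease-key
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

    g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
    print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}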


Yes, a standard binary heap is very fast and incredibly simple to implement, mostly because you can store the entire heap in a single contiguous array and access individual elements with simple index arithmetic. It's quite hard to beat this in practice.
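Concretely, for the element at index i of that flat array (0-indexed), the neighbors are found with three one-line index computations, which is why traversal is so cheap:

    def parent(i): return (i - 1) // 2  # index of i's parent
    def left(i):   return 2 * i + 1    # index of i's left child
    def right(i):  return 2 * i + 2    # index of i's right child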

