A crude explanation might be like this - if you look at graphs of sin and cos, you'd instantly recognize their symmetries, but what if you're given the graph of a linear combination of them, and asked to decipher the coefficients?
Naively, you'd evaluate candidate combinations at every point by trial & error until they match the shape of the given graph. Or use the symmetry of sin & cos to combine them constructively and destructively (peaks and valleys) to match the given shape.
FT & QFT are "shortcuts" that help to decipher the correct combination of basis functions.
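To make the "shortcut" concrete, here's a minimal sketch (pure Python; the coefficients 3 and 4 are made up for illustration): one bin of the discrete Fourier transform recovers both coefficients of a·cos + b·sin at once, no trial & error needed.

```python
import cmath
import math

# f(t) = a*cos(2*pi*t) + b*sin(2*pi*t), sampled at N points over one period.
N = 64
a, b = 3.0, 4.0  # the coefficients we pretend not to know
samples = [a * math.cos(2 * math.pi * n / N) + b * math.sin(2 * math.pi * n / N)
           for n in range(N)]

# A single DFT bin (frequency k = 1) encodes both coefficients:
# F1 = (N/2) * (a - i*b), so the real part gives a and the imaginary part -b.
F1 = sum(s * cmath.exp(-2j * math.pi * n / N) for n, s in enumerate(samples))
cos_coef = 2 * F1.real / N    # recovers a
sin_coef = -2 * F1.imag / N   # recovers b
```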
I think it would be hard to explain the details to a math undergraduate.
The high level point is that many algorithms for which quantum speedups are possible can be reduced to the Hidden Subgroup Problem, which requires a few weeks of a group theory course to understand.
No it wouldn't? "Given f that hides a subgroup, and an oracle for f, determine the subgroup".
The Wikipedia article phrases it backwards (as do you): "quantum computers" don't solve the hidden subgroup problem; the quantum Fourier transform, which "measures" f in parallel, is what lets you solve the hidden subgroup problem efficiently. The QFT is the fundamental thing, not the HSP, and it's the building block for basically any/all useful quantum algorithms.
Very late, but I meant it would be hard to summarize the details of the blog post (e.g. QFT implementation) even to the typical math undergraduate.
It's not hard to explain the problem, as most math undergraduates will have taken some group theory course. But even "subgroup" means nothing to most computer scientists (speaking from personal experience interacting with computer scientist academics).
The Hidden Subgroup Problem (HSP) is the mathematical framework that explains why quantum computers are so good at breaking encryption. Famous quantum algorithms like Shor's (for factoring numbers) and Simon's are all just different versions of solving the same underlying pattern-finding problem. The quantum "superpower" comes from using the Quantum Fourier Transform to reveal hidden mathematical patterns that classical computers can't efficiently find.
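A toy instance of that pattern (N = 15 and a = 7 are made-up small numbers; this shows the hidden-subgroup structure behind Shor's algorithm, found here by classical brute force rather than the QFT):

```python
# f(x) = a^x mod N is constant exactly on cosets of rZ, where r is the
# (hidden) multiplicative order of a mod N. Finding r factors N.
N, a = 15, 7
f = [pow(a, x, N) for x in range(20)]
# f repeats with period r = 4: [1, 7, 4, 13, 1, 7, 4, 13, ...]

# Classically we must search for r; the QFT extracts it efficiently
# even when N is thousands of bits long.
r = next(x for x in range(1, N) if pow(a, x, N) == 1)
```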
The multi-line case can usually be fixed with simple configuration changes to a structured log format.
The other cases are more interesting, and pre-aggregation of all logs related to a correlation ID can be really helpful when debugging a specific incident, but it does seem like this proposal is the same basic trade-off around size and performance as with virtually any form of compression.
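As a sketch of the multi-line fix (a minimal hypothetical JSON formatter; field names and the correlation-ID convention are made up): emitting one JSON object per event means a multi-line payload such as a stack trace becomes a single escaped string field, so line-based shippers never split one event across records.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """One JSON object per log event; embedded newlines are escaped."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "msg": record.getMessage(),  # newlines become \n inside the string
            "correlation_id": getattr(record, "correlation_id", None),
        })

logger = logging.getLogger("demo")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)

# A multi-line message still ships as exactly one log line:
logger.error("step failed:\ntraceback line 1\ntraceback line 2",
             extra={"correlation_id": "req-42"})
```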
It’s not so bad when there’s at least sprint toggling (often an option in gameplay settings). Having to hold it down continuously can be a bit much, especially in games where you basically want to sprint 90%+ of the time.
How is logging into ssh (sshd) AS root more secure than using sudo? I honestly don’t even know how dangerous that is because I’ve always been told to never allow it. I see that here, thought goes into preventing that for a remote user, so I’m not talking about that aspect of security here.
Maybe it has to do with #3 in the sudo limitations — I certainly don’t see any benefits vis-a-vis #1.
I totally get that this is an experiment, but I suspect it is more vulnerable than using sudo, not less (the open socket proxy looks interestingly vulnerable to a man in the middle attack).
Having said all that, I did learn some tricks old tools are capable of, so kudos for showing me something new.
The sudo binary is suid root / privileged and is exposed directly to the untrusted user. If anything goes wrong inside of sudo (with the user's entire environment as the surface area), it may be exploited.
The ssh approach does not expose a suid binary. Instead it uses the ssh network layer so it is no less secure than accessing ssh over a network, which is considered pretty secure.
I would assume if you have to use SSH or sudo you've already lost. I've been working with people where we just completely lock down the VM or container. They only allow the necessary flow of traffic and are managed entirely from golden builds. If you need to make changes or fix something, it is a new VM or container.
This premise is incorrect: SSH doesn't need to be an suid binary because it's already running as root, and then SSH creates a new environment for the user, exactly like sudo does, but with all the added complexity and overhead (and surface) of privileged network access.
To be clear, I love SSH and we even run a userify instance to distribute keys, but just comparatively, the surface area of the ssh daemon alone is greater than sudo alone.
(however, even with the extra complexity, you might trust the history of portable OpenSSH more than sudo, and that's a good, but different, conversation to have also.)
But the area under control by the invoking user is data over one socket vs the whole calling environment e.g. environment vars, local files. Surely that counts for something.
We've got root passwords set on, IIRC, all of our systems. They're long, random, and can only be entered through the console on the VGA port or the IPMI console.
A big part of sudo is that you should be running individual commands using sudo to increase auditability rather than simply running sudo bash or whatever.
> If ‘sudo’ is properly configured running bash or anything that allows command execution (vim, eMacs, etc) is disallowed.
Keep in mind that this is borderline impossible to enforce unless your goal is just to stop the most common ways of accidentally breaking the policy. A list of commands that allows breaking out into a full shell includes: less, apt, man, nano, wget & many more.
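For example (hypothetical sudoers fragment; `alice` and the log path are made up, but the whitelist syntax and the `NOEXEC` tag are standard sudoers features):

```
# The whitelist looks airtight: alice may only page one log file as root...
alice ALL=(root) NOPASSWD: /usr/bin/less /var/log/syslog
# ...but inside less, typing !sh spawns a root shell, defeating the policy.
# The NOEXEC tag mitigates this for dynamically linked binaries:
alice ALL=(root) NOPASSWD: NOEXEC: /usr/bin/less /var/log/syslog
```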
This made me chuckle. Apple influencing the way Emacs is capitalized (pun intended) versus RMS's stance on Free Software couldn't be further apart I think.
You're correct there! Wrote that up on my tiny Apple device and really couldn't be bothered to correct Apple's spellcheck. Text editing from a 5in touchscreen is very painful.
> How is logging into ssh (sshd) AS root more secure than using sudo?
The article describes an additional SSH server listening on a Unix socket. The usual threat model about exposing root logins to the internet may not apply here.
The approach is comparing:
- theoretical configuration errors, or theoretical vulnerabilities that may or may not be there in sudo, with
- having a new daemon running (a new attack surface), which
  - may also have configuration errors or vulnerabilities of its own
  - and also removes a few layers of user-based authorisation in favour of a single root level.
This approach is somehow considered more secure. From any rational security perspective it can't be considered "more secure", just different.
I'm skeptical of the approach in the linked article, but:
> I honestly don’t even know how dangerous that is because I’ve always been told to never allow it.
You've fallen for the FUD. In reality, logging in directly as root over remote SSH is strictly more secure than logging in as user over remote SSH and then using `sudo`.
If user@home uses ssh to root@server, then root@server is only compromised if user@home is compromised.
If user@home uses ssh to user@server then sudo to root@server, then root@server is compromised if either user@home or user@server is compromised. In particular, it is fairly common for user@server to be running some other software such as daemons or cronjobs. Please don't give out free root escalation (and often lateral movement due to password reuse) to anyone who manages to infect through those!
(This of course does not apply if sudo is used in whitelisted-commands-only mode and does not take either passwords or credentials fully accessible from the remote host)
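If you do allow direct root SSH, the usual hardening is to forbid passwords entirely (illustrative sshd_config fragment; both directives are standard OpenSSH options):

```
# /etc/ssh/sshd_config
PermitRootLogin prohibit-password   # root may log in with keys, never a password
PasswordAuthentication no           # disable password auth for everyone
```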
I'm not sure I agree with this argument. Sure you can say theoretically it's one less account that could be compromised, but in practice I see a bunch of caveats.
1. If we allow password based logins, there will be many orders of magnitude more login attempts to root than any other user. So if you have to allow password based logins, you pretty much never want to allow root login.
2. If we disallow password-based logins, a user account would be as safe as a root login, except again that the root account is the much more valuable target, so it will get much more attention. I also don't see the relevance of cronjobs (root runs them as well), and naturally no user that has sudo privileges should be running network-exposed services.
3. In cases where admin rights have to be shared among multiple users, are you going to share the same key for all users (probably not a good idea) or give every user a separate key (making key management a bit of a nightmare; user management is much easier)?
4. As you pointed out yourself sudo gives you much more fine-grained control over commands that can be run.
> 3. In cases where admin rights have to be shared among multiple users, are you going to share the same key for all users (probably not a good idea) or give every user a separate key (making key management a bit of a nightmare; user management is much easier)?
To solve the key management nightmare, short-lived SSH certificates can be used to map an identity to a shared user account. Hashicorp Vault is one option for issuing such certificates, but there are other alternatives as well.
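A sketch of how that works (the CA file name and principal are made up; `ssh-keygen -s` certificate signing and `TrustedUserCAKeys` are standard OpenSSH features):

```
# A CA (e.g. backed by Vault) signs alice's public key for the shared
# "admin" principal, valid for one hour:
ssh-keygen -s ca_key -I alice@example.com -n admin -V +1h alice_key.pub

# On the server, sshd trusts certificates from that CA:
#   /etc/ssh/sshd_config:
#     TrustedUserCAKeys /etc/ssh/user_ca.pub
```

The certificate's key ID (alice@example.com) shows up in the auth log, so the shared account still yields per-person accountability.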
The big advantage is if setuid and setgid support can be entirely removed. There are a bunch of special cases that have been added over the years to try to deal with them, but increasing privileges of a process is fundamentally more challenging in the Unix security model than only ever lowering privileges. Of course, these days Linux has privilege escalation via user namespaces as well.
Jerk, Snap, Crackle, and Pop are the only ones I thought had agreed upon names. But my understanding is probably 20 years out of date at this point.
However, the paper says they’re not commonly taught, but jerk is taught in many high school (AP) Physics classes — we have to keep our balance by noticing the change in acceleration.
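For reference, the ladder of derivatives being discussed:

```
velocity      v = dx/dt
acceleration  a = dv/dt = d²x/dt²
jerk          j = da/dt = d³x/dt³
snap (jounce), crackle, pop = 4th, 5th, 6th derivatives of position
```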
The GBA has native support for integer division, but it does not have any hardware support for floating point arithmetic at all — compilers will insert a virtual floating point unit that does IEEE 754 in software, which takes hundreds of CPU cycles for most operations. I believe the LUT you refer to is for floats.
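The usual workaround on the GBA is fixed-point arithmetic. Here's a minimal 16.16 fixed-point sketch in Python for illustration (real GBA code would do the same shifts in C or assembly; the function names are made up):

```python
# 16.16 fixed point: represent x as round(x * 2**16) and use only integer
# ops, which the hardware executes in a few cycles instead of a software-
# float call costing hundreds.
FRAC = 16

def to_fix(x: float) -> int:
    return round(x * (1 << FRAC))

def fix_mul(a: int, b: int) -> int:
    # The raw product carries 32 fractional bits; shift back down to 16.
    return (a * b) >> FRAC

def to_float(a: int) -> float:
    return a / (1 << FRAC)

x = to_fix(1.5)
y = to_fix(2.25)
print(to_float(fix_mul(x, y)))  # 3.375
```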
I think the edge cases were entirely unclear in day 1, part 2. I had to redo it in a "dumb"/brute-force way to avoid using fancy regex tricks I don't know.
It's quite clear the small sample data was chosen intentionally to not cover them.
The problem statement was super clear though. "Find the first occurrence of any one of these strings in a longer string" doesn't require any fancy regex tricks, just a for loop and knowledge about `isPrefixOf` or `startsWith` or whatever the equivalent function is called in your language of choice.
"Find the last occurrence of any one of these strings in a longer string" is just the first problem again but with all the strings reversed.
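A sketch of that no-regex approach in Python (my own helper names; `str.startswith` with an offset is the standard-library feature doing the work). Scanning from the right for the last match means overlaps like "oneight" resolve naturally:

```python
WORDS = {w: i for i, w in enumerate(
    ["one", "two", "three", "four", "five", "six", "seven", "eight", "nine"],
    start=1)}

def first_digit(line: str) -> int:
    for i in range(len(line)):
        if line[i].isdigit():
            return int(line[i])
        for w, v in WORDS.items():
            if line.startswith(w, i):  # startswith takes a start offset
                return v
    raise ValueError("no digit found")

def last_digit(line: str) -> int:
    # Same loop, scanning start positions from the right.
    for i in reversed(range(len(line))):
        if line[i].isdigit():
            return int(line[i])
        for w, v in WORDS.items():
            if line.startswith(w, i):
                return v
    raise ValueError("no digit found")

print(first_digit("oneight"), last_digit("oneight"))  # 1 8
```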
Disagree that it's clear. If the text is "oneight", there is a legitimate philosophical question about whether the string contains two numbers. I feel like most people would say no, because the only way to answer yes is by re-using letters between different words, and that's not how language works.
My example data did include the edge case that caught me out in part two, but didn't include it in a way that broke my first pass at a solution.
The funny part is it didn't click until I submitted a wrong answer, read my code for a few minutes, added a bunch of logging, and then saw the trick. I looked back at the given example and it was right there the whole time, just not called out explicitly.
> It's quite clear the small sample data was chosen intentionally to not cover them.
That is very common in AOC, the edge cases are often not called out.
Although the results vary a lot, sometimes the edge cases matter, sometimes they only matter for some data sets (so you can have a solve for your dataset but it fails for others), and sometimes the edge cases don’t matter at all, so you might spend a while unnecessarily considering and handling them all.
The edge cases are fine tho, it’s a normal thing which happens. The not fun ones are when ordering comes into play to resolve ambiguities and is not called out, it’s quite rare but when it happens it’s infuriating because it’s very hard to debug as you don’t really have a debugging basis.
I have seen accessibility tools in Chrome lead to this kind of issue in the past with a dropdown menu -- to the point where it could be replicated with a minuscule amount of HTML. The particular bug I hit 2 years ago was in Chromium-Edge, but the symptoms and cause were very similar.
Grammarly almost certainly leans on some of the accessibility tools in Chrome. These tools are somewhat different in the various Chromium flavors (Edge, Brave, Chrome, etc.).
So the theory would be that Grammarly Desktop sees the gif (what? How?) and calls some browser accessibility function on it (or?) which Chrome can't handle, and it crashes?
With the bug I saw years ago, just having certain accessibility features of the browser enabled _at all_ caused the bug (we were able to temporarily mitigate by disabling some obscure Edge accessibility feature via a launch parameter). So, my theory here is Grammarly is just enabling an optional accessibility feature in Chrome that has this bug when trying to "read" the gif.
Can anyone ELI5?