Also the (provocatively titled) "Let's Remove Quaternions from every 3d Engine" [1]
Spoiler alert: rotors are mechanically identical to quaternions, while being easier to understand. If you understand rotors, you understand quaternions. You can fit the laws you need to understand rotors on a business card.
Plus, rotors generalize to higher and lower dimensions (well, in 2D there's only one plane, with its two orientations, but still).
Treating complex numbers as planes (bivectors, in GA parlance) has been the most mind-opening mathematical concept I've been exposed to in the last decade. The associated geometric product has helped me better understand concepts (like "handedness") that troubled me during undergrad engineering.
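To make "mechanically identical" concrete, here's a toy sketch of my own (not from the linked article): a complete 3D geometric product in a few lines of Python, plus a rotor built from two unit vectors and applied with the sandwich product. The rotor's four components (one scalar, three bivectors) multiply exactly like a quaternion's w, x, y, z, up to sign conventions.

```python
import math

# Multivectors are dicts mapping a basis-blade bitmask to a coefficient
# (bit i set means e_{i+1} is present, so 0b011 is the bivector e1e2).

def blade_sign(a, b):
    """Sign from reordering the blade product a*b into canonical order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps & 1 else 1

def gp(x, y):
    """Geometric product with Euclidean metric (every e_i squares to +1)."""
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            blade = ba ^ bb  # shared basis vectors square to +1 and drop out
            out[blade] = out.get(blade, 0.0) + blade_sign(ba, bb) * ca * cb
    return out

def reverse(x):
    """Reversion ~x: sign (-1)^(k(k-1)/2) on each grade-k blade."""
    return {b: c * (-1) ** (bin(b).count("1") * (bin(b).count("1") - 1) // 2)
            for b, c in x.items()}

# Rotate e1 by 90 degrees in the e1e2 plane. A rotor R = b*a built from unit
# vectors rotates by TWICE the angle between them, so put b at 45 degrees.
e1, s = {0b001: 1.0}, 1 / math.sqrt(2)
a, b = e1, {0b001: s, 0b010: s}
R = gp(b, a)                           # cos(45) - sin(45) e1e2: a quaternion in disguise
v = gp(gp(R, e1), reverse(R))          # sandwich product R v ~R
print({k: round(c, 6) for k, c in v.items() if abs(c) > 1e-9})  # {2: 1.0}, i.e. e2
```

Note that gp and reverse work unchanged in any number of dimensions; only the bitmasks get wider.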
I had never even heard of rotors! Thanks for this. I watched that video. The video doesn't really explain how it extends to higher dimensions tho, that I could discern.
I wonder how/if any of this can be applied to LLMs' 'Semantic Space'. As you might know, Vector Databases are used a lot (especially with RAG - Retrieval Augmented Generation), mainly for Cosine Similarity, but there is a 'directionality' in Semantic Space, and so in some sense we can treat this space as if it were real geometry. I know a TON of research is done in this space, especially around what they call 'Mechanistic Interpretability' of LLMs.
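For anyone unfamiliar, cosine similarity is just the cosine of the angle between two embedding vectors, ignoring their magnitudes. A minimal sketch (the three-dimensional "embeddings" here are made up for illustration; real models produce hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(u, v):
    """cos(theta) between u and v: dot product divided by the product of norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy vectors standing in for embeddings of three words.
king, queen, banana = [0.9, 0.8, 0.1], [0.85, 0.82, 0.15], [0.1, 0.2, 0.95]
print(cosine_similarity(king, queen))   # ~1.0: nearly the same direction
print(cosine_similarity(king, banana))  # much lower: nearly orthogonal meanings
```

The 'directionality' point is exactly why vector databases compare direction (cosine) rather than raw Euclidean distance.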
> The video doesn't really explain how it extends to higher dimensions tho, that I could discern.
The neat thing is that it "extends" automatically. The math is exactly the same. You literally just apply the same fundamental rules with an additional basis vector and it all just works.
MacDonald's book [1] proves this more formally. Another neat thing is there are two ways to prove it. The first is the geometric two-reflections-is-a-rotation trick given in the linked article. The second is straightforward algebraic manipulation of terms via properties of the geometric product. It's in the book and I can try to regurgitate it here if there's interest; I personally found this formulation easier to follow.
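For reference, here is my paraphrase of the two-reflections argument in symbols (sign conventions vary between texts). Reflecting a vector $v$ across the hyperplane with unit normal $a$ is $v \mapsto -ava$, so reflecting in $a$ and then in $b$ gives

\[
v' \;=\; -b\,(-a v a)\,b \;=\; (ba)\,v\,(ab) \;=\; R\,v\,\widetilde{R},
\qquad R = ba, \quad \widetilde{R} = ab .
\]

If the angle from $a$ to $b$ is $\theta/2$ and $B$ is the unit bivector of the plane they span, then $R = \cos(\theta/2) - B\sin(\theta/2)$, and the sandwich product rotates $v$ by $\theta$ in that plane. Nothing in the derivation depends on the dimension of the ambient space, which is why it "extends automatically".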
If you really want your mind blown, look into the GA formulation of Maxwell's equations and the associated extension to the spacetime (4d) algebra, which actually makes them simpler. That's derived in MacDonald's book on "Geometric Calculus" [2]. There are all kinds of other cool ideas in that book, like a GA formulation of the fundamental theorem of calculus from which you can derive a lot of the "lesser" theorems like Green's theorem.
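For anyone who wants the punchline without the book: as I understand it, in the spacetime algebra you package the electric and magnetic fields into a single bivector field $F$, and with units chosen to hide the physical constants, all four of Maxwell's equations collapse into

\[
\nabla F = J ,
\]

where $J$ is the four-current. The vector part of $\nabla F$ recovers the two source equations (Gauss and Ampère-Maxwell) and the trivector part recovers the two homogeneous ones (Faraday and no magnetic monopoles).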
Take all of this with a grain of salt. I'm merely an enthusiast and fan, not an expert. And GA unfortunately has (from what I can tell) some standardization and nomenclature issues (e.g. disagreement over the true "dot product" among various similar but technically distinct formulations).
> I wonder how/if any of this can be applied to LLMs' 'Semantic Space'.
Yeah, an interesting point. Geometric and linear algebra are two sides of the same coin; there's a reason why MacDonald's first book is called _Linear and_ Geometric Algebra. In that sense, Geometric Algebra is another way of looking at common Linear Algebra concepts where algebraic operations often have a sensible geometric meaning.
Interesting ideas there, thanks. I do know about that Maxwell derivation that involves Minkowski space, Lorentz transform consistency, etc., although I haven't internalized it well enough to conjure it up from memory. I don't really think in equations, I think in visualizations, so I know a lot more than I can prove with math. You're right, it's mind-blowing stuff for people like us who are interested in it.
Where you drive and how long you stay there at night, correlated with your partner's connected car.
It's not a camera in the bedroom, but you can pretty easily extract relationship graphs from geolocation tracking and proximity. US intelligence agencies have been doing it in the Middle East for ages...
Yes, technically. But the broader point is true. Go is a game with well-defined win and loss conditions that can be automatically evaluated.
This is critical for game-clock-eons of unsupervised self-play, which by most accounts is how AlphaGo (and other systems like AlphaZero) made the leap to superhuman levels of play.
But it is entirely different from subjective endeavors like writing, music, and art. How do you score one automatically generated composition vs another? Where is the loss function?
Stipulating up front that this is a question for a lead scientist at OpenAI: I could see a scoring function looking at essays in the New York Times vs. the National Enquirer and finding a way to generalize from there. Similarly for the top 40 hit songs vs <everything else>.
I'm not? The history of AI development is littered with examples of false starts, hidden traps, and promising breakthroughs that eventually expose deeper and more difficult problems [1].
I wouldn't be shocked if it could eventually get it right, but dead sure?
that's an obviously good question to ask, and i'm not sure what the answer is. the original ccn work by van jacobson et al. doesn't really attempt any security. one obvious thing to try is for a router to rate-limit its forwarding of the interest packets coming in on any one port, especially if it has a lot of active interest packets from that port already (in ndn systems the router has to remember where interest packets came from so that it can forward any answering data packets out the right port, so this doesn't require maintaining any extra data.) but i don't know if existing ndn work tackles this problem or what approaches have been found to work
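to make the rate-limiting idea concrete, here's a toy token-bucket sketch in python (my own illustration, not from any ndn codebase; names are made up):

```python
import time

class InterestLimiter:
    """per-port token bucket: forward an interest only if the port has budget."""
    def __init__(self, rate_per_s=100.0, burst=50.0):
        self.rate, self.burst = rate_per_s, burst
        self.buckets = {}  # port -> (tokens remaining, time of last refill)

    def allow(self, port):
        tokens, last = self.buckets.get(port, (self.burst, time.monotonic()))
        now = time.monotonic()
        tokens = min(self.burst, tokens + (now - last) * self.rate)  # refill
        if tokens < 1.0:
            self.buckets[port] = (tokens, now)
            return False            # over budget: drop or queue the interest
        self.buckets[port] = (tokens - 1.0, now)
        return True
```

you could additionally scale rate_per_s down for ports that already have lots of pending interests in the table, since the router is tracking that state anyway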
Sure; but distributed mesh networks feel like another area where Sybil Attacks [1] can rear their ugly heads. This is a fundamentally hard problem to solve in all distributed systems without a coordinating authority.
The blockchain approach basically bootstraps said authority, and comes with tons of additional baggage. It's the only one I'm aware of that has real countermeasures to Sybil attacks, though (in a sense; 51% attacks can also look a lot like a Sybil attack with the right glasses on).
i agree about the blockchain, though now possibly there are multiple viable approaches under that rubric now that ethereum has abandoned pow
there are a lot of possible approaches to the problem, though
hashcash for announcing new destinations is one possible measure
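for reference, hashcash is just a partial-preimage puzzle: the announcer burns cpu finding a nonce such that the hash of announcement+nonce has enough leading zero bits, and every router can verify it with a single hash. a sketch (my own, parameters invented):

```python
import hashlib
from itertools import count

def mint(announcement: bytes, bits: int = 16) -> int:
    """find a nonce where sha256(announcement + nonce) has `bits` leading zero bits."""
    target = 1 << (256 - bits)          # 16 bits ~ 65k hashes; raise to taste
    for nonce in count():
        digest = hashlib.sha256(announcement + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def check(announcement: bytes, nonce: int, bits: int = 16) -> bool:
    digest = hashlib.sha256(announcement + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

nonce = mint(b"destination:3f2a")        # costly, once, for the announcer
print(check(b"destination:3f2a", nonce)) # cheap for every router: True
```

this makes sybiling up millions of destinations expensive without burdening honest nodes that announce a handful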
reserving most of the bandwidth for established connections to which both parties are indicating their continuing enthusiastic consent is another, and one which the circuit-switched pstn does implicitly, which is why your phone calls wouldn't drop or skip when there was a commercial break in prime-time tv. reticulum seems to do this by reserving 97% of bandwidth for non-announcement traffic, though i might be misunderstanding
the agorics 'digital silk road' paper from 01995 (unrelated to the later darknet market) describes another approach: include a source route and postage on every packet, with each network provider deducting however much postage it feels like deducting, and keeping track of bilateral account balances with each peer network (postage sent minus postage received). if the postage runs out, your packet gets lost, and the sender can choose to try resending it over the same route with more postage or routing it over a less expensive network. periodic cash settlement between network service providers zeroes out any persistent imbalance in cash sent and received: http://www.cap-lore.com/Agorics/Library/dsr.html. i don't think this has ever been implemented, though current nsp peering agreements do resemble it to some degree. the original paper suggested using eft for the periodic settlements, but nowadays you could very reasonably do it with something like ether
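the core of the scheme fits in a few lines; a sketch of the forwarding rule as i understand the paper (data shapes and names are mine):

```python
# (payer, payee) -> postage owed; zeroed out by periodic cash settlement
balances = {}

def forward(packet, price_of):
    """source-routed packet; each hop deducts its own price from the postage."""
    for payer, payee in zip(packet["route"], packet["route"][1:]):
        fee = price_of[payee]           # each provider deducts whatever it likes
        if packet["postage"] < fee:
            return None                 # postage exhausted: packet is lost; the
                                        # sender can retry with more postage or
                                        # pick a cheaper route
        packet["postage"] -= fee
        balances[(payer, payee)] = balances.get((payer, payee), 0) + fee
    return packet

pkt = {"route": ["alice", "nsp1", "nsp2", "bob"], "postage": 10, "payload": b"hi"}
print(forward(pkt, {"alice": 0, "nsp1": 3, "nsp2": 4, "bob": 0}))  # 3 postage left
```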
you can supplement the established-connection-prioritizing mechanism with a granovetter-diagram introducer mechanism: if alice has an established connection to bob, and bob has an established connection to carol, bob can dedicate some of his bandwidth on those two connections to passing messages between alice and carol. he can do this either completely at the application layer, with no support from lower layers of the network, like an irc server or turn server, or potentially with support from the network layer to find a more efficient and reliable route. (reticulum seems to support this; bob can pass alice's destination to carol or vice versa, which the manual seems to say eliminates the need for the announce mechanism, though i don't yet understand how the protocol works.) this allows bob to prioritize such connection requests using information that isn't available at the network layer, such as whether alice and/or carol have passed a captcha and/or are paying him, who they were introduced to him by, who they've introduced him to in the past, and what valuable data they've sent him. if bob repeatedly introduces alice to people she doesn't like talking to, she might cut off contact with bob, or at least look on his future introductions with a jaundiced eye and not allocate them much bandwidth
fwiw, those seem to apply to only a single destination, and any node can sybil up as many destinations as it wants, right? `announce_cap` seems more relevant
is there a place where you've written down the threat model reticulum is intended to defend against? it's hard for me to evaluate its security measures without that context
I'm not sure there is a formal threat model yet (I'm not a maintainer), but there has been discussion on this topic. You can check out the GitHub discussions page (https://github.com/markqvist/Reticulum/discussions), and there is also an Element channel at #reticulum:matrix.org
The threat model would be highly dependent on the carrier used. For example, if you're using LoRa an adversary would be using far different methods of disruption when compared to a traditional overlay network.
You pay the court clerk [1]. You either pay in person, or possibly online depending on the jurisdiction. If you aren't paying in person, you'll need to contact them directly anyway, and their contact info should be easy to verify. It's not like they're going to be sitting next to the detained party on the phone with a bitcoin deposit address.
Generative AI. Anything that can create detailed content out of a broad / short prompt. This currently means diffusion for images, large language models for text. That may change as multi-modality and other developments play out in this space.
This capability is clearly different from the examples you list.
Just because there may be no precise engineering definition does not mean that we cannot arrive at a suitable legal/political definition. The ability to create new content out of whole cloth is quite separate from filters, cropping, and generic "pre-AI" image post-processing. Ditto for spellcheck and word processors for text.
How do you expect to regulate this and prove generative models were used? What stops a company from purchasing art from a third party where they receive a photo from a prompt, where that company isn't US based?
> How do you expect to regulate this and prove generative models were used?
Disseminating or creating copies of content derived from generative models without attribution would open that actor up to some form of liability. There's no need for onerous regulation here.
The burden of proof should probably lie upon whatever party would initiate legal action. I am not a lawyer, and won't speculate further on how that looks. The broad existing (and severely flawed!) example of copyright legislation seems instructive.
All I'll opine is that the main goal here isn't really to prevent Jonny Internet from firing up llama to create a reddit bot. It's to incentivize large commercial and political interests to disclose their usage of generative AI. Similar to current copyright law, the fear of legal action should be sufficient to keep these parties compliant if the law is crafted properly.
> What stops a company from purchasing art from a third party where they receive a photo from a prompt, where that company isn't US based?
Not really sure why the origin of the company(s) in question is relevant here. If they distribute generative content without attribution, they should be liable. Same as if said "third party" gave them copyright-violating content.
EDIT: I'll take this as an opportunity to say that the devil is in the details and some really crappy legislation could arise here. But I'm not convinced by the "It's not possible!" and "Where's the line!?" objections. This clearly is doable, and we have similar legal frameworks in place already. My only additional note is that I'd much prefer we focus on problems and questions like this, instead of the legislative capture path we are currently barrelling down.
> It's to incentivize large commercial and political interests to disclose their usage of generative AI.
You would be okay allowing small businesses exception from this regulation but not large businesses? Fine. As a large business I'll have a mini subsidiary operate the models and exempt myself from the regulation.
I still fail to see what benefit this holds. Why do you care if something is generative? We already have laws against libel and against false advertising.
> You would be okay allowing small businesses exception from this regulation but not large businesses?
That's not what I said. Small businesses are not exempt from copyright laws either. They typically don't need to dedicate the same resources to compliance as large entities though, and this feels fair to me.
> I still fail to see what the benefit this holds is.
I have found recent arguments by Harari (and others) that generative AI is particularly problematic for discourse and democracy to be persuasive [1][2]. Generative content has the potential, long-term, to be as disruptive as the printing press. Step changes in technological capabilities require high levels of scrutiny, and often new legislative regimes.
EDIT: It is no coincidence that I see parallels in the current debate over generative AI in education, for similar reasons. These tools are ok to use, but their use must be disclosed so the work done can be understood in context. I desire the ability to filter the content I consume on "generated by AI". The value of that, to me, is self-evident.
> They typically don't need to dedicate the same resources to compliance as large entities though, and this feels fair to me.
They typically don't actually dedicate the same resources because they don't have much money or operate at sufficient scale for anybody to care about so nobody bothers to sue them, but that's not the same thing at all. We regularly see small entities getting harassed under these kinds of laws, e.g. when youtube-dl gets a DMCA takedown even though the repository contains no infringing code and has substantial non-infringing uses.
> They typically don't actually dedicate the same resources because they don't have much money or operate at sufficient scale for anybody to care about so nobody bothers to sue them
Yes, but there are also powerful provisions like section 230 [1] that protect smaller operations. I will concede that copyright legislation has severe flaws. Affirmative defenses and other protections for the little guy would be a necessary component of any new regime.
> when youtube-dl gets a DMCA takedown even though the repository contains no infringing code and has substantial non-infringing uses.
Look, I have used and like youtube-dl too. But it is clear to me that it operates in a gray area of copyright law. Secondary liability is a thing. Per the EFF's excellent discussion of some of these issues [2]:
> In the Aimster case, the court suggested that the Betamax defense may require an evaluation of the proportion of infringing to noninfringing uses, contrary to language in the Supreme Court's Sony ruling.
I do not think it is clear how youtube-dl fares on such a test. I am not a lawyer, but the issue to me does not seem as clear cut as you are presenting.
> Yes, but there are also powerful provisions like section 230 [1] that protect smaller operations.
This isn't because of the organization size, and doesn't apply to copyright, which is handled by the DMCA.
> But it is clear to me that it operates in a gray area of copyright law.
Which is the problem. It should be unambiguously legal.
Otherwise the little guy can be harassed, and the harasser can say "maybe" to extend the harassment, or just get them shut down even when it is legal, if the recipient of the notice isn't willing to take the risk.
> > In the Aimster case, the court suggested that the Betamax defense may require an evaluation of the proportion of infringing to noninfringing uses, contrary to language in the Supreme Court's Sony ruling.
Notably this was a circuit court case and not a Supreme Court case, and:
> The discussion of proportionality in the Aimster opinion is arguably not binding on any subsequent court, as the outcome in that case was determined by Aimster's failure to introduce any evidence of noninfringing uses for its technology.
But the DMCA takedown process wouldn't be the correct tool to use even if youtube-dl was unquestionably illegal -- because it still isn't an infringing work. It's the same reason the DMCA process isn't supposed to be used for material which is allegedly libelous. But the DMCA's process is so open to abuse that it gets used for things like that regardless and acts as a de facto prior restraint, and is also used against any number of things that aren't even questionably illegal. Like the legitimate website of a competitor which the claimant wants taken down because they are the bad actor, and which then gets taken down because the process rewards expeditiously processing takedowns while fraudulent ones generally go unpunished.
> This isn't because of the organization size, and doesn't apply to copyright, which is handled by the DMCA.
Ok, I'll rephrase: the clarity of its mechanisms and protections benefits small and large organizations alike.
My understanding is that it no longer applies to copyright because the DMCA and specifically OCILLA [1] supersede it. I admit I am not an expert here.
> Which is the problem. It should be unambiguously legal.
I have conflicting opinions on this point. I will say that I am not sure if I disagree or agree, for whatever that is worth.
> But the DMCA takedown process wouldn't be the correct tool to use even if youtube-dl was unquestionably illegal
This is totally fair. I also am not a fan of the DMCA and takedown processes, and think those should be held as a negative model for any future legislation.
I'd prefer for anything new to have clear guidelines and strong protections like Section 230 of the CDA (immunity from liability within clear boundaries) than like the OCILLA.
> I desire the ability to filter the content I consume on "generated by AI". The value of that, to me, is self-evident.
You should vote with your wallet and only patronize businesses that self disclose. You don't need to create regulation to achieve this.
With regards to the articles, they are entirely speculative, and I disagree wholly with them, primarily because their premise is that humans are not rational and discerning actors. The only way AI generates chaos in these instances is by generating so much noise as to make online discussions worthless. People will migrate to closed communities of personal or near-personal acquaintances (web-of-trust-like) or to meatspace.
Here are some paragraphs I found especially egregious:
> In recent years the qAnon cult has coalesced around anonymous online messages, known as “q drops”. Followers collected, revered and interpreted these q drops as a sacred text. While to the best of our knowledge all previous q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.
Dumb people will dumb. People with different values will different. I see no reason that AI poses increased risk to cult followers of Q. If someone isn't going to take the time to validate their sources, the source doesn't much matter.
> On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually ai. The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an ai bot, while the ai could hone its messages so precisely that it stands a good chance of influencing us.
In these instances, does it matter that the discussion is being held with an AI? Half the use of discussion is to refine one's own viewpoints by having to articulate one's position and think through the cause and effect of proposals.
> The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the ai chatbot. If ai can influence people to risk their jobs for it, what else could it induce them to do?
Intimacy isn't necessarily the driver for this. It very well could have been Lemoine's desire to be first to market that motivated the claim, or a simple misinterpreted signal, à la LK-99.
> Even without creating “fake intimacy”, the new ai tools would have an immense influence on our opinions and worldviews. People may come to use a single ai adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?
Akin to the concerns of scribes during the times of the printing press. The market will more efficiently reallocate these workers. Or better yet, people may still choose to search to validate the output of a statistical model. Seems likely to me.
> We can still regulate the new ai tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, ai can make exponentially more powerful ai. The first crucial step is to demand rigorous safety checks before powerful ai tools are released into the public domain.
Now we get to the point: please regulate me harder. What's to stop a more powerful AI from corrupting the minds of the legislative body through intimacy or other nonsense? Once it is sentient, it's too late, right? So we need to prohibit people from multiplying matrices without government approval right now. This is just a pathetic hit piece to sway public opinion toward erecting barriers to entry that protect companies like OpenAI.
Markets are free. Let people consume what they want so long as there isn't an involuntary externality, and conversing with anons on the web does not guarantee that you're speaking with a human. Both of us could be bots. It doesn't matter. Either our opinions will be refined internally, we will make points to influence the other, or we will take up some bytes in Dang's database with no other impact.
> You should vote with your wallet and only patronize businesses that self disclose. You don't need to create regulation to achieve this.
This is a fantasy. It seems very likely to me that, sans regulation, the market utopia you describe will never appear.
I am not entirely convinced by the arguments in the linked opinions either. However, I do agree with the main thrust that (1) machines that are indistinguishable from humans are a novel and serious issue, and (2) without some kind of consumer protections or guardrails things will go horribly wrong.
> This is a fantasy. It seems very likely to me that, sans regulation, the market utopia you describe will never appear.
I strongly disagree. I heard the same arguments about how Google needs regulation because nobody could possibly compete. A few years later we have DDG, Brave Search, Searx, etc.
This is a ridiculous proposal, and obviously not doable. Such a law can't be written in a way that complies with First Amendment protections and the vagueness doctrine.
It's a silly thing to want anyway. What matters is whether the content is legal or not; the tool used is irrelevant. Centuries ago some authoritarians raised similar concerns over printing presses.
> Such a law can't be written in a way that complies with First Amendment protections and the vagueness doctrine.
I disagree. What is vague about "generative content must be disclosed"?
What are the first amendment issues? Attribution clearly can be required for some forms of speech, it's why every political ad on TV carries an attribution blurb.
> It's a silly thing to want anyway. What matters is whether the content is legal or not; the tool used is irrelevant.
Again, I disagree. The line between tools and actors will only blur further in the future without action.
> Centuries ago some authoritarians raised similar concerns over printing presses.
I'm pretty clearly not advocating for a "smash the presses" approach here.
> And copyright is an entirely separate issue.
It is related, and a model worth considering as it arose out of the last technical breakthrough in this area (the printing press, mass copying of the written word).
Your disagreement is meaningless because it's not grounded in any real understanding of US Constitutional law and you clearly haven't thought things through. What is generative AI? Please provide a strict legal definition which complies with the vagueness doctrine. Is an if/then statement with a random number generator generative AI? How about the ELIZA AI psychology program from 1964? And you'll also have to explain how your proposal squares with centuries of Supreme Court decisions on compelled speech.
> What are the first amendment issues? Attribution clearly can be required for some forms of speech, it's why every political ad on TV carries an attribution blurb.
I'm not sure this is the best comparison. The government can regulate the speech of government employees. Presumably it can do the same for candidates campaigning for a government role.
> The burden of proof should probably lie upon whatever party would initiate legal action. I am not a lawyer, and won't speculate further on how that looks.
You're proposing a law. How does it work?
Who even initiates the proceeding? For copyright this is generally the owner of the copyrighted work alleged to be infringed. For AI-generated works that isn't any specific party, so it would presumably be the government.
But how is the government, or anyone, supposed to prove this? The reason you want it to be labeled is for the cases where you can't tell. If you could tell you wouldn't need it to be labeled, and anyone who wants to avoid labeling it could do so only in the cases where it's hard to prove, which are the only cases where it would be of any value.
> Who even initiates the proceeding? For copyright this is generally the owner of the copyrighted work alleged to be infringed. For AI-generated works that isn't any specific party, so it would presumably be the government.
This is the most obvious problem, yes. Consumer protection agencies seem like the natural candidates. I have already admitted I am not a lawyer, but this really does not seem like an intractable problem to me.
> The reason you want it to be labeled is for the cases where you can't tell.
This is actually _not_ the most important use case, to me. This functionality seems most useful in the near future when we will be inundated with generative content. In that future, the ability to filter actual human content from the sea of AI blather, or to have specific spaces that are human-only, seems quite valuable.
> But how is the government, or anyone, supposed to prove this?
Consumer protection agencies have broad investigative powers. If corporations or organizations are spamming out generative content without attribution it doesn't seem particularly difficult to detect, prove, and sanction that.
This kind of regulatory regime that falls more heavily on large (and financially resourceful) actors seems far preferable to the "register and thoroughly test advanced models" (aka regulatory capture) approach that is currently being rolled out.
> This functionality seems most useful in the near future when we will be inundated with generative content. In that future, the ability to filter actual human content from the sea of AI blather, or to have specific spaces that are human-only, seems quite valuable.
But then why do you need any new laws at all? We already have laws against false advertising and breach of contract. If you want to declare that a space is exclusively human-generated content, what stops you from doing this under the existing laws?
> Consumer protection agencies have broad investigative powers. If corporations or organizations are spamming out generative content without attribution it doesn't seem particularly difficult to detect, prove, and sanction that.
Companies already do this with human foreign workers in countries with cheap labor. The domestic company would show an invoice from a foreign contractor that may even employ some number of human workers, even if the bulk of the content is machine-generated. In order to prove it you would need some way of distinguishing machine-generated content, which if you had it would make the law irrelevant.
> This kind of regulatory regime that falls more heavily on large (and financially resourceful) actors seems far preferable to the "register and thoroughly test advanced models" (aka regulatory capture) approach that is currently being rolled out.
Doing nothing can be better than doing either of two things that are both worse than nothing.
> But then why do you need any new laws at all? We already have laws against false advertising and breach of contract.
My preference would be for generative content to be disclosed as such. I am aware of no law that does this.
Why did we pass the FFDCA for disclosures of what's in our food? Because the natural path that competition would lead us down would require no such disclosure, so false advertising laws would provide no protection. We (politically) decided it was in the public interest for such things to be known.
It seems inevitable to me that without some sort of affirmative disclosure, generative AI will follow the same path. It'll just get mixed into everything we consume online, with no way for us to avoid it.
> Companies already do this with human foreign workers in countries with cheap labor. The domestic company would show an invoice from a foreign contractor that may even employ some number of human workers, even if the bulk of the content is machine-generated.
You are saying here that some companies would break the law and attempt various reputation-laundering schemes to circumvent it. That does seem likely; I am not as convinced as you that it would work well.
> Doing nothing can be better than doing either of two things that are both worse than nothing.
Agreed. However, I am not optimistic that doing nothing will be considered acceptable by the general public, especially once the effects of generative AI are felt in force.
> My preference would be for generative content to be disclosed as such. I am aware of no law that does this.
What you asked for was a space without generative content. If you had a space where generative content is labeled but not restricted in any way (e.g. there are no tools to hide it) then it wouldn't be that. If the space itself does wish to restrict generative content then why can't you have that right now?
> Why did we pass the FFDCA for disclosures of what's in our food?
Because we know how to test it to see if the disclosures are accurate but those tests aren't cost effective for most consumers, so the label provides useful information and can be meaningfully enforced.
> It seems inevitable to me that without some sort affirmative disclosure, generative AI will follow the same path. It'll just get mixed into everything we consume online, with no way for us to avoid that.
This will happen regardless of disclosure unless it's prohibited, and even then people will just lie about it because there is an incentive to do so and it's hard to detect.
> You are saying here that some companies would break the law and attempt various reputation-laundering schemes to circumvent it. That does seem likely; I am not as convinced as you that it would work well.
It will be a technical battle between companies that don't want it on their service and try to detect it against spammers who want to spam. The effectiveness of a law would be directly related to what it would take for the government to prove that someone is violating it, but what are they going to use to do that at scale which the service itself can't?
> I am not optimistic that doing nothing will be considered acceptable by the general public, especially once the effects of generative AI are felt in force.
So you're proposing something which is useless but mostly harmless to satisfy demand for Something Must Be Done. That's fine, but I still wouldn't expect it to be very effective.
"Someone else will figure that out" isn't a valid response when the question is whether or not something is any good, because to know if it's any good you need to know what it actually does. Retreating into "nothing is ever perfect" is just an excuse for doing something worse instead of something better because no one can be bothered, and is how we get so many terrible laws.
You have so profoundly misinterpreted my comment that I question whether you actually read it.
One of the best descriptions I've seen on HN is this.
Too many technical people think of the law as executable code and if you can find a gap in it, then you can get away with things on a technicality. That's not how the law works (spirit vs letter).
In truth, lots of things in the world aren't perfectly defined and the law deals with them just fine. One such example is the reasonable person standard.
> As a legal fiction,[3] the "reasonable person" is not an average person or a typical person, leading to great difficulties in applying the concept in some criminal cases, especially in regard to the partial defence of provocation.[7] The standard also holds that each person owes a duty to behave as a reasonable person would under the same or similar circumstances.[8][9] While the specific circumstances of each case will require varying kinds of conduct and degrees of care, the reasonable person standard undergoes no variation itself.[10][11] The "reasonable person" construct can be found applied in many areas of the law. The standard performs a crucial role in determining negligence in both criminal law—that is, criminal negligence—and tort law.
> The standard is also used in contract law,[12] to determine contractual intent, or (when there is a duty of care) whether there has been a breach of the standard of care. The intent of a party can be determined by examining the understanding of a reasonable person, after consideration is given to all relevant circumstances of the case including the negotiations, any practices the parties have established between themselves, usages and any subsequent conduct of the parties.[13]
> The standard does not exist independently of other circumstances within a case that could affect an individual's judgement.
Pay close attention to this piece:
> or (when there is a duty of care) whether there has been a breach of the standard of care.
One could argue that because standard of care cannot ever be perfectly defined it cannot be regulated via law. One would be wrong, just as one would be wrong attempting to make that argument for why AI shouldn't be regulated.
> you have so profoundly misinterpreted my comment that I call into question whether you actually read it or not.
You are expressing a position which is both common and disingenuous.
> Too many technical people think of the law as executable code and if you can find a gap in it, then you can get away with things on a technicality. That's not how the law works (spirit vs letter).
The government passes a law that applies a different rule to cars than trucks and then someone has to decide if the Chevrolet El Camino is a car or a truck. The inevitability of these distinctions is a weak excuse for being unable to answer basic questions about what you're proposing. The law is going to classify the vehicle as one thing or the other and if someone asks you the question you should be able to answer it just as a judge would be expected to answer it.
Which is a necessary incident to evaluating what a law does. If it's a car and vehicles classified as trucks have to pay a higher registration fee because they do more damage to the road, you have a way to skirt the intent of the law. If it's a truck and vehicles classified as trucks have to meet a more lax emissions standard, or having a medium-sized vehicle classified as a truck allows a manufacturer to sell more large trucks while keeping their average fuel economy below the regulatory threshold, you have a way to skirt the intent of the law.
Obviously this matters if you're trying to evaluate whether the law will be effective -- if there is an obvious means to skirt the intent of the law, it won't be. And so saying that the judge will figure it out is a fraud, because in actual fact the judge will have to do one thing or the other and what the judge does will determine whether the law is effective for a given purpose.
You can have all the "reasonable person" standards you want, but if you cannot answer what a "reasonable person" would do in a specific scenario under the law you propose, you are presumed to be punting because you know there is no "reasonable" answer.
Toll roads charge vehicles based upon the number of axles they have.
In other words, you made my point for me. The law is much better at this than you are; courts and legislatures have literally been doing it for hundreds of years. It's not the impossible task you imagine it to be.
> You can have all the "reasonable person" standards you want, but if you cannot answer what a "reasonable person" would do in a specific scenario under the law you propose, you are presumed to be punting because you know there is no "reasonable" answer.
uhhh......
To quote:
> The reasonable person standard is by no means democratic in its scope; it is, contrary to popular conception, intentionally distinct from that of the "average person," who is not necessarily guaranteed to always be reasonable.
You should read up on this idea a bit before posting further; you've made assumptions that are not true.
> Toll roads charge vehicles based upon the number of axles they have.
So now you've proposed an entirely different kind of law because considering what happens in the application of the original one revealed an issue. Maybe doing this is actually beneficial.
> The law is much better than you at doing this, they've literally been doing it for hundreds of years. It's not the impossible task you imagine it to be.
Judges are not empowered to replace vehicle registration fees or CAFE standards with toll roads even if the original rules are problematic or fail to achieve their intended purpose. You have to go back to the legislature for that, who would have been better to choose differently to begin with, which is only possible if you think through the implications of what you're proposing, which is my point.
> Will potatoes grown on Mars have a "marsy" taste to them?
That will probably depend on whether there is a taste difference between earthy and "marsy" potatoes. Some experimental trials [1] have indicated that growing potatoes on Mars will be difficult, but possible.
Google tells me geosmin is the chemical typically associated with earthy odor and taste [2]. Martian regolith is apparently quite salty, so it's entirely possible that a "marsy" taste may become associated with some similarly common chemical product of Martian agriculture.
Using grep, it looks like they mostly use unsafe code to interface with existing C libraries. In those cases it's roughly as tricky as writing C in the first place IMO, so still probably a net win.
> the kind of app where you want to carefully audit the code line by line
This should be much less necessary for safe Rust. The existing C sudo needs libpcre2 and OpenSSL, neither of which is small, among other dependencies.
Some of these dependencies do use unsafe Rust in places, and so those places are worth inspecting carefully (and not only for sudo) - but many do not; humantime, for example, is entirely safe Rust. Is it possible it has a logic error of some sort? Yes. Is it likely it somehow introduces a security hole? Not really. A C equivalent could easily introduce a critical buffer overflow, use-after-free or similar, but that's not possible in safe Rust.
sudo doesn't strictly need OpenSSL. That dependency is part of its log-server client implementation, and it's also available for the plugin system.
I had no idea sudo even had the need for plugins.
Which raises the question: maybe there's a need for two different sudo implementations. One that provides the simplest possible implementation of the feature, and another that provides the fancy log-server and plugin integrations.
For something like this, I think I would actually prefer that they copied existing code for hashing. It's simple and stable enough to avoid taking a dependency.
It's harder to package if you're using Cargo. Using the sha2 crate is one line. Copying the code into your project is a ton more work.
Ease of auditing is debatable. Using shared popular libraries gives the benefit of lots of people using them.
Plus, actual code audits are very rare and of dubious value. They're mostly useful for finding out how well written the code is rather than for finding bugs. For that you basically want fuzzing.
1. https://marctenbosch.com/quaternions/