The federal government "exercise[s] exclusive Legislation in all Cases whatsoever, over such District (not exceeding ten Miles square) as may, by Cession of particular States, and the Acceptance of Congress, become the Seat of the Government of the United States."
I'm not trying to say anything. I said and meant exactly what I said. No more, no less. Your logic is obviously flawed. There is nothing preventing that optimization in the presence of a forged pointer in bar().
Either there is no provenance, forging is allowed, and the optimization is disallowed; or there is provenance, and forging the pointer and then attempting to inspect (or modify) the value of *ptr in bar() is UB.
You never converted ptr to an integer. If you did, that is, if the pointer escapes, then yes, I claim the allocation can't be optimized away. Why is that so bad?
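For what it's worth, here is a rough sketch of that trade-off in Rust terms (the discussion reads like it's about C, but Rust's exposed-provenance model draws the same line; foo/bar just mirror the names above):

    // Case 1: the pointer never escapes to an integer, so the
    // compiler is free to optimize the allocation away entirely.
    fn foo() {
        let x = Box::new(42);
        let _ = *x + 1; // no pointer-to-integer cast anywhere
    }

    // Case 2: the address escapes via a pointer-to-integer cast.
    fn bar(addr: usize) -> i32 {
        let p = addr as *const i32; // int-to-pointer cast picks up the
                                    // exposed provenance, if any
        unsafe { *p }               // fine if `addr` came from an exposed
                                    // pointer; UB if it was merely guessed
    }

    fn main() {
        foo();
        let y = Box::new(7);
        let addr = &*y as *const i32 as usize; // exposes y's allocation
        println!("{}", bar(addr)); // so y's allocation must be kept alive
    }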
Streaming is not viable across the vast majority of the country's area. Just because it's available to the vast majority of the population doesn't mean that the minority who live in rural areas don't count.
I didn't down-vote you, but perhaps you mean tech which holds nearly all client state on the server, like JSF or WebForms, and I think that may not be so clear to some :)
For front-end frameworks, not storing the state in the URL usually means storing it in memory or sessionStorage; the server is usually not involved.
I think NFC is the agreed-upon normalization form, is it not? The only real exception I can think of is HFS+, which normalized filenames to a variant of NFD, but that was corrected in APFS (which no longer imposes its own normalization).
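As a small illustration of why the form matters, here is a sketch assuming the third-party unicode-normalization crate: the two strings render identically but compare unequal byte-wise until brought to a common form.

    use unicode_normalization::UnicodeNormalization;

    fn main() {
        let composed = "\u{00e9}";     // "é" as one precomposed char (NFC)
        let decomposed = "e\u{0301}";  // "e" + combining acute accent (NFD)

        assert_ne!(composed, decomposed); // raw byte comparison differs
        assert_eq!(
            composed.nfc().collect::<String>(),
            decomposed.nfc().collect::<String>(), // equal after NFC
        );
        println!("equal after NFC normalization");
    }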
It's slow because the borrow checker is NP-complete. LLVM may or may not generate slower code than GCC would for rustc, but I doubt that's anywhere close to the primary cause of the lack of snappiness.
You're wrong; it's been debunked that the borrow checker is any appreciable part of the compile time. Steve Klabnik actually verified it on here somewhere.
I say “usually” because of course sometimes bugs happen and of course you can conduct degenerate stress tests. But outside of those edge cases, it’s not an issue. If it were, blog posts that talk about lowering compile times would be discussing avoiding the borrow checker to get better times, but they never do. It’s always other things.
Is there any tool for Rust that does profiling that detects what part of compilation time is caused by what? Like, a tool that reports:
- Parsing: x ms
- Type checking: y ms
- LLVM IR generation: z ms
And have there been any statistics done on that across open-source projects, like mean, median, percentiles and so on?
I am asking because what is costly in compile time presumably depends a lot on the specific project, making it more difficult to analyse. And I am also curious about how many projects are covered by "edge cases": is it 1%, 0.1%, 0.01%, and so on?
The post my original comment is on discusses doing this at length.
> And have there been any statistics done on that across open-source projects, like mean, median, percentiles and so on?
I am not aware of any. But in all the posts on this topic over the years, codegen always ends up being half the time. It’s why cargo check is built the way it is, and why it’s always faster than a full build. If non-codegen factors were significant with any regularity, you’d be seeing reports of check being super slow compared to build.
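In case it's useful for the question upthread, there is tooling for exactly this breakdown. A sketch of the usual invocations (--timings is stable Cargo; -Zself-profile and measureme's summarize tool are nightly territory; the file name below is a placeholder):

    # Per-crate timing report, rendered as an HTML chart:
    cargo build --timings

    # Per-pass timings inside rustc itself (type checking, borrow
    # checking, LLVM passes, ...); writes a .mm_profdata file:
    cargo +nightly rustc -- -Zself-profile

    # Inspect it with `summarize` from the measureme repository:
    summarize summarize <crate_name>-<pid>.mm_profdata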
I have actually seen a few posts here and there about 'cargo check' being slow. I have also heard complaints about rust-analyzer being slow, though rust-analyzer may be doing more than just 'cargo check'.
>Triage notes (AFAIUI): #132625 is merged, but the compile time is not fully clawed back as #132625 is a compromise between (full) soundness and performance in favor of a full revert (full revert would bring back more soundness problems AFAICT)
Update: they fixed it, but judging by the comments, another, possibly related, issue has surfaced.
I do not know if the borrow checker and Rust's type system have been formalized. There are stacked borrows and tree borrows, and other languages experimenting with features similar to borrow checking. But without formal algorithms like Algorithm W or J for Hindley-Milner, and without formalizations of the system and of problems like typability, I am not sure how one can prove the complexity class of the borrow-checking problem, or of a specific algorithm, the way https://link.springer.com/chapter/10.1007/3-540-52590-4_50 does for ML.
I am also not sure how to figure out whether the complexity class of some form of borrow checking has anything to do with the exponential compile times some practical Rust projects hit after upgrading their compiler version, for instance https://github.com/rust-lang/rust/issues/75992 .
It would be good if there were a formal description of at least one borrow-checking algorithm, as well as of the borrow-checking "problem", and maybe also an analysis of the problem's complexity class.
There isn't a formal definition of how the borrow-checking algorithm works, but if anyone is interested, [0] is a fairly detailed, if not mathematically rigorous, description of how the current non-lexical lifetime algorithm works.
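As a small, sketched illustration of what "non-lexical" means in practice (not taken from [0]): under NLL a borrow ends at its last use rather than at the end of its lexical scope, so this compiles, while the old AST-based lexical checker rejected it:

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0];    // shared borrow of `v` begins...
        println!("{first}");  // ...and ends here, at its last use
        v.push(4);            // OK under NLL; the old lexical checker
                              // considered `v` still borrowed here
    }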
The upcoming Polonius borrow-checking algorithm was prototyped in Datalog, which is a logic programming language. So the source code of the prototype [1] effectively is a formal definition. However, I don't think the version that is in the compiler now exactly matches this early prototype.
EDIT: to be clear, there is a Polonius implementation in the Rust compiler, but you need to pass the '-Zpolonius=next' flag on a nightly compiler to access it.
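For example (a sketch; nightly only, with the flag passed through to rustc):

    RUSTFLAGS="-Zpolonius=next" cargo +nightly check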
Interesting, especially the change to sets of loans. Datalog, related to Prolog, is not a language family I have a lot of experience with, only a smidgen. They use some kind of solving as I recall, and are great at certain types of problems and at explorative programming. Analyzing their performance is not always easy, but they are also often used for problems that are already exponential.
>I recommend watching the video @nerditation linked. I believe Amanda mentioned somewhere that Polonius is 5000x slower than the existing borrow-checker; IIRC the plan isn't to use Polonius instead of NLL, but rather use NLL and kick off Polonius for certain failure cases.
If I had to guess, that slowdown might be temporary, as Polonius gets optimized over time, since otherwise there would be two solvers in the Rust compiler. It would be in line with some other languages if the worst-case complexity class is something exponential.
> IIRC the plan isn't to use Polonius instead of NLL, but rather use NLL and kick off Polonius for certain failure cases.
Indeed. Based on the last comment on the tracking issue [0], it looks like they have not figured out whether they will be able to optimize Polonius enough before stabilization, or if they will try non-lexical lifetimes first.
You use "conspiratorial" as though Enrico is the one being conspiratorial. That's obviously not the case as far as what's publicly known. With whom is he conspiring? And against whom?
Let me rephrase, since saying they lose their safe harbor was a poor choice of words. The safe harbor does indeed prevent them from being treated as the publisher of the illegal content. However, illegal content can incur liability for acts other than publishing or distributing, and Section 230's safe harbor won't protect them from that.