You're reusing some components from rust-analyzer, so is there any reason you didn't go with a more query-based/demand-driven architecture, e.g. by also using salsa?
I note in the caveats that the current approach is to recompute everything whenever even one file is changed. But that’s probably not sustainable, as I admit myself in https://azdavis.net/posts/pl-idea-tooling/ :
> As an example, it would be unsustainable if, every time we changed a single function’s body, the language server had to re-typecheck the entire codebase. This might work for an initial proof-of-concept on a small codebase, but for large ones, the responsiveness of the language server would drop precipitously.
>> As an example, it would be unsustainable if, every time we changed a single function’s body, the language server had to re-typecheck the entire codebase.
If only there were a type system[1] where you could reinfer the function’s type in isolation and then recheck all uses!
(I don’t think Trevor Jim’s work on System P2 and System P received any further development, even in “better SML” work like 1ML, which seems like a shame, so I’m going to shill it wherever it appears relevant.)
Thank you! I’ve tried to make sure that my site reads well and loads fast, even on mobile.
There’s actually no JS on my site at load time; I run it all as a build step that transforms Markdown with inline LaTeX into pure HTML and CSS. https://github.com/azdavis/azdavis.net
No comments on the writing, but one thing to add: the syntax highlights for code get lost a bit. I would see them better if they were either a lighter black (i.e. dark grey) or had a different background than the rest of the text.
Do you mean the inline code or code blocks? In any case, I tweaked the background to be more different from the page background (in both light and dark mode) and also gave inline code that same background. Does that help?
https://github.com/azdavis/azdavis.net/commit/55199593abfb6e...
Nothing to add to the other comments, just wanted to add another data point that the website (on mobile) was perfectly clear and the information was all laid out and explained very well; I have no (constructive) criticism to give.
Great post! I was going to share this one on the r/functionalprogramming subreddit when I found yours. A lot easier to read, and it helps with reading the Wikiversity page afterwards. Thanks!
The definition I gave at the end of the article (the thing starting with `t ::=`) is basically a context-free grammar describing the syntax for terms of the calculus of constructions.
`t ::= ...` is saying "A CoC term, called t, can have the following forms ..." Then each case is separated by `|`.
The cases for application, abstraction, and forall are recursive, in that they contain t (or t') even though the thing being defined is itself called t. (The prime on t' is just to say that t and t' can be different terms.)
So for instance, the case for application is:
t(t')
That means if you already have two arbitrary CoC terms, called t and t', and you write down first the term t, then a `(`, then the term t', then a `)`, then you've written down another CoC term, representing the application of the term t to the term t'.
And the case for abstraction is
λ(x:t) t'
That means if you have two CoC terms t and t', and you write down `λ`, then `(`, then a variable name (x is just a meta-variable placeholder; you can name the variable whatever you want), then `:`, then the term t, then `)`, then the term t', then you've written down an abstraction term. In this case, the variable x (or whatever you called it) has type t, and can appear free in t'.
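Putting those pieces together, the whole definition has roughly this shape (this is just a recap in the same notation, not the article's exact definition; in particular I'm showing only a variable case plus the three recursive cases, and leaving out the base constants/sorts):

    t ::= x            (a variable)
        | t(t')        (application)
        | λ(x:t) t'    (abstraction)
        | ∀(x:t) t'    (forall)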
When I was learning C a couple decades ago, pointer syntax never made any sense to me until something suddenly clicked with “put it directly next to the type name and not next to the variable name”. Ever since then I have not been able to figure out why it is commonly taught next to the variable name.
I mean, aside from that it enables declarations like:
char *a, *b;
But I have long since found the tradeoff to be worth the syntactic clarity.
The best argument against this, and for leaving the * by the variable name, is this declaration:
char* a, b;
Now a has type char * but b is just char. It’s probably not what the author meant, and it’s misleading to the reader even if it was intentional. Better to write:
char b, *a;
Or, if you meant it this way:
char *a, *b;
“Well, don’t declare multiple variables on the same line,” you respond. Sure, that’s good advice too. But in mixed, undisciplined, or just old code, it’s an easy trap to fall into.
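A minimal sketch of the trap (names made up for illustration); uncommenting the last assignment is a constraint violation that most compilers will reject or at least warn about:

    #include <stdio.h>

    int main(void) {
        char c = 'x';
        char* a, b;  /* reads like "two pointers to char", but only a is a pointer */
        a = &c;      /* fine: a really is a char *                                 */
        b = c;       /* fine: b is a plain char                                    */
        /* b = &c;      error/warning: assigning a char * to a char                */
        printf("%c %c\n", *a, b);
        return 0;
    }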
Why doesn't `char* a, b;` apply the char* type to both? (That is, why didn't they design it that way?)
I assume there was some reason originally, but it's made everything a bit more confusing ever since for a lot of people. :/
Edit: Apparently it's so that declaration mirrors use. Not a good enough reason IMO. But plenty of languages have warts and bad choices that get carried forward. I'm a Perl dev, so I speak from experience (even if I think it's not nearly as bad as most people make out).
In the olden days of C, pointers were not considered types on their own (you cannot have just a pointer, a pointer must point ‘to’ something, grammatically speaking). The type of the object of that declaration is a char. So it’s not really read as ‘I’m declaring a char-pointer called a’, it’s more along the lines of ‘I’m declaring an unnamed char, which will be accessed when one dereferences a’. Hence the * for dereferencing.
> Ever since then I have not been able to figure out why it is commonly taught next to the variable name.
C does this very cute (read: horrifyingly unintuitive) thing where the type reads how it's used. So "char *a" is written so because "*a" is a "char", i.e. pointer declarations mimic dereferencing, and similarly, function pointer declarations mimic function application, and array declarations mimic indexing into the array.
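A small sketch of what "declaration mirrors use" looks like in practice (the names here are made up for illustration):

    #include <stdio.h>

    static int zero(void) { return 0; }

    int main(void) {
        int n = 7;
        int *p = &n;             /* declared so that *p is an int     */
        int a[4] = {1, 2, 3, 4}; /* declared so that a[i] is an int   */
        int (*f)(void) = zero;   /* declared so that (*f)() is an int */

        /* ...and the uses look just like the declarations: */
        printf("%d %d %d\n", *p, a[2], (*f)());
        return 0;
    }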
Some people (me included) find it clearer to write the type, followed by the variable name. So just as you’d write
int a
, you’d write
char* a
.
The fly in this ointment is that C type syntax doesn’t want to work that way. It’s designed to make the definition of the variable look like the use of the variable. A clever idea, but one unlike nearly every other language, which BTW is why I think you should really use typedefs for any type that’s at all complicated in C.
For example, the type-then-variable style falls down if you need to declare an array
int foo[4]
or a pointer to a function returning a pointer to an int
int *(*a)(void)
(...right?).
So I’m perfectly willing to do it the “C way”; I just find it more readable to do it the other way unless it just won’t work (and then I prefer to use typedefs to make it work anyway).
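For instance (a hedged sketch; the typedef name and the functions are made up), the "pointer to a function returning a pointer to an int" case reads type-then-name again once it's behind a typedef:

    #include <stdio.h>

    static int value = 42;
    static int *get_value(void) { return &value; }

    /* int_ptr_fn: pointer to a function (taking no args) returning a pointer to int */
    typedef int *(*int_ptr_fn)(void);

    int main(void) {
        int_ptr_fn a = get_value;  /* now it's type-then-name, just like `int a` */
        printf("%d\n", *a());
        return 0;
    }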
In my own projects, I like to put the * on its own, like so:
int const * const i;
This is nice because you can naturally read the signature from right to left. "i" is a constant pointer to a constant integer. It's a little unconventional, but I think it's a really clear way to convey the types.
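A quick sketch of how the right-to-left reading distinguishes the different placements (variable names made up):

    int main(void) {
        int n = 1, m = 2;

        int const *p = &n;        /* p: pointer to constant int -- p can be repointed, *p cannot be assigned */
        int *const q = &n;        /* q: constant pointer to int -- *q can be assigned, q cannot be repointed */
        int const *const r = &n;  /* r: constant pointer to constant int -- neither is allowed               */

        p = &m;  /* ok */
        *q = 3;  /* ok */
        (void)r; /* silence "unused variable" warnings */
        return 0;
    }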
I find it clearer to have the * with the type. Otherwise the reader (though not the compiler) can mistake it for dereferencing the variable a or b.
In C, the line `int *a;` declares a variable, `a`, that, when dereferenced using the `*` operator, will yield an int.
In C++, the same line declares a variable, `a`, of type `pointer-to-int`.
C cuddles the asterisk up to the variable name to reflect use. C++ cuddles it up to the type because it's a part of the type. Opinions differ on whether the C style or the C++ style is better, but a lot of cargo-cult programmers don't bother adjusting the style of the code snippets they paste out of Stack Exchange, so you see a lot of mixtures.
Is it equivalent? Because that seems to be the only difference between the two editions for the newprint function, and the author says one of them happens to compile.
The difference is the function prototype `void newprint(char *, int);` at the start, which is missing in the second example. With the forward declaration, the compiler knows what arguments newprint takes and errors out if you pass something else. C is compiled top to bottom, so in the older version of the example the compiler has no way of knowing how many arguments the function takes at the point where it is called. In (not so) old versions of C, that call would implicitly declare a function taking whatever you passed it.
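A hedged sketch of why the prototype matters (newprint's body here is made up; only the prototype line is from the article):

    #include <stdio.h>

    /* With this forward declaration, the compiler knows the argument types at the
       call site and rejects e.g. newprint("hi") as having too few arguments.
       Without it, old C would implicitly declare the function at the call site
       (as returning int, with unchecked arguments) and accept whatever you passed. */
    void newprint(char *, int);

    int main(void) {
        newprint("hi", 3);
        return 0;
    }

    void newprint(char *s, int n) {
        for (int i = 0; i < n; i++) {
            printf("%s\n", s);
        }
    }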
The title seems innocent enough, but if I understand this correctly, this purports to be a proof of P != NP.
I'm not familiar with a lot of the stuff talked about in the paper, but it's still exciting to see another attempt at the P ?= NP problem. Of course, purported proofs (for both P = NP and P != NP) have been published before[0].
Another similar book is Harper's Practical Foundations for Programming Languages[1]. I had the privilege of taking a course[2] about this kind of stuff with Prof. Harper at CMU.
> For whatever reason I've never heard wakaremasen (can't understand) but only don't understand (wakarimasen).
One of my teachers explained to me that "wakaru" (to understand) already has the implicit meaning of "able to" in the word itself, so you should never use the potential form. Searches[1][2] for "分かる 可能形" ("wakaru, potential form") seem to agree.
Do Stripe products make affordances for people who are color-blind? For instance, do "error" notifications always contain an icon for "error" as well as a red background/text color? The "Differentiate without color" option on macOS and iOS comes to mind.
Yes, it's part of our internal guidelines for color usage in product interfaces that color should never be the only visual means of conveying information. There's a lot of different ways that can be applied in different contexts, but it's something we discuss fairly often as a team when reviewing designs.
> The "Differentiate without color" option on macOS and iOS comes to mind.
As far as I can tell, that historically has affected precisely one thing on macOS, and that's the "Available/Busy/Away" indicator in Messages (which is now gone, so I guess it does nothing now…)