
It doesn't have to be one or the other :). OCaml and Haskell are two great languages that the Rust designers were familiar with.

The big thing IMO that makes Rust feel like Haskell is the pervasive use of traits and `#[derive(...)]` which is directly analogous to the pervasive use of typeclasses and `deriving` in Haskell.
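
For instance, the Haskell side of that analogy (a minimal sketch; the Rust analogue would be roughly a struct with `#[derive(Debug, PartialEq, PartialOrd)]`):

    -- Haskell: the compiler writes the typeclass instances for you.
    data Point = Point { px :: Int, py :: Int }
      deriving (Show, Eq, Ord)

    -- show (Point 1 2), Point 1 2 == Point 1 2, and comparisons all come
    -- from the derived instances, much like Rust's #[derive(...)].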


For context, Matt Might is formerly a professor whose research was in programming languages/program analysis. Professors like to code too!


I’ve really missed his blog. I read his posts over and over during grad school.

His story of pivoting from PL research to medical research (and being a star in the field) is quite motivating.


Thank you for the kind words!

I missed blogging too, so I set a New Year’s resolution in January to write at least two blog posts this year. It had been over seven years since my last!

I still hadn’t written one by November, but now I’ve got three for the year.

I’m optimistic I can keep it up now.


I've asked my team at Amazon [usual disclaimers] to read this, and highlighted my favorite takeaways: "Well-timed sleep was probably the most important thing I did", and "Using sleep to do the heavy lifting on algorithm design". I love this concept, and need to incorporate more of it myself.


I have been reading Matt's blog for quite a long time too. I read his post about his son's condition, but never really noticed whether he actually switched his academic career for his son. I'm pasting the really motivating video sharing the story [1] and a NY Times article here [2].

[1] https://www.facebook.com/freethinksuperhuman/videos/15306506...

[2] https://archive.is/l7pCA


I stopped reading his blog when he wrote about how to get tenure and completely omitted talking about all of his wife's stay-at-home work.


Can you explain why? What relevance does his wife's stay-at-home work have to someone else trying to get tenure? Do you think he doesn't appreciate his wife and that rubs you the wrong way?


I think OP is really overstating things. "the definitive underlying field of math replacing set theory" is something Lawvere was trying in the 60s and it never really took off.

Category theory isn't really a replacement for set theory any more than order theory is. What matters is not what your "foundational system" is: just as there are many concepts we learn in the context of set theory that are very generally useful (equivalence classes, cardinality, injective/surjective functions), there are similarly useful concepts in category theory, like adjoint functors, limits, and universal mapping properties.


I think the concept you are looking for is a "torsor" (https://en.wikipedia.org/wiki/Principal_homogeneous_space).

Basically,

1. You have a zero time delta, and you can add and subtract time deltas, satisfying some natural equations (time deltas form a group).

2. You can add time deltas to a datetime to get a new datetime, and this satisfies some natural equations relating to adding time deltas to each other (time deltas act on datetimes).

3. You can subtract two datetimes to get a time delta satisfying some more natural equations (the action is free and transitive).
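
For concreteness, a minimal Haskell sketch of those three pieces (the TimeDelta/DateTime newtypes here are just stand-ins, not a real datetime library):

    -- Hypothetical types: seconds for deltas, seconds-since-epoch for datetimes.
    newtype TimeDelta = TimeDelta Integer
    newtype DateTime  = DateTime  Integer

    -- (1) Time deltas form a group: zero, addition, negation.
    zeroDelta :: TimeDelta
    zeroDelta = TimeDelta 0

    addDelta :: TimeDelta -> TimeDelta -> TimeDelta
    addDelta (TimeDelta a) (TimeDelta b) = TimeDelta (a + b)

    negateDelta :: TimeDelta -> TimeDelta
    negateDelta (TimeDelta a) = TimeDelta (negate a)

    -- (2) The group acts on datetimes.
    shift :: TimeDelta -> DateTime -> DateTime
    shift (TimeDelta d) (DateTime t) = DateTime (t + d)

    -- (3) The action is free and transitive: any two datetimes differ by
    -- exactly one delta, so subtracting datetimes is well defined.
    diff :: DateTime -> DateTime -> TimeDelta
    diff (DateTime a) (DateTime b) = TimeDelta (a - b)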


The term I was looking for was affine structure, as I commented to someone else. But from your link, which I can't understand entirely, I get the sense that a torsor is an even bigger generalization.


> The term I was looking for was affine structure, as I commented to someone else. But from your link, which I can't understand entirely, I get the sense that a torsor is an even bigger generalization.

An affine space is a torsor under a vector space, and you can have instead a torsor under any group. This loses a bit of structure, in the sense that you can take convex combinations in an affine space but not in an arbitrary torsor; but otherwise it is a proper generalisation. But the convex combination $(a + b)/2$ used to obtain a midpoint is exactly what we want here!


Indeed, torsors have exactly the properties you describe, but notably not the ability to find the midpoint between two points (that would involve extracting square roots in a group, which is not guaranteed possible, or uniquely defined when possible).


And in fact finding the midpoint is not possible half the time in the space we're interested in (https://news.ycombinator.com/edit?id=33497270). So what is the algebraic structure that underlies the binary-search algorithm, since evidently it isn't really the torsor of a group?


> So what is the algebraic structure that underlies the binary-search algorithm, since evidently it isn't really the torsor of a group?

Though it pains me to say so as an algebraist, I think that it probably just isn't a problem most usefully modelled with a more abstract algebraic structure. Although it would be easy to cook up a structure permitting "division with rounding" … maybe a Euclidean domain (https://en.wikipedia.org/wiki/Euclidean_domain) is something like the right structure?


I'm not an algebraist! So maybe what I'm about to say is dumb:

As I understand it, the correctness of any correct algorithm is conditioned on some requirements about the data types it's operating on and the operations applicable to it. Once you formalize that set of requirements, you can apply the algorithm (correctly) to any data type whose operations fulfill them.

But some sets of data and some operations on them that fulfill some formally stated requirements are just an abstract algebra, aren't they? Like, if you have two data types {F, V} and operations on them {+, ×, ·, +⃗} that fulfill the vector-space axioms, they're a vector space, so any vector-space algorithm will work on them. So every algorithm (or, more accurately, every theorem about an algorithm) defines some algebraic structure (or several of them), which may or may not be a well-known one.

For binary search you have two sets: the set of indices I, which is what we've been talking about, and the set E of elements that might occur in the array a you're searching through. You need a total order on E, and I think you need a total order on I, and the array a needs to be sorted such that aᵢ ≤ aⱼ if i < j (though not conversely, since it might contain duplicates). You need a midpoint operation mid(i, j) to compute a new index from any two existing indices such that i ≤ mid(i, j) < j if i < j. And "mid" needs a sort of well-foundedness property on I to guarantee that the recursion eventually terminates; for any subset of ℤ you can define a "mid" that fulfills this, but for example in ℚ or ℝ you cannot, at least not with the usual ordering.
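
For concreteness, here's a rough Haskell sketch of that interface (names invented; I've added one operation beyond the list above, a successor on indices, which seems to be needed to step past a probed position):

    class Ord i => SearchIndex i where
      mid  :: i -> i -> i   -- law: i <= mid i j < j  whenever i < j
      next :: i -> i        -- successor, to step past a probed index

    instance SearchIndex Int where
      mid lo hi = lo + (hi - lo) `div` 2
      next      = (+ 1)

    -- Least index in [lo, hi) whose element is >= target (or hi if none),
    -- assuming the indexing function `at` is monotone: the sortedness
    -- precondition. Termination is the well-foundedness requirement; it
    -- holds for Int but would fail for a dense index type.
    lowerBound :: (SearchIndex i, Ord e) => (i -> e) -> e -> i -> i -> i
    lowerBound at target = go
      where
        go lo hi
          | lo >= hi      = hi
          | at m < target = go (next m) hi
          | otherwise     = go lo m
          where m = mid lo hi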

I don't think a Euclidean domain is quite the right thing for I because there's no obvious total order, and I think you need a total order to state the sortedness precondition on the array contents. Also, lots of Euclidean domains (like ℚ or ℝ) are dense and therefore could lead you to nonterminating recursion. And it isn't obvious to me that "mid" requires so much structure from I.

If you care about complexity you also have to define the cost of operations like ≤, mid, and array indexing, so your algorithm is no longer defined on just an abstract algebra, but an abstract algebra augmented with a cost model. But for correctness and termination that isn't required.


> But some sets of data and some operations on them that fulfill some formally stated requirements are just an abstract algebra, aren't they?

Not quite. A variety of algebras (which is usually what people have in mind when they talk about "algebraic structures" in general) is a collection of operations with equational laws, meaning that they're of the form "for all x_0, x_1, ... (however many variables you need), expression_1(x_0, x_1, ...) = expression_2(x_0, x_1, ...)", where the expressions are built up out of your operators.

Fields are the classic example of a structure studied by algebraists which is nonetheless not a variety of algebras: division isn't described by equational laws, because it's defined for everything except zero. This makes fields much harder to work with than e.g. groups or rings, in both math and programming.
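
To make the contrast concrete:

    equational (groups, a variety):  ∀x,y,z.  x·(y·z) = (x·y)·z,   e·x = x,   x⁻¹·x = e
    not equational (fields):         ∀x.  x ≠ 0  ⇒  x·x⁻¹ = 1

The field inverse law carries a side condition, so it isn't of the required "expression_1 = expression_2" shape.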


Interesting, I never realized that fields weren't algebras!


> Interesting, I never realized that fields weren't algebras!

One has to be quite careful with the terminology when crossing disciplines, as here. Fields are, rather boringly, algebras over themselves, in what I dare to call the ‘usual’ sense of an algebra over a field (https://en.wikipedia.org/wiki/Algebra_over_a_field). Rather what hither_shores is saying is that the collection of fields does not form a variety of algebras, in the sense of universal algebra https://en.wikipedia.org/wiki/Variety_(universal_algebra) (which is itself quite different from the algebraic-geometry sense https://en.wikipedia.org/wiki/Algebraic_variety). See, for example, https://math.stackexchange.com/questions/3756136/why-is-the-... .


Ouch! Thanks for clarifying (at least, I hope it will turn out to be clarifying after I read some more...)


> I don't think a Euclidean domain is quite the right thing for I because there's no obvious total order, and I think you need a total order to state the sortedness precondition on the array contents. Also, lots of Euclidean domains (like ℚ or ℝ) are dense and therefore could lead you to nonterminating recursion. And it isn't obvious to me that "mid" requires so much structure from I.

I think you may be confusing integral domains (https://en.wikipedia.org/wiki/Integral_domain) with Euclidean domains (https://en.wikipedia.org/wiki/Euclidean_domain). Although any field is a Euclidean domain, the structure is not very useful (what I am used to calling the norm, but Wikipedia calls the Euclidean function, may be taken to be identically 0, and the remainder from every division may also be taken to be 0). But anyway, I was just free associating (no pun (https://en.wikipedia.org/wiki/Associative_property) intended).

I certainly agree that one can create a structure that encodes the algebra of binary search, and, at a casual glance, your definition looks good to me. I meant only that such a structure seems unlikely to do much more than to encode binary search (unlike, say, affine spaces, which are useful in a very broad variety of settings for which they have not been purpose-built) … although of course any mathematical structure, howsoever abstract or specialised, will find other uses if enough people get interested in it.


well, it'd be nice if it turned out that it was a consequence of some other well-known structure that was more general than just 'subsets of the integers'. but maybe it isn't.


I have to plug my favorite "explanation" of the Y combinator, which is called "Lawvere's fixed point theorem".

In a cartesian closed category, if there is a function p : A -> (A -> B) that has a left-inverse, i.e., e : (A -> B) -> A with p o e = id_(A -> B) then B has a fixed point combinator Y : (B -> B) -> B, i.e., every function from B to itself has a fixed point: Y(f) = f(Y(f)).

The proof is like the Y combinator in untyped lambda calculus, but with extra ps and es. This is because one way to understand untyped lambda calculus is that you are working with a single type D that satisfies D = (D -> D) (so p,e are identities).
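
In Haskell you can render the construction directly: a recursive newtype plays the role of A, with the constructor as e and the projection as p, so p o e = id holds by definition.

    -- Lawvere-style Y: Mu b is a type A equipped with p = unMu : A -> (A -> b),
    -- split by e = Mu, since unMu (Mu g) = g.
    newtype Mu b = Mu { unMu :: Mu b -> b }

    y :: (b -> b) -> b
    y f = g (Mu g)
      where g x = f (unMu x x)

    -- e.g.  y (\fact n -> if n <= 1 then 1 else n * fact (n - 1)) 5  ==  120
    -- (laziness is what keeps this from looping before f gets to run)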

I like this because it makes the connection between the Y combinator and diagonalization arguments like Cantor's theorem explicit.

For instance, Cantor's theorem says that the powerset of a set always has higher cardinality. We can reformulate this as saying that there is no onto function p : X -> (X -> 2) because a subset of X is the same as a function from X to booleans (2 = {true, false}). An onto function has a left inverse (using axiom of choice) and so if we have an onto function X -> (X -> 2) then every function (2 -> 2) has a fixed point, but this isn't true! If we took Y(not) we would construct a boolean b : 2 such that b = not(b), which is impossible so we have a contradiction. I found this helpful in understanding why the "not" appears in just the right place in the classical proof.


> if there is a function p : A -> (A -> B) that has a left-inverse, i.e., e : (A -> B) -> A with p o e = id_(A -> B) then B has a fixed point combinator

I think you meant to write "right inverse" here.

Agreed that Lawvere's theorem is very cool. It helps that Cartesian closed categories are basically just categories where you can do typed lambda calculus, so the underlying link is already very close.


"Just Wrong" is a bit harsh. Monads are there to simulate side effects within the language, as opposed to the side effects that are built in to the language.

WARNING: the below contains math. Ignore if math makes you mad.

If "side effects" means the availability of certain operations such as mutating state, throwing errors etc, then monads certainly do have something to do with side effects in that these operations can often be described by an algebraic theory and algebraic theories have a direct correspondence to certain kinds of monads (the monad of free algebras of the theory).


No, they aren't. This is like saying "Iterators are in the language to walk over hash tables." No, they aren't. It is one of the things they can do, but it isn't what they are there for. What they are there for is to present a sequence of values. They aren't there to walk over trees, or walk over hash tables, or indeed, walk over data structures at all. Even if you think that last one, you don't understand them, because there are iterators that don't involve data structures at all.

You'll be stuck with a wrong understanding of them as long as you think that's what they are. That is one of the things they can be used for, but that's all. It isn't what they are. There are plenty of "monads" that are not being used for "side effects" and are perfectly pure no matter how you look at them, such as the list instance of monad. Rather than arguing with me about how if you bend the list a bit and sort of redefine "side effect" and if you squint a bit, the list instance really is related to "side effects", just don't look at it that way.
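
The list instance is a good test case: it's nothing but pure nondeterministic choice.

    -- The list Monad: each bind explores every combination, purely.
    pairs :: [(Int, Char)]
    pairs = do
      n <- [1, 2]
      c <- ['a', 'b']
      pure (n, c)
    -- pairs == [(1,'a'),(1,'b'),(2,'a'),(2,'b')]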


Not sure about unloading the dishwasher but unpacking was pretty fun: https://www.unpackinggame.com/


Unpacking was fun not because of the mundane chore but b/c the chore was telling a story through the environment of the house and the items unpacked.

What a beautiful game


Right. When you move in to the boyfriend's place and there's not really enough space for what you have, and so your art stuff must be "tidied away" out of sight I was like "Oh - oh no..." and then IIRC the next move is back to your childhood bedroom. No need for words.

[Although if you do want some words, those words are "Fuck Off Fund"]


I was watching the promo video thinking this was a very wholesome game, until I saw the electric toaster sitting in the bath :)


Sure, but it's worth emphasizing that the Go language designers gave these built-in collections the exorbitant privilege of being parameterized by a type, while declaring that any new type created by a mere programmer has to be monomorphic.


> the exorbitant privilege of being parameterized by a type

This is how built in collection types work in pretty much all statically typed languages. For example, here is an array of ints in C:

    int foo[] = { 1, 2, 3 };
We wouldn't usually say that C has 'generic arrays'.

> any new type created by a mere programmer has to be monomorphic.

Collections in (pre-Generics) Go are monomorphic. []foo and []bar are as different as foo and bar. That is why, for example, the first argument of sort.Slice has type 'any' (https://cs.opensource.google/go/go/+/go1.18.2:src/sort/slice...)


> The mushroom is an ideal food...Low in fats, sugars and calories...

It's funny to see "low calorie" as a positive for an ideal food. From the perspective of getting people enough food to live, things like grains, corn/maize and potatoes are "ideal" foods because they can cheaply get us a significant portion of our needed calories.


Ha! I came here to say the same thing. How the world has changed from 99% of our existence when humans were basically hungry all the time and calorie-density was an asset not a liability!


This is not really the case though. Humans in the past, even hunter-foragers (and therefore humans for most of human history), typically had an abundance of food and didn't have to work hard for it. We just destroyed all the food-rich ecosystems that used to sustain us before agriculture became widespread.


That's making the huge assumption that high calorie density is the only and most important thing (it's not, not even close).


Wouldn't that make countries like the USA and Canada extremely far apart because they are both large countries? The distance you would get is the distance from Hawaii to somewhere on the northeastern edge of Nunavut/Newfoundland/Labrador.


Not quite, you'd get the maximum distance from the extreme end of one territory to the nearest point of the other.


> Not quite, you'd get the maximum distance from the extreme end of one territory to the nearest point of the other.

But which country do you pick the extreme edge of, and which country do you pick the nearest point of? For example, if measuring US<->Canada, do you pick the extreme end of Canada or the extreme end of the US?


You pick whatever gives you the largest distance.

For every point in both territories, find the closest point in the other territory. From these pairs of points, find the pair with the largest distance. That distance is your Hausdorff distance.
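
A small Haskell sketch of that computation for finite point sets (straight-line Euclidean distance as a stand-in; real geography would want great-circle distances):

    type Point = (Double, Double)

    dist :: Point -> Point -> Double
    dist (x1, y1) (x2, y2) = sqrt ((x1 - x2) ^ 2 + (y1 - y2) ^ 2)

    -- Distance from a point to the nearest point of the other set.
    nearest :: [Point] -> Point -> Double
    nearest other p = minimum (map (dist p) other)

    -- Hausdorff distance: the worst such nearest-point distance, both ways.
    hausdorff :: [Point] -> [Point] -> Double
    hausdorff as bs = max (maximum (map (nearest bs) as))
                          (maximum (map (nearest as) bs))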


Oh I see, so still pretty large since both countries are large, but if one of the bordering countries were small the distance would be small.


No, you would get the distance from the border to the other end of the large country, since that is longer.

