
Good reference! Turns out this internet draft has been turned into an official, informational RFC:

https://datatracker.ietf.org/doc/html/rfc9413

I don't think Postel's Law was necessarily a bad idea, in its time and context. However it seems to have fallen into usage as an argument-ender. This RFC is a useful antidote to that.


> Yes, there were a few "dumb" terminals that were simply paperless teletypes

Ah yes, the proverbial "glass TTY." There were a few years in the late 1970s where printing terminals (actual Teletypes such as the ASR 33, and matrix printing terminals like the DECwriter) were fading out and "dumb" CRT terminals came in. When I arrived at university in 1980 the computer center was filled with dumb terminals like the ADM-3. There were a few smart terminals (they could run Emacs!) but they were hard to come by.

Within a couple years, though, the dumb terminals were all gone and were replaced by "smart" CRT terminals that had the more sophisticated features. So the lifetime of the glass TTY might have been only 5-10 years.


Right, the point of being able to save a game was to pick up later where you had left off. However, the game attempted to prevent you from restarting from a saved game if you made a mistake or got killed or something.

I think the scheme was fairly clever but it wasn't terribly sophisticated; nothing like public-key crypto for instance. It did have a number of countermeasures that attempted to prevent cheating, such as restarting from a modified save file. For example, I believe it hashed all the bytes of the file, plus a bunch of metadata -- possibly including the uid and the file's i-number, as you suggest -- and also things like the file modification time and maybe even the i-node change time. Then it would write this hash back into the file. I seem to recall a bit of cleverness to avoid problems because writing the hash could potentially modify things like the file modification time. Maybe it just retried until the modtime didn't change. Not too difficult since the granularity was one second.

In principle this is easy to defeat if you know the hash algorithm, and if you knew how to use the Unix system calls to change a file's modification time. Since the original rogue program was closed source, the hash algorithm was secret. But eventually somebody reverse-engineered it, and programs emerged that were able to successfully copy save files.
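
Purely to illustrate the general idea (this is my own sketch, not the actual rogue algorithm; the hash, the metadata fields, and the names are all assumptions), a tamper check along these lines hashes the save file's bytes together with some stat-style metadata:

    // Illustrative sketch only: hash the save file's contents plus some
    // stat-style metadata (modification time, file identity), so that
    // copying or editing the file invalidates the recorded stamp.
    // Hash choice, fields, and names are assumptions, not rogue's scheme.
    import java.nio.file.*;
    import java.nio.file.attribute.BasicFileAttributes;
    import java.security.MessageDigest;

    public class SaveStamp {
        static byte[] stamp(Path save) throws Exception {
            BasicFileAttributes attrs =
                Files.readAttributes(save, BasicFileAttributes.class);
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(Files.readAllBytes(save));
            md.update(Long.toString(attrs.lastModifiedTime().toMillis()).getBytes());
            Object fileKey = attrs.fileKey();   // inode-like identity, may be null
            if (fileKey != null) {
                md.update(fileKey.toString().getBytes());
            }
            return md.digest();
        }
    }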


Oh MANNNN I spent so much time playing rogue back in the day. I got decently good at it, but it was hard. I don't think I ever actually won. I do recall a strategy of clearing out the easier levels in order to gain treasure and magic items. At a certain point you wouldn't be able to defeat all the monsters, so you'd try to find the stairs as quickly as possible. On the occasions where I'd get really far -- far enough to start looking for the amulet! -- I'd get trapped by some powerful monster like a Titan and get killed instantly.

Such a letdown. You'd play for hours and it would all be over in an instant. That damned tombstone.


Like several others here, my first programming language was BASIC. For this we owe Kurtz a debt of gratitude.

I know Dijkstra is famous for having said that we're mentally mutilated beyond hope of regeneration, but you know, I kinda think we didn't turn out half bad.


I know literally zero working programmers who learned programming the way Dijkstra thought it should be taught — not even Dijkstra himself, as Donald Knuth once gently pointed out.

Practically everybody in my generation started off with BASIC. On the other hand, at some point (when?), this practice stopped, and the newer generations turned out fine starting out with more civilized languages.


To be fair to Dijkstra, he was writing about how he believed university students should be taught. Two years before that cruelty paper was published, I was getting my first exposure ever to computer programming when my parents bought a Commodore 64 that came with a BASIC manual that showed how to make a Pong clone. I was 6 years old.

There's maybe an analogy to riding a bike. If you're aspiring to compete in a grand tour, you probably want power meters, lactate threshold and VO2 max tests in a lab, training that is principled and somewhat scientific in the way it builds toward a goal. If you're 6, your parents just put you on the seat and push you until your balance gets good enough that you can take the training wheels off.


> If you're 6, your parents just put you on the seat and push you until your balance gets good enough that you can take the training wheels off.

Which can take a very long time, because you have training wheels to begin with. If you're about to teach a kid how to ride a bicycle, it's far better to do without them. And the pedals too; learning how to use those is a huge distraction from learning to find and hold your balance.

There are special “kick bikes” for tiny tots to propel by kicking off the ground. And some of those can later be converted into “normal” bikes by attaching a chain drive, but... Feels kludgy, and is usually rather expensive.

If you can find an ordinary bike where you can get the saddle low enough for your kid to reach the ground with their feet, just remove the pedals and let them use that as a “kick bike”. If you can find a (gentle!) slope to practice on, you'll be able to replace the pedals in a matter of a few weeks at most, or probably days.


Yes, Dijkstra was writing about the education of university students. I don't know whether he ever wrote anything about elementary school (or earlier) computer education, but I doubt he'd have approved of it, let alone of a hands-on approach.


Total aside: Training wheels are a thing I remember from my youth but today (at least here) they are barely used at all anymore.

I'm still used to the phrase (taking the training wheels off) but I'm fairly certain my kids will grow up not using it.


Those pushbikes for littler kids let them learn balance and steering before they also have to learn how to pedal and brake.

So half the learning happens on those pushbikes before they move to real bikes.


Consider me naive, but in what way did Dijkstra think it should be taught? (Asking as someone who first learned to code in QBASIC.)


Other commenters are completely right to mention his concern for proofs and the "Cruelty of Really Teaching Computer Science", but the most BASIC-specific thing that he was associated with was criticism of the GOTO statement.

https://homepages.cwi.nl/~storm/teaching/reader/Dijkstra68.p...

In original BASIC, the GOTO is a foundational mechanism and a majority of programs would have used it, sometimes extensively. Dijkstra thought for many reasons that this wasn't good style and didn't promote clear thinking. And yes, one consequence of that is that it would be harder to prove programs correct or just to reason about whether they were correct.

Programs that overuse GOTOs (or from the point of view of later structured programming and functional programming advocates, perhaps programs that use GOTOs at all) were stigmatized as "spaghetti code".

https://en.wikipedia.org/wiki/Spaghetti_code

By the way, this concern is not just about aesthetics: some of the ideas that Dijkstra was advocating are arguably those that newer programming languages like Haskell and Rust can use to find bugs in code automatically at compile-time, or to make it harder to write certain bugs at all. The line between Dijkstra's advocacy and these techniques is complicated but I think there is a connection. So partly we might say that Dijkstra was not just concerned with how to make it easier for humans to think clearly about program correctness, but ultimately also about how to make it easier for computers to help humans automatically determine (parts of) program correctness. And it's true that the GOTO style complicates that task.


Kind of ironic that nowadays many people in our generation consider the newer generations to be lacking fundamental education because they never used GOTO based programming languages. I've talked to multiple people who lamented that young programmers have never done assembly or BASIC.


It’s helpful to have a mental model of how the computer works. I don’t know if it’s necessary that one have spent mountains of time building real software using a GOTO/jmp style, but having exposure to it would be nice, rather than hiding it away.

Jeff Duntemann’s assembly programming books included a “chapter 0” that I always loved, and which really stuck with me for how creatively it taught those topics.


I mean, CPUs do a bunch of work to make us believe they still operate just like a fast PDP-11, and I would wager that, besides compiler experts who work on the back-end parts of compilers, not many people have a real feel for modern hardware (obviously other than those who actually work on that given hardware).

So I'm not convinced that even those who think they know how it works know it actually.


We need updated board games that replicate the function of a CPU.


Assembly? Sure, that has some educational value.

BASIC? That’s just nostalgia for poverty.


Not entirely. GOTO can be pretty nice! And even the lack of structs probably has the advantage of helping to prepare you for today's world where column-major is back in style, for performance reasons.


Dijkstra thought of computer science as a subdomain of mathematics, and thought that hands-on experimentation with actual computers would mostly lead students astray. A program should all be worked out and proven correct before (optionally) feeding it to a computer, and testing and even more so debugging were abhorrent practices.

BASIC, on the other hand, is more aligned with what Seymour Papert later came to call "Constructionism": the student learns by experimentation.


It is the "correct by construction" approach vs the "construct by correction" approach.


Dijkstra was silly because everybody knows that Computer Science is the parent field of mathematics.

Mathematics is the study of all O(1) algorithms.

Computer Science is all other algorithms!


That's how it was with CS at Purdue when I was there at the beginning of the 1990s.

It was Computational Science, not Computer Science, and was in the math department.

We did everything with pen and paper until I got into my 300-level classes and we got access to the NeXT cubes and the IBM 3090.

I ended up switching to networking and the tech track, but it was definitely different...


Ironically, I grew up with limited access to computers, so I wrote many programs on paper first, including a FORTH implementation in assembly language I wrote over summer break with a typewriter, waiting for school to start again so I could actually test it hands on.


“On the Cruelty of Really Teaching Computer Science”[0]

[0] https://en.m.wikipedia.org/wiki/On_the_Cruelty_of_Really_Tea...


https://www.cs.utexas.edu/~EWD/transcriptions/OtherDocs/Hask... - describing why, in 2001, he thought Haskell was a good choice for a first college course in CS.

You can read many of his thoughts here: https://www.cs.utexas.edu/~EWD/welcome.html


He probably thought programming students should be taught Pascal, the academic language he pioneered. It is quite different from BASIC.


Pascal was Wirth, not Dijkstra.


Sorry, misremembering my meager computing history...


My first ever programming language was VBA for Excel because I found a magazine with a tutorial on that laying around in my father's office, no idea why. I think the second one was the Basic on my Texas Instruments calculator.


Enough LISP and Assembler can eventually cure the worst BASIC-inflicted brain damage, but some scars remain.


Messing with hardware and Forth... fun times


I've had some discussions with the author on this topic, some of which might have led to his writing this article.

One of the issues we discussed is trying to do this with Java arrays or collections. As he observes, Java arrays have a maximum size of 2^31 - 1 elements, because they're indexed by a signed 32-bit int. In practice, the Hotspot JVM has a maximum array size of 2^31 - 2 elements. (I'm not entirely sure why this is. I'd guess that it's because the various GC implementations have a maximum memory block size of 2^31 words, and two words are consumed by the object header.)

The author considered trying to store 10B elements in a Java Collection, but there are some roadblocks that make this either difficult or insurmountable.

A Java ArrayList stores all its elements in a single array; thus it has the same limitation as Java's arrays. This is a pretty hard limit.

A Java LinkedList can actually store more than 2^31 elements. It's a non-intrusive linked list, so each element has a separate list node, which is another Java object. Each list node object consumes 40 bytes (including the object header and fields) which means that storing two billion elements in a LinkedList has 80GB of overhead, not including the actual data elements. (And ten billion elements will require 400GB of memory for overhead.) This will probably work if you have enough memory, but this amount of overhead seems prohibitive.

The author instead used the `fastutil` library to work around these limitations.
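
To sketch the workaround, here's roughly the "big array" idea that such libraries rely on: split a long index into a segment index plus an offset within that segment. (The segment size and names here are my own, not fastutil's actual API.)

    // Minimal "big array" sketch: long-indexed storage backed by fixed-size
    // segments, so the total element count can exceed 2^31 - 1.
    public class LongBigArray {
        private static final int SEGMENT_BITS = 27;            // 2^27 longs = 1 GiB per segment
        private static final int SEGMENT_SIZE = 1 << SEGMENT_BITS;
        private static final int SEGMENT_MASK = SEGMENT_SIZE - 1;

        private final long[][] segments;

        public LongBigArray(long length) {
            int nSegments = (int) ((length + SEGMENT_SIZE - 1) >>> SEGMENT_BITS);
            segments = new long[nSegments][];
            for (int i = 0; i < nSegments; i++) {
                long remaining = length - ((long) i << SEGMENT_BITS);
                segments[i] = new long[(int) Math.min(SEGMENT_SIZE, remaining)];
            }
        }

        public long get(long index) {
            return segments[(int) (index >>> SEGMENT_BITS)][(int) (index & SEGMENT_MASK)];
        }

        public void set(long index, long value) {
            segments[(int) (index >>> SEGMENT_BITS)][(int) (index & SEGMENT_MASK)] = value;
        }
    }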

Pretty impressive that Pharo Smalltalk is able to do this.


Yes, very strange to see this flagged. Doesn't seem controversial at all.


Cool! Yes, that’s been our experience in Java as well. The randomized order of the unmodifiable Set and Map (from `Set.of` and `Map.of`) has enabled us to make optimizations to the internals mostly with impunity. We’ve done so twice since their introduction and we might do so again.


(from Feb 2021)

Some of the JDK's unmodifiable collections, such as those from `Set.of()` and `Map.of()`, also randomize iteration order. At the time they were added, Go and Python also randomized iteration order. However, more recently, Python moved away from randomization. Early in the Python 3.x releases, dict iteration order was randomized. In 3.6, iteration order was insertion order, but only as an implementation detail. In 3.7, this was elevated to a guarantee.

https://docs.python.org/3/library/stdtypes.html#dict

(search for "dictionary order")

There are occasional complaints about Java's unmodifiable collections' randomized order, as it can occasionally cause intermittent test failures or even failures in production. However, the advantage of the randomized order is that we've been able to reorganize the internal implementations of these data structures -- twice -- without worrying about compatibility problems caused by changing the ordering. Historically this has been a problem with other structures such as `HashMap`. Even though HashMap's order isn't guaranteed, it's predictable and stable, and changing its ordering has definitely flushed out bugs.
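
A quick way to see the randomization (the exact ordering is an implementation detail, but the iteration order of `Set.of` typically differs from one run of the JVM to the next):

    import java.util.Set;

    // Run this twice: the printed order will usually differ between JVM runs,
    // because the iteration order is randomized at JVM startup.
    public class IterationOrderDemo {
        public static void main(String[] args) {
            Set<String> s = Set.of("alpha", "bravo", "charlie", "delta", "echo");
            System.out.println(s);   // e.g. [charlie, echo, alpha, delta, bravo]
        }
    }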


IMO the real problem is that sometimes you really do want a deterministic order (not caring which one), but the container doesn't offer any API that provides it.

And since hash-first languages provide a broken API, there's no way to provide it for arbitrary types. Compare-first languages (like C++) generally provide it, but paying the price of tree-based structures (assuming a B-tree can't negate it) isn't actually necessary.

Rather than either of those choices, I argue that languages really should be "key-first" - rather than having classes implement hashing or comparison themselves, they should return a tuple of the data that will be fed into said hash or comparison. Besides being a nicer API, this also prevents many common bugs.
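
A rough sketch of what "key-first" could look like in Java terms (the names and shape are hypothetical, not an existing API): the class states its key once, and equality, hashing, and ordering are all derived from that one definition.

    import java.util.Comparator;
    import java.util.List;

    // Hypothetical "key-first" style: key() is the single source of truth;
    // equals, hashCode, and a comparator are all derived from it.
    final class Person {
        final String lastName;
        final String firstName;
        final int birthYear;

        Person(String lastName, String firstName, int birthYear) {
            this.lastName = lastName;
            this.firstName = firstName;
            this.birthYear = birthYear;
        }

        private List<?> key() {
            return List.of(lastName, firstName, birthYear);
        }

        @Override public boolean equals(Object o) {
            return o instanceof Person p && key().equals(p.key());
        }

        @Override public int hashCode() {
            return key().hashCode();
        }

        // A deterministic ordering derived from the same fields.
        static final Comparator<Person> ORDER =
            Comparator.comparing((Person p) -> p.lastName)
                      .thenComparing((Person p) -> p.firstName)
                      .thenComparingInt(p -> p.birthYear);
    }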


C++ gives both ‘map’ and ‘unordered_map’, and in my experience idiomatic C++ uses unordered_map unless you actually need a tree.


> IMO the real problem is that sometimes you really do want a deterministic order (not caring which one), but the container doesn't offer any API that provides it.

This would be a lot more convincing if you had an actual concrete example where this was true, rather than just insisting that "sometimes" it's true.


A few examples I’ve come across: hashing, float summation, reproducible serialisation


I get two of those, can you tell me more about float summation?


Because the exponent varies, the order in which we add floats matters: adding them in one order can give a very different answer than adding them in another. If we expected a consistent result, that's very surprising.

Suppose you have a billion, and also you have a million ones, all as 32-bit floating point numbers. If we begin with the billion, and add each one, the additions do nothing, because in the representation used for the billion a single one is too small to matter, so our answer is just one billion again each time -- but if we begin with the ones, adding the billion last, we've reached a million when we add the billion, that's not too small to matter and we get 1.001 billion.
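
To make that concrete, a small Java sketch with 32-bit floats:

    // Order-dependence of float addition: a billion plus a million ones,
    // summed in two different orders.
    public class FloatOrderDemo {
        public static void main(String[] args) {
            float bigFirst = 1_000_000_000f;
            for (int i = 0; i < 1_000_000; i++) {
                bigFirst += 1f;          // 1 is below half the ulp of 1e9 (32), so it rounds away
            }

            float onesFirst = 0f;
            for (int i = 0; i < 1_000_000; i++) {
                onesFirst += 1f;         // exact: 1,000,000 is well below 2^24
            }
            onesFirst += 1_000_000_000f;

            System.out.println(bigFirst);    // 1.0E9
            System.out.println(onesFirst);   // 1.001E9
        }
    }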


Use the Kahan summation algorithm.
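
For reference, a minimal Kahan (compensated) summation sketch in Java:

    // Kahan summation: carry a running compensation for the low-order bits
    // that a plain accumulator would drop.
    public class KahanSum {
        static float sum(float[] values) {
            float sum = 0f;
            float c = 0f;                // compensation for lost low-order bits
            for (float v : values) {
                float y = v - c;         // apply the correction to the next addend
                float t = sum + y;       // low-order bits of y may be lost here
                c = (t - sum) - y;       // recover what was just lost
                sum = t;
            }
            return sum;
        }
    }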


Kahan solves my examples but in general cannot be relied on to deliver any specific benefit - it's never worse but it may not be meaningfully better either.

Pairwise algorithms can promise better results but aren't applicable unless we're OK with the idea of separately storing the numbers somewhere while we run the algorithm or we provide this hash table with a random access mechanism it may otherwise have no use for.


Huh. I guess somebody could build a specialised container for this purpose, which deliberately has some arbitrary but unchanging ordering. Certainly that looks practical in Rust (indeed it may already exist, I haven't looked).


LRU caching.


Java and JavaScript both have a way to get keys back in insertion order.

It’s pretty simple to implement an LRU cache when you have time-ordered iterators.
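
In Java, the usual sketch is a LinkedHashMap in access-order mode (the capacity and names here are just illustrative):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Minimal LRU cache: access-order LinkedHashMap that evicts the
    // least-recently-used entry once the size exceeds maxEntries.
    class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        LruCache(int maxEntries) {
            super(16, 0.75f, true);      // accessOrder = true
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries;
        }
    }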


Note the date line on the article:

    Published on Sunday, January 15, 01989  •  35 years, 7 months ago
Specifically, the five-digit year! Also the explicit listing of the age of the article. Most sites have a “human readable” or “friendly” date such as “published yesterday” but only for recent dates. Some sites, such as news sites, add a warning if the article is more than say five years old. Here, it’s as if they’re proud of the age. Since this was published by the Long Now Foundation it seems likely these were done deliberately.


Yeah, this is a kind of calling card of the Long Now Foundation. See their landing page: "established in 01996 to foster long-term thinking"

https://longnow.org/

