Hacker News | _d4bj's comments

Most languages with monads have tools for working with them. In Haskell, for example, even without `liftM2` or `sequenceA`, you can simply do

    case (xm, ym) of
        (Just x, Just y) -> x + y
        _ -> 0


That's got nothing to do with monads, though; you can do the same in Swift or Rust, which have a monadic option type but no general monad abstraction:

    match (xm, ym) {
        (Some(x), Some(y)) => x + y,
        _ => 0
    }


Oh, come on. Is yielding 0 there really the best approach? It's a cop-out. You want full referential transparency.


About ten of us are playing in https://modalduality.org/sibylant-graze/play/lobby if you'd like to join!

A bit about the technical stack: the server was written in Haskell using WebSockets. Source at https://git.modalduality.org/sibylant-graze/tree/.


In general, forward secrecy involves generating a new key for each conversation with Diffie-Hellman or similar.
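Per-conversation key generation can be sketched in a few lines. This is a toy illustration only (the prime and generator below are made up and far too small for real use; real DH uses standardized 2048-bit+ groups or an elliptic-curve variant), but it shows why forward secrecy falls out: each session uses a fresh random secret, so compromising one session key reveals nothing about the others.

```python
# Toy Diffie-Hellman sketch: fresh ephemeral keys per conversation.
# Parameters are illustrative only -- do not use for anything real.
import secrets

P = 0xFFFFFFFB  # small illustrative prime
G = 5           # illustrative generator

def fresh_keypair():
    # A new random secret is drawn for every conversation.
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_secret(my_priv, their_pub):
    return pow(their_pub, my_priv, P)

# Two separate conversations yield two independent session keys.
a1, A1 = fresh_keypair(); b1, B1 = fresh_keypair()
a2, A2 = fresh_keypair(); b2, B2 = fresh_keypair()

key1 = shared_secret(a1, B1)   # equals shared_secret(b1, A1)
key2 = shared_secret(a2, B2)   # independent of key1
```

Both sides derive the same session key, and learning `key1` tells an attacker nothing about `key2`.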

According to https://crypto.stackexchange.com/questions/5610/diffie-hellm..., Diffie-Hellman is not post-quantum secure, but https://en.wikipedia.org/wiki/Supersingular_isogeny_key_exch... is a DH alternative that is.


Which is partly why we wrote and open-sourced a SIDH implementation in TLS 1.3: https://blog.cloudflare.com/sidh-go/


I <3 cloudflare, you guys are awesome!


Do you have any performance numbers for SIDH vs ECDHE?


Cloudflare has a Go implementation of SIDH with p751: https://blog.cloudflare.com/sidh-go/

Here’s an overview of the performance from a patch by Armando Faz Hernandez: https://github.com/cloudflare/p751sidh/pull/2


The main idea is to explore the notion that skinny triangles are unaesthetic (and thus undesirable for triangulations). This is true for many tasks in computational geometry, but not so true in art. Bringing the two together produces a jarring conflict where the Delaunay triangulation and your brain's sense of aesthetics disagree.

It's not really supposed to be useful, just an exploration in further reducing an already reduced art style: what is lost when we force all shapes to be triangles that tend toward larger angles? For one, I think it destroys perspective, since one common trope is to make objects narrower as they get farther away.

Although the entire thing is mostly a joke based on their shared last names.


Thanks for your answer.

For what it's worth, I think the result is remarkably well adapted to the art style; that said, I think you overstate your conclusions, and I don't think it's fair to extrapolate claims about the importance of skinny triangles in art in general.

This representation conveys most of what the originals had to offer, with the added benefit of being lightweight content in a meaningful format. It's a good sign that your questions pulled me into the subject, but I don't think there's much more to extrapolate from the experiment.

Now how about trying Delaunay triangulation on animated content, with the added constraint of optimizing for fluidity? (Like the work on transferring art styles to pictures and videos from last year that you may have noticed, published at SIGGRAPH.)


> What makes it okay to identify the naming of a large number with producing that large number of things (e.g. 1's)?

Nice catch! I glossed over this, but it's pretty important. The issue is that for any _specific number_, it takes a constant amount of time to write a number larger than it. Related: solving chess is technically O(1), since there are only a constant number of positions to check.

What we need instead is a sequence of numbers; sequences can be compared by asking "does the first eventually reach a point after which it is always larger than the second?" We can parametrize the "biggest number" by the number of seconds given: say we can write down one character per second. Then a person's largest-number function might be something like f(n) = 9 ^ n, which is easy to see is computable, and thus eventually smaller than BB(n) for the same number of seconds.
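The "eventually always larger" ordering can be seen with two ordinary computable functions (this pair is a made-up example, not from the original comment): a polynomial wins for small inputs, but an exponential dominates from some point onward. Both are of course puny next to BB(n), which eventually outgrows every computable function.

```python
# Eventual dominance between two computable sequences:
# n**100 is ahead early on, but 9**n overtakes it for good.
f = lambda n: 9 ** n     # exponential, eventually larger
g = lambda n: n ** 100   # polynomial, larger at first

small_n = 10     # here the polynomial is still ahead
big_n = 1000     # here the exponential has taken over
```

The same ordering is how f(n) = 9 ^ n compares to BB(n): computable, so eventually (and permanently) smaller.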

In practice, even something like BB(100), let alone BB(BB(100)), can't actually be determined by any method we could carry out.

> which was: what characterizes the Busy Beaver function?

The proper one is just the number of 1s, but I talked about the shift function because it's easier to see why it's uncomputable. Tibor Radó proved that the Busy Beaver function is also uncomputable in his 1962 paper; if you're interested, check out his reduction.


Thanks! I should probably include some more details on how to program the simulator.

Turing machines are just one simple way of formalizing programs, without all the complexities of modern programming languages. The core idea is that whatever a supercomputer (or any future computer) can compute, a Turing machine can compute too (it might just take more time): this is called the Church-Turing thesis.

The specifics on how to program Turing machines don't actually matter too much in modern complexity theory, but the rigorous basis is there when needed.

--

A Turing machine is always in some "state" (starting at A), and its head is always over some cell of the tape, which is initially all zeros. The program, which is the grid in Angelic Hierarchy, specifies what the Turing machine should do. If it's in state A and the cell under the head is 0, look up the corresponding cell of the grid (marked in bright red): it tells you the next state to go to, a number to write on the tape, and a direction to step in (Left or Right).
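These mechanics fit in a few lines of code. Below is a minimal interpreter sketch (names and the trivial example machine are my own, not one of the game's puzzles): a state, a head position, an infinite tape of 0s, and a transition table mapping (state, symbol) to (write, move, next state).

```python
# Minimal Turing-machine interpreter matching the description above.
def run(table, state="A"):
    tape, pos, steps = {}, 0, 0   # sparse tape: missing cells read as 0
    while state != "HALT":
        sym = tape.get(pos, 0)
        write, move, state = table[(state, sym)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        steps += 1
    return tape, steps

# A trivial 1-state machine: in state A on a 0, write a 1, step right, halt.
trivial = {("A", 0): (1, "R", "HALT")}
tape, steps = run(trivial)   # one 1 written, one step taken
```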

For intuition about why more states allow for more complex programs, try writing a 1-state program that writes two 1s and then halts (it's not possible). Then try it with 2 states.


Thanks for the awesome thought provoking game!

Besides storing 1s and 0s on the tape, a Turing machine program can encode data in the state it's in (which is kind of the whole point). I think that's what goldenkey was getting at with "you can use the code as memory": the fact that you're in this code or that code is how it remembers.

So to "pick up" a bit from the tape, move it somewhere else, and place it down, first you'd design a state machine to "move somewhere else" (whatever that means, let's say simply move 10 steps to the left), and then you'd duplicate it so you have twice as many states, one for "carrying a 1" and another for "carrying a 0".

When the read head sees the 1 or 0 on the tape (the source data), it branches to either the first state of the "carrying a 1" or the "carrying a 0" copy of the "move somewhere else" program, to move to the destination location on the tape. Each of those sub-programs would then have a final state that respectively writes a 1 or a 0 onto the tape, and then they can (but don't have to) merge back together into the same state (continuing the main "calling" program).

So the more bits you need to temporarily store in the imaginary Turing machine head that moves around (similar but not identical to a register), the more states you need (which can explode exponentially for complex programs)! So you could move two bits at a time by making four copies of the "move somewhere else" program for "carrying a 00 .. 11", and so on.
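The state-duplication trick above can be sketched by generating the transition table programmatically: one copy of a "move N cells left" routine per carried bit, plus a pickup state that branches between them. All the names and encoding choices here are mine, made up for illustration, and the table is run on a tiny interpreter inline.

```python
# Sketch of "duplicate the mover for each carried bit": two copies of a
# move-N-left routine, selected by the bit picked up from the tape.
N = 10  # how far to carry the bit

def build_table(n):
    table = {}
    # Pickup state: erase the source bit and branch into the copy of the
    # mover that "remembers" it.
    table[("pick", 0)] = (0, "L", ("carry", 0, 0))
    table[("pick", 1)] = (0, "L", ("carry", 1, 0))
    for bit in (0, 1):
        for i in range(n - 1):
            for sym in (0, 1):
                # Pass over the tape unchanged; the step count lives in the state.
                table[(("carry", bit, i), sym)] = (sym, "L", ("carry", bit, i + 1))
        for sym in (0, 1):
            # Last step: drop the carried bit at the destination and halt.
            table[(("carry", bit, n - 1), sym)] = (bit, "L", "HALT")
    return table

def run(table, tape, pos=0, state="pick"):
    while state != "HALT":
        sym = tape.get(pos, 0)
        write, move, state = table[(state, sym)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return tape

table = build_table(N)
tape = run(table, {0: 1})   # a 1 sitting at cell 0 gets carried 10 cells left
```

Note the state count: one pickup state plus two copies of the 10-state mover, i.e. 21 states to carry a single bit; carrying two bits would need four copies, exactly the exponential blowup described above.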

Instead of using a loop counter to loop four times, you could effectively just duplicate the states of your program four times, linking the end of one to the beginning of the next.

Making a conditional loop that loops a bounded number of times is a lot harder (and that's kind of what the game is about), requiring a whole lot more states, perhaps linked together in a switch structure like some sort of "Duff's device", or using some slice of tape for temporary storage.

It's easiest to simply represent numbers in unary (7 base 10 = 1111111 base 1) instead of binary, using 0 as a "stop bit", because memory efficiency isn't a concern (you have infinite tape, might as well use it!).

Manipulating binary numbers with Turing machines is a big deal since there is no direct support for it, so you have to implement it in "software". If you thought software floating point was slow, you haven't tried using a software binary library for Turing machines!

So to pick up and copy a unary number (as opposed to a single bit), bounded from 0 to n, you would have to make an initial state that looked at the first bit. If the first bit was 0 then it branches to the 0'th copy of the "move to somewhere else" program, which at the end branches to the "write the 0 stop bit" state. If the first bit was 1, then it would move right and test the next bit. If the second bit was 0, it would do the same as before, but the "move to somewhere else" program would be modified to account for having already moved one to the right, then it would write 1, move right, and merge back with the "write 0 stop bit" of the 0'th branch. And so on to n, so you have n modified copies of the "move to somewhere else" program, one for each possible value of the unary number, all merging back together into the final "write 0 stop bit" state...

Or you could encode unary numbers as their value plus n consecutive 1's, then you could reserve patterns of fewer than n consecutive 1's as special control patterns or tags, which the "move somewhere else" functions could scan for, and not confuse them with unary numbers. Anything's possible, and you can use different encodings in different parts of the tape, just like normal computers. There is no one right way to program a Turing machine!

It's pretty tedious, and it would be easier to program with a macro assembler or VHDL-like tools (or Forth, which was used for programming the CAM-6 cellular automata rules). I wonder what high-level Turing machine programming and debugging environments people have developed?

In the Philosophy of Computing course I took at UMD we used the "PCOMP" TM simulator, which ran on a PC and was pretty good for its time, but by today's standards it was like trying to construct a mnemonic memory circuit using stone knives and bearskins.

http://terpconnect.umd.edu/~cherniak/philcomp/

The TM simulator in this Angelic Hierarchy game has an elegant interface, simple enough to make it an easily understandable and playable game, and it would be great to see how far it could go towards developing a full blown Turing machine visualization, experimentation, programming and debugging environment!

Thanks to hga's impeccable hoarding instincts, here is Marvin Minsky's enigmatic Universal Turing Machine written in TECO:

https://news.ycombinator.com/item?id=10161002

  MSG: APL    1     
  DISTRIB:  *BBOARD
  EXPIRES: 03/17/81 23:08:54
  MINSKY@MIT-MC 03/11/81 23:08:54 Re: too-short programs
  APL is compact, I suppose.  So is TECO.  When I wrote the following
  Universal Turing Machine, which works, I actually understood it.

  i1Aul qq+^^0:iqm^[29iiq\356y0L1 00L1 11L2 A1L1
  y0L1 0yR2 1AR2 AyR6 yyL3 00L0 1AL3 A1L4 yyL4 0yR5 11L7 A1L4
  yyR5 0yL3 1AR5 A1R5 yyR6 0AL3 1AR6 A1R6 y0R7 0yR6 11R7 A0R2
  ^[j<sR^[;-d-2ciql-^^^[ci"ed^^^[cii^[ciuq'^[>
  j<sL^[;-d-2ciql-^^^[ci"ed^^^[cii-2c^[ciuq'^[>jxblx1lx2lx3lx4lx5lx6lx7hk
  iyyAyyAyy^[32<i0^[>ji110101110000010011011^[ 1uq<htmbqq=>

  I do not advise attempting to understand this code, which is
  almost as bad as that for the Universal Turing machine.


In case you think you've figured out the 2-state solution and you want to check that you can't do better, I've uploaded the number of 1s your machine should output at https://modalduality.org/static/busy-beaver-2.txt, but did not include the actual configuration.


I'm confused, because this answer seems to contradict what "a little grunt work will ascertain" in the article you linked as inspiration.


Ah, so Aaronson is talking about the Busy Beaver shift function of 2 there. I gave the answer for the Busy Beaver function counting the maximum number of 1s.


Gotcha, thanks!


In addition to 4Clojure, I found the Brave Clojure book (https://www.braveclojure.com/) helpful for understanding how to structure programs in a "Clojure-y" way: e.g., representing data in lists and maps instead of structured classes to a greater extent than in OOP, or using the nil value effectively when chaining functions together, almost like the Maybe monad.


Is it easy to write timing-attack-resistant crypto code in Clojure? Sounds interesting, what company was this for? Thinking of implementing Damgard-Jurik myself sometime.


It was just a proof of concept thing for a hackathon, hardly industry-grade, but was quite fun to write.

The crypto code was quite fun to write in Clojure; literally yesterday I just got permission to open source it.

If you want to play with it, the code is available here: https://github.com/tombert/scudlib

Since I have the ability to work on it in my free time, I'll probably add more schemes.


Yup! They should have used a commitment scheme instead; for example, SHA256(winning numbers || 512-bit nonce). To reveal, publish the winning numbers and the nonce, and Mr. X can check that the hash matches. The nonce introduces extra entropy so Mr. X can't just brute-force the hash, but since finding any collision is difficult, it doesn't let the alien cheat by trying different nonces (unlike with different keys in encryption schemes).
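The scheme described above fits in a few lines with a standard hash library. Encoding details below (the separator-free concatenation, the sample numbers) are illustrative choices of mine, not something specified in the thread.

```python
# Hash-based commitment sketch: commit = SHA256(numbers || nonce);
# reveal by publishing both, so anyone can recompute and compare.
import hashlib
import secrets

def commit(numbers: bytes):
    nonce = secrets.token_bytes(64)                 # 512-bit nonce
    digest = hashlib.sha256(numbers + nonce).digest()
    return digest, nonce                            # publish digest, keep nonce secret

def verify(digest: bytes, numbers: bytes, nonce: bytes) -> bool:
    return hashlib.sha256(numbers + nonce).digest() == digest

digest, nonce = commit(b"4 8 15 16 23 42")
ok = verify(digest, b"4 8 15 16 23 42", nonce)      # matches
cheat = verify(digest, b"1 2 3 4 5 6", nonce)       # does not match
```

The nonce provides hiding (the digest can't be brute-forced from the small space of winning numbers), while SHA-256's collision resistance provides binding (the committer can't later open it to different numbers).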

