jakecraige's comments

Another vote for SynthRiders here. I have the same complaint about Beat Saber as OP: it can be played lazily with just wrist flicks.

Synth Riders forces you to move your body and make large sweeping motions, so it feels like dancing, AND it has online multiplayer with voice chat, which regularly has groups hanging out and playing for fun.

That last part makes it so much more enjoyable for me


The main app powering coinbase.com, the “monorail”, is a Rails app backed by MongoDB. It’s typically what’s being referred to when people talk about breaking out microservices.

Historically most other services were Rails/Sinatra with Postgres. These days there’s a lot more Golang being used for new services.

There are also some services that are serverless, using Lambda and DynamoDB, but these are a minority.


Thanks. But is Ruby considered deprecated there? I'm mainly a Ruby guy and want to keep it that way, which is why I'm asking.


We (Coinbase) already sign cold storage transactions in a distributed manner so this isn’t an issue :)


Brian calls this out and describes how Coinbase plans to do this in the “This will require a huge shift in how we do things. How will we get there?“ section.

> To address all of these, we will form a cross-functional team to oversee this transition. This group will identify the changes we must make to become a remote-first company (e.g., around people management, recruiting/talent, culture and connection, and documentation and async work…), host open design sessions with all of you to surface ideas, considerations, dependencies, and concerns, and partner with internal experts to redesign how all of this works for a remote-first Coinbase.


This would work for a different definition of “remote-first”.

In the definition that I believe makes sense, there is just one defining trait, and the author manages to tiptoe around it.

It’s unclear whether the changes promised are cosmetic (to convince remote workers they aren’t disadvantaged) or fundamental (actually making remote staff equivalent to, or even prioritized ahead of, on-location staff).


It’s not assuming MITM or that the attacker can upload the signature to the site.

The attack is that the attacker can reuse the already uploaded signature in a way that allows them to get certificates issued under their own account instead of the initial owner’s.

This blog post is a little confusing about that, since it does read like they are supposed to upload their own sig along with the graphic used.

This post and the linked IETF report are a little clearer: https://www.agwa.name/blog/2015/12


BTW this is not yet in the book, so if you have any suggestions to make the explanation or the diagrams clearer, I will take them :)


Matthew Green has a related post about this speaking to “multiple encryption” where people do the same thing with ciphers. [1]

A very generic take would be that, depending on the system, it may be possible to do this securely, but as with all crypto, there be dragons.

For example, let’s say you have two hashes H1 and H2 and want to use the double hash to prove existence of a file in history. Publish the hash to a blockchain or something.

So the file is hashed H1(H2(file)). In what ways could you break this if one of the hash functions is broken?

The first way is if you want to dispute the validity of what file was hashed. Assume they later publish the file and you have a second-preimage attack on H2.

You can create a different file where H2(file) = H2(newFile), and because H1 is deterministic, this second file verifies. It’s now no longer clear which is the true file. While a single hash function also fails under this attack, you increase your exposure to possible attacks by introducing a second one.
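
Just to make the failure mode concrete, here's a rough sketch of the construction, with sha256 standing in for H1 and md5 standing in for the hypothetically broken H2 (both are placeholders, not a recommendation):

    import hashlib

    def h2(data: bytes) -> bytes:
        # inner hash; pretend this is the one that later gets broken
        return hashlib.md5(data).digest()

    def h1(data: bytes) -> bytes:
        # outer hash, still considered strong
        return hashlib.sha256(data).digest()

    def commitment(data: bytes) -> str:
        # the composed construction from above: H1(H2(file))
        return h1(h2(data)).hex()

    original = b"the real document"
    # a second-preimage attack on H2 would let an attacker craft bytes
    # with the same inner digest; this value is only a placeholder
    forged = b"<attacker-crafted bytes with the same inner digest>"

    # because H1 is deterministic, equal inner digests force equal
    # commitments: h2(forged) == h2(original) implies
    # commitment(forged) == commitment(original)
    print(commitment(original))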

If you have control over the verification procedure you can imagine a similar attack with only a break in H2 by not even using H1 to generate the output.

[1]: https://blog.cryptographyengineering.com/2012/02/02/multiple...


The way to combine hash functions for collision resistance is not composition (as with encryption) but concatenation: H'(file) = (H1(file), H2(file)). Now to have a collision on H' you need to collide both H1 and H2. But now pre-image resistance suffers.
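
A rough sketch of the concatenation approach, with sha256 and sha3_256 chosen arbitrarily as H1 and H2:

    import hashlib

    def h_prime(data: bytes) -> tuple:
        # concatenation of two hashes: a collision on H' requires a
        # simultaneous collision on both H1 and H2
        return (hashlib.sha256(data).hexdigest(),
                hashlib.sha3_256(data).hexdigest())

    stored = h_prime(b"file contents")
    candidate = h_prime(b"file contents")
    assert stored == candidate  # both components have to match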


Checking two full hashes and requiring both to match only improves pre-image resistance. However, you now need twice the space to store the hash, and efficiency suffers, likely worse than just the sum of the two algorithms' costs due to cache effects of running two different algorithms on the data. If you use shorter or weaker hashes you might end up with two breakable hashes (either now or by some potential quantum computer) rather than one unbreakable hash.

Some package systems store multiple secure hashes and pick one at random to verify.
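
As a hedged sketch of that "pick one at random" idea (not any particular package manager's metadata format):

    import hashlib
    import random

    artifact = b"package bytes"

    # hypothetical metadata: several digests recorded per artifact
    recorded = {
        "sha256": hashlib.sha256(artifact).hexdigest(),
        "sha512": hashlib.sha512(artifact).hexdigest(),
        "blake2b": hashlib.blake2b(artifact).hexdigest(),
    }

    def verify(data: bytes, recorded: dict) -> bool:
        # pick one recorded algorithm at random and check only that one
        algo = random.choice(list(recorded))
        return hashlib.new(algo, data).hexdigest() == recorded[algo]

    print(verify(artifact, recorded))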


I haven't implemented it, but I like what it's going for. Modern crypto is all about making it easy to do the right thing without thinking about it, and PASETO does that well with its versioning scheme and relying on established crypto from libsodium.


Nonce reuse is easily preventable with deterministic nonces or sufficient randomness. The PS3 used a fixed nonce, which is a whole different problem.

I agree that JWT has all sorts of flexibility that make it hard to use well but NIST curves work just fine.

If you think they are backdoored then sure, Ed25519 is a better option, but real-world constraints may require you to use a NIST curve for now.
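
For what it's worth, deterministic (RFC 6979 style) nonces are readily available; a minimal sketch assuming the third-party Python "ecdsa" package:

    import hashlib
    from ecdsa import SigningKey, NIST256p  # third-party "ecdsa" package

    sk = SigningKey.generate(curve=NIST256p)
    message = b"payload to sign"

    # sign_deterministic derives the nonce from the key and message
    # (RFC 6979), so a weak or repeated RNG output can't leak the key
    sig = sk.sign_deterministic(message, hashfunc=hashlib.sha256)
    vk = sk.get_verifying_key()
    assert vk.verify(sig, message, hashfunc=hashlib.sha256)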


I don't think the NIST curves are backdoored, but they obviously have some serious theoretical and practical issues. [1]

As far as I know, neither of these issues is relevant to their usage with ECDSA (although invalid curve attacks should be a good enough reason to avoid using these curves with ECDH completely), but experience with SHA-1 and RC4 has taught us that algorithms with theoretical problems are likely to be practically broken sooner or later.

But NIST curves are not even the main issue with the ES* algorithms in JWT. The real issue is ECDSA:

1. Verification is slow. P-256 is about 2-4 times slower than Ed25519 [2]. This kind of speed hit may often be unacceptable.

2. Nonce reuse is an issue. The PS3 implementation was extremely bad, but random number generators can often be broken. This is not a theoretical issue - it used to happen very often with docker containers until recently. There are alternative schemes that allow using ECDSA with a deterministic synthetic nonce [3], but this is not supported by any JWT implementation I know of. Ed25519, on the other hand, uses a synthetic nonce.
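
To illustrate that last point, Ed25519 signing is deterministic out of the box; a small sketch assuming the Python "cryptography" package:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    private_key = Ed25519PrivateKey.generate()
    message = b"token payload"

    # the nonce is derived from the key and message, so signing the same
    # message twice yields the same signature; no per-signature RNG to get wrong
    sig1 = private_key.sign(message)
    sig2 = private_key.sign(message)
    assert sig1 == sig2

    private_key.public_key().verify(sig1, message)  # raises on failure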

[1] https://safecurves.cr.yp.to/
[2] https://bench.cr.yp.to/results-sign.html
[3] https://tools.ietf.org/html/rfc6979


They have a bug bounty program that gives you permission to do certain kinds of things without it being illegal, since you’re planning to report anything you find (and get paid for it).


Y'all are talking about two different things.

Secret key recovery via nonce reuse (the linked SO post) is different from simply trying a range of integers, which is mostly what these researchers did.


There are many addresses created from integers under 1,000,000. This is nothing special. There are also many addresses created from basic words converted to sha256 and then used as the private key hex. E.g., ‘Satoshi Nakamoto’.
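
For illustration, the phrase-to-key step is just this (the actual address derivation, secp256k1 point multiplication plus hash160/base58check encoding, is omitted):

    import hashlib

    # "brainwallet"-style private key: sha256 of a memorable phrase, as hex;
    # anyone can recompute it, which is why such addresses get swept quickly
    phrase = b"Satoshi Nakamoto"
    private_key_hex = hashlib.sha256(phrase).hexdigest()
    print(private_key_hex)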

