REM: Resource-efficient mining for blockchains (acolyer.org)
45 points by monort on Oct 2, 2017 | 18 comments


Isn't this just exchanging proof of work for proof of Intel? Why not just have Intel run a centralized database with widely replicated logs?


If I'm understanding you, this seems like unwarranted cynicism, and I say this as someone who is quite cynical of blockchain technologies.

Would you also criticize the current state of Ethereum as "proof of Nvidia" and imply that Ethereum is centrally controlled by Nvidia, just because Nvidia makes the GPUs that many people are using to mine it?


If u/whoah and I understand the proposal correctly, Intel has the ability to mine as many blocks as it wants in a REM blockchain, at no cost, by signing a fake attestation from the IAS (Intel Attestation Service, or whatever it's called).

Nvidia does not have this ability in Ethereum. They have to pay for electricity just like everyone else.
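
A minimal sketch of why that's a single trust root (a toy Ed25519 key pair standing in for Intel's actual report-signing key; the real flow involves SGX quotes and EPID/ECDSA, so treat this as illustration only):

```python
# Toy model of the IAS trust root, not the real attestation protocol.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

intel_key = Ed25519PrivateKey.generate()  # held by Intel alone
intel_pub = intel_key.public_key()        # baked into every verifier

def sign_attestation(report: bytes) -> bytes:
    # Whoever holds the private key can "attest" to anything,
    # including work that was never performed.
    return intel_key.sign(report)

def verify_attestation(report: bytes, sig: bytes) -> bool:
    try:
        intel_pub.verify(sig, report)
        return True
    except Exception:
        return False

# An honest miner needs real SGX hardware to obtain a signature;
# the key holder can mint valid "proofs of work" for free.
forged = sign_attestation(b"I totally did 10^18 hashes")
assert verify_attestation(b"I totally did 10^18 hashes", forged)
```

Whoever holds that private key can mint unlimited valid attestations, which is why this degenerates into proof of Intel.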


Ah. I hadn't caught the part where it relies on Intel's Attestation Service. That is indeed weird.


PoW is how Bitcoin blocks are “sealed” forever, which is what makes Bitcoin possible. How is that not useful?
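
For anyone unfamiliar, the "seal" is just a hash below a difficulty target. A minimal sketch (real Bitcoin double-hashes an 80-byte header; this is simplified):

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose SHA-256 hash falls below the target."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # the "seal": expensive to find, trivial to verify
        nonce += 1

nonce = mine(b"prev_hash|merkle_root|timestamp", difficulty_bits=16)
```

Rewriting a sealed block means redoing this search for it and every block after it, which is what makes the history effectively immutable.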


Each mined block is like a digital Pyramid of Giza: a massive number of computations sealing the transactions into a mathematical tomb. Really neat.


And just as with that pyramid, there are far less costly ways to seal something than mass manual labor.


But none as impressive as the Seven Wonders of the World.


To whatever degree the "useful" work has economic value, that value offsets the expenses of a miner attempting a 51% attack. In PoW it's only the wasted work that secures the currency.
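
Back-of-envelope, with made-up numbers:

```python
# Illustrative numbers only: per-block cost of a 51% attack.
compute_cost  = 100.0   # $ of hardware/energy to mine one block
useful_resale = 80.0    # $ recouped by selling the useful work's output

pow_attack_cost  = compute_cost                  # pure PoW: full price
pouw_attack_cost = compute_cost - useful_resale  # PoUW: offset by resale
print(pow_attack_cost, pouw_attack_cost)         # 100.0 vs 20.0
```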


Ethereum could theoretically implement a similar scheme by running contracts on the EVM (in some ways, it already does). But instead of hinging on Intel to act honestly, it could use fraud proofs instead. That would significantly increase the overhead, but less so than PoW.

Alternatively, PoS is probably just fine. I'm not sure this is really better in any way.


The EVM's efficiency is abysmal, since every operation is replicated on almost every node.

They are in the early stages of developing sharding/"sub-chains", but it's still unclear whether it's even viable. And it will never be a truly efficient computing platform: for them it must be completely secure and only cheap enough to enable the most interesting distributed apps, which is a different beast from the PoUW discussed here.


With fraud proofs you don't actually run the computations on all nodes. You submit a claimed solution and a security deposit, in a way that lets other people submit a proof that you did the calculation incorrectly. If you submitted an incorrect solution, your deposit is paid to the person who made the fraud proof. The only replication is by the people who check your work in hopes of claiming your deposit.
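
A minimal sketch of that flow, with hypothetical names and a trivially re-checkable task (no real protocol does it this naively):

```python
# Hypothetical fraud-proof flow: a solver posts a result plus a deposit;
# anyone who re-runs the task and finds a mismatch claims the deposit.
DEPOSIT = 10

def submit(task, claimed_result, solver):
    return {"task": task, "result": claimed_result,
            "solver": solver, "deposit": DEPOSIT, "slashed": False}

def challenge(claim, challenger, recompute):
    """The challenger's re-execution showing a mismatch is the fraud proof."""
    if recompute(claim["task"]) != claim["result"]:
        claim["slashed"] = True
        return {"winner": challenger, "payout": claim["deposit"]}
    return None  # the claim was honest; the challenge fails

square = lambda x: x * x
bad = submit(7, claimed_result=48, solver="mallory")  # 7*7 != 48
print(challenge(bad, "alice", square))  # alice collects the 10-unit deposit
```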


Interesting, can you give me a good link on this?

Overall I'm skeptical (even of PoUW). That's because verifying work is usually almost as difficult as computing it. I believe the only true solution to this is fully homomorphic encryption (FHE), but known constructions so far have absolutely massive constants.

Is there enough demand for highly asymmetric, verifiable, NP-hard-like problems?


A simple way would be to make the entire computation replayable on chain; a challenger has to pay for that full on-chain execution, but in theory that rarely happens if the deposit is large enough. Anyone thinking about submitting a wrong answer has to consider that it's easy for prospective challengers to check the work off chain before running the on-chain fraud proof.

More complex methods allow fraud to be proven with only a partial computation, by applying Merkle proofs. I haven't really dug into it, but the TrueBit paper is probably the best source: https://truebit.io/
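
The rough idea behind the partial-computation version, as I understand it (my simplification, not TrueBit's actual protocol): solver and challenger binary-search over the execution trace for the first step where they disagree, and only that one step gets re-executed on chain.

```python
def bisect_divergence(solver_state_at, challenger_state_at, n_steps):
    """Binary-search for the first step where the two traces differ.
    In a real protocol each state_at(i) would be a Merkle root of the
    VM state, so each round is a single on-chain hash comparison."""
    lo, hi = 0, n_steps  # invariant: traces agree at lo, differ at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if solver_state_at(mid) == challenger_state_at(mid):
            lo = mid
        else:
            hi = mid
    return hi  # only the lo -> hi step is re-executed on chain

# Toy traces: the honest trace doubles; the solver cheats from step 6 on.
honest = lambda i: 2 ** i
cheat  = lambda i: 2 ** i if i < 6 else 2 ** i + 1
print(bisect_divergence(cheat, honest, 100))  # -> 6, in ~log2(100) rounds
```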


> That's because verifying work is usually almost as difficult as computing it.

Where did you get that idea? What about zk-proofs, like SNARKs and STARKs? I'm working on a multiparty computation system right now that uses a proof system to offload costly computation.


Oh, interesting. As I wrote, I'm aware there are certain problems that can be verified efficiently, i.e. whose verification time is much smaller than their computation time (ver_t/comp_t << 1). But is it possible to take a generic program and offload it with (1) cryptographically secure verification afterwards and (2) ver_t/comp_t << 1?

As a challenge, take the problem of computing 1,000,000 iterations of a memory-hard hash function such as scrypt or argon2d. If someone presents you a hash (of length N), the best you can do to verify it is to actually compute the iterated hash function yourself (assuming it has no cryptographic weakness). That would prove, in particular, that the method I'm asking for is impossible in general (i.e. there are at least some problems that can't be efficiently offloaded).
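
Concretely (sha256 standing in for a memory-hard function, which doesn't change the sequentiality point):

```python
import hashlib

def iterated_hash(seed: bytes, n: int) -> bytes:
    h = seed
    for _ in range(n):                  # inherently sequential:
        h = hashlib.sha256(h).digest()  # step i+1 needs step i's output
    return h

# Absent a succinct proof system, the only way to check the claim is to
# redo all n steps yourself, i.e. ver_t / comp_t ~= 1 for this problem.
claim = iterated_hash(b"seed", 1_000_000)
assert iterated_hash(b"seed", 1_000_000) == claim
```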

What kind of problems can the technologies you cited offload?


All sorts, but the easiest to convince yourself about is anything expressible as an arithmetic or Boolean circuit.
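
As a toy illustration (mine, not any real proof system's encoding): a claim like "I know x such that x**3 + x + 5 == 35" flattens into single-operation gates, and a SNARK-style system then convinces a verifier those constraints hold without re-running the program or learning x.

```python
# Toy flattening of x**3 + x + 5 == 35 into single-op "gates".
# A real proof system would prove these constraints are satisfied
# without revealing x; here we just check them directly.
def circuit_satisfied(x: int) -> bool:
    w1 = x * x       # gate 1: mul
    w2 = w1 * x      # gate 2: mul
    w3 = w2 + x      # gate 3: add
    out = w3 + 5     # gate 4: add
    return out == 35

assert circuit_satisfied(3)  # witness x = 3
```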


>delid cpu

>skew the rng and/or manipulate the instruction counter

Congratulations, you "hacked" REMCoin.



