I think people get consensus finality mixed up with eventual consistency. (To be fair, they do sound like similar things.)
Nakamoto Consensus (used by Bitcoin and others) never reaches consensus finality, i.e. a point in time when you can be sure that consensus has been achieved. All you can do is estimate the probability that a block is in the final chain.
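For intuition, here's a rough Python sketch of that estimate, using the attacker catch-up formula from section 11 of the Bitcoin whitepaper (the function name and the 10%-attacker / 6-confirmation numbers are just illustrative):

    import math

    def attacker_success_probability(q, z):
        # q = attacker's share of total hash power, z = confirmations so far
        p = 1.0 - q
        lam = z * q / p
        prob = 1.0
        for k in range(z + 1):
            poisson = math.exp(-lam) * lam**k / math.factorial(k)
            prob -= poisson * (1.0 - (q / p)**(z - k))
        return prob

    # A 10% attacker against the usual 6 confirmations:
    print(attacker_success_probability(0.10, 6))   # ~0.0002, falling off
                                                    # exponentially as z grows

The point being: you never get certainty, only a probability you can push as low as you like by waiting for more blocks.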
That said, Nakamoto Consensus has some really nice properties compared to classical consensus algorithms. For example, most "classical" BFT consensus algorithms have a worst-case message complexity of O(N^2) (or in some new highly-academic algorithms O(N polylog(N))). Nakamoto Consensus, in contrast, has O(N) worst-case message complexity. That's what enables Bitcoin (and the like) to scale so nicely to thousands of nodes.
Of course, if you're building a permissioned ledger with 25 nodes, that's irrelevant; the classical algorithms will work fine.
There's more discussion of this in the SCP paper: Luu, Loi, Viswesh Narayanan, Kunal Baweja, Chaodong Zheng, Seth Gilbert, and Prateek Saxena. "SCP: A Computationally-Scalable Byzantine Consensus Protocol For Blockchains."
IMO, a more interesting property than the message complexity is that bitcoin (in theory) tolerates just under 50% malicious actors in the consensus group, whereas Lamport BFT only tolerates fewer than 33%.
Of course, this comparison is not entirely fair, because Lamport BFT achieves deterministic consensus, whereas bitcoin only gives probabilistic consensus.
I hope there is room for deterministic BFT with <50% malicious actors.
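To put rough numbers on the two thresholds (just a back-of-the-envelope sketch; the node counts are arbitrary):

    # Classical BFT needs N >= 3f + 1 replicas to tolerate f Byzantine ones,
    # i.e. f_max = (N - 1) // 3, whereas Nakamoto consensus is usually stated
    # as tolerating just under 50% of total hash power.
    for N in (4, 25, 100, 1000):
        print(N, "nodes -> classical BFT tolerates", (N - 1) // 3, "Byzantine nodes")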
> or in some new highly-academic algorithms O(N polylog(N))
Could you point me to such an algorithm? I am not aware of any BFT solution other than Lamport BFT.
There are many deterministic BFT protocols. PBFT is probably the most well-known. [1]
The SCP paper points to several BFT protocols with message complexity of O(N polylog(N)), e.g. [2]
[1] Miguel Castro and Barbara Liskov. Practical byzantine fault tolerance. In Proceedings of the Third Symposium on Operating Systems Design and Implementation, pages 173–186. USENIX Association, 1999.
[2] Valerie King, Steven Lonargan, Jared Saia, and Amitabh Trehan. Load balanced scalable byzantine agreement through quorum building, with full information. In Marcos K. Aguilera, Haifeng Yu, Nitin H. Vaidya, Vikram Srinivasan, and Romit Roy Choudhury, editors, Distributed Computing and Networking, volume 6522 of Lecture Notes in Computer Science, pages 203–214. Springer Berlin Heidelberg, 2011.
This really highlights the differences between trying to understand and formalise the theory behind distributed systems and trying to practically build and use distributed systems.
In a practical sense, bitcoin pretty much has the property of strong consistency. That is a useful thing for people to know and understand. However, none of the theory that applies to strongly consistent systems necessarily applies to bitcoin, because bitcoin is not perfectly strongly consistent in the theoretical sense. This is also important to understand, because otherwise people may incorrectly assume that certain results apply to bitcoin (e.g. the CAP theorem), leading to confusion.
So I really do believe it is important to differentiate between talking about the properties of distributed systems in a theoretical sense and talking about them in a practical sense. The current formalisations of distributed systems were mainly built to understand theoretical properties. That can be a useful starting point for understanding practical properties, but it's always important to note that a distributed system may merely have a good-enough approximation of a useful theoretical property for a particular use case.
From the standpoint of a single transaction, Bitcoin is consistent eventually. It's certainly not consistent immediately. Only when the transaction has been accepted into the blockchain and many blocks built upon its block is it effectively irrevocable. Early in the transaction's life, it's still revocable.
(The people behind Satoshi Dice found this out the hard way. They accepted bets with zero confirmations, and someone realized they could cancel a losing bet after the fact by double-spending it. Oops.)
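For anyone accepting payments, the practical fix is simply to wait. Here's a minimal sketch, assuming a local Bitcoin Core node and the python-bitcoinrpc package; the RPC URL, credentials, and helper name are placeholders:

    import time
    from bitcoinrpc.authproxy import AuthServiceProxy

    REQUIRED_CONFIRMATIONS = 6   # Satoshi's rule of thumb

    def wait_until_confirmed(txid, rpc_url="http://user:pass@127.0.0.1:8332"):
        rpc = AuthServiceProxy(rpc_url)
        while True:
            tx = rpc.gettransaction(txid)   # wallet RPC; txid must be known to the wallet
            if tx.get("confirmations", 0) >= REQUIRED_CONFIRMATIONS:
                return tx                   # effectively irrevocable for practical purposes
            time.sleep(60)                  # poll once a minute

Waiting for 6 confirmations here is exactly the Ω = 6 rule discussed elsewhere in the thread.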
I think the greatest engineering feats, computational or otherwise, tend to be created by insight into what the actual requirements of a system are and whether they enable the use of previously-ruled-out classes of solutions.
As several others have pointed out (and were downvoted for it, I don't know why), this article provides no citations and its reasoning is confusing. The definition of "eventual consistency" is that all replicas converge to the same state after enough time. Yet the author seems to be saying Bitcoin is strongly consistent because (if we ignore things in their variable state) it will converge to an agreed-upon state.
Apparently it is up to you how you want to interpret the use of "strong" or "eventual" here.
Given that the blockchain can split at any time, with an incentive-driven probability that approaches infinitesimal values as the chain length goes to infinity, the blockchain is not even eventually consistent. It is never consistent, only probabilistically consistent.
Numbers that fool the Fermat test are called Carmichael numbers, and little is known about them other than that they are extremely rare. There are 255 Carmichael numbers below 100,000,000. The smallest few are 561, 1105, 1729, 2465, 2821, and 6601. In testing primality of very large numbers chosen at random, the chance of stumbling upon a value that fools the Fermat test is less than the chance that cosmic radiation will cause the computer to make an error in carrying out a "correct" algorithm. Considering an algorithm to be inadequate for the first reason but not for the second illustrates the difference between mathematics and engineering.
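For concreteness, a minimal Python sketch of the Fermat test being described (the trial count is arbitrary):

    import random

    def fermat_test(n, trials=20):
        # a^(n-1) = 1 (mod n) holds for a prime n and any base a not divisible by n
        for _ in range(trials):
            a = random.randrange(2, n - 1)
            if pow(a, n - 1, n) != 1:
                return False    # definitely composite
        return True             # "probably prime"

    # Carmichael numbers fool the test for every base coprime to them:
    print(pow(2, 560, 561))     # 1, so base 2 calls 561 prime, yet 561 = 3 * 11 * 17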
What the probabilistic approach misses is that the blockchain can split for other reasons, so the exponentially small chance of inconsistency is irrelevant. For instance, in 2013 the blockchain was split for 6 hours (24 blocks) due to a block with more than 10,000 transactions that different software versions handled incompatibly.
The other thing the probabilistic discussion skirts is that even with an exponentially small probability, you need to wait inconveniently long. An Ω value of 6 corresponds to 1 hour. To get down to "alpha particle" levels of probability you've got a much longer wait.
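To make "inconveniently long" concrete, here's a rough sketch using the same whitepaper catch-up formula as upthread, with a 10% attacker assumed for illustration; it finds the smallest Ω whose reorganization probability drops below a target and converts it to wall-clock time at roughly 10 minutes per block:

    import math

    def catch_up_probability(q, z):
        p = 1.0 - q
        lam = z * q / p
        return 1.0 - sum(math.exp(-lam) * lam**k / math.factorial(k)
                         * (1.0 - (q / p)**(z - k)) for k in range(z + 1))

    q = 0.10                                  # attacker's hash-power share (illustrative)
    for target in (1e-3, 1e-6, 1e-9, 1e-12):
        z = 0
        while catch_up_probability(q, z) > target:
            z += 1
        print(f"P < {target:g}: wait {z} blocks, i.e. about {z * 10} minutes")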
How is the nonzero probabilistic chance of a split not a problem considering that vendors are relying on the durability of transactions that may appear in one chain but not the other?
tldr: author claims if you only look at transactions with 6 or more confirmations, it's strong consistency (or rather, "your chances of observing an anomaly are exponentially small")
Anyone not using Satoshi's 6-confirmation rule gets dismissed as "no true Scotsman."
No statistics on actual, real-life blockchain orphans are offered to back up his claims.
It makes a difference to mining centralization. If you have a short confirmation time, it decreases the variance in revenue for individual miners and thus the incentive to join a mining pool.
Similar for the mining algorithm itself; memory-bound algorithms decrease mining centralization because ASICs become less useful.
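A toy model of the variance point (the 0.1% hash share and 30-day window are made-up numbers): if a miner finds blocks as a Poisson process, its block count over a period has mean mu = share x blocks_in_period and relative spread 1/sqrt(mu), so a shorter block interval shrinks the spread:

    import math

    def relative_std(share, block_time_minutes, days=30):
        blocks_per_period = days * 24 * 60 / block_time_minutes
        mu = share * blocks_per_period        # expected blocks found by this miner
        return 1.0 / math.sqrt(mu)            # std / mean of a Poisson count

    for block_time in (10, 1):                # Bitcoin-like vs. a 10x faster chain
        print(block_time, "min blocks:", round(relative_std(0.001, block_time), 2))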
How is Ω not equivalent to the time interval in common EC strategies? Also, saying that "EC" doesn't guarantee serializability sounds strange. It does, over t. And yes, CAP is widely misunderstood, but I'm not sure this is the fault of the academics.
Sounds kind of like this was written to a certain audience that is in a bubble wrt the wider field.
> Because the probability drops exponentially with Ω, it's easy to pick an Ω such that the chance of observing a blockchain reorganization is less likely than having the processor miscompute values due to strikes by alpha particles. If that's not sufficient for pedants, one can pick a slightly higher Ω such that the chances of observing a reorganization are lower than the likelihood that all the oxygen molecules in the room, via random Brownian motion, will pile up into one corner of the room and leave one to suffocate in the other corner.
This guy obviously doesn't understand what he is talking about. I'd be more likely to re-mine the whole bitcoin blockchain on my CPU in a second than to ever see that happen.