
The basic question is how to scale: off-chain or on-chain. The rest is just theatrics and typical nerdy hyperbole.

One side of the fight (Core / Blockstream) wants to scale off-chain, pushing transactions to side-chains and/or lightning networks, and wants to profit from off-chain solutions.

The other side of the fight (segwit2x / miners) wants to scale on-chain, making the blocks bigger, and wants to profit from block fees.

Both sides have pros and cons.

Pros of off-chain solutions: more scalable, no need for expensive confirmations for each transaction, more long-term. Cons: the solutions don't exist yet and might be vaporware; segwit etc. are just stepping stones.

Pros of on-chain solutions: making the blocks larger can be done now, with no need to wait for new software and new networks. Cons: larger blocks make running Bitcoin nodes harder. Also cannot scale this way infinitely (you need to keep all the transactions on a disk forever).

The discussion about segwit is in reality just discussion about how to scale, and who profits.

As for me, I don't really care; Bitcoin is inefficient either way.



> Also cannot scale this way infinitely (you need to keep all the transactions on a disk forever).

The whitepaper itself mentions that the Merkle tree can be pruned, so that doesn't need to be the case. Do you know why there hasn't been more effort towards implementing that yet?


AFAIK the paper is referring to thin clients, right? You can already run those with the stock Bitcoin Core client. There's a command line argument that tells it to prune blocks older than a threshold. The disk space required by such clients is X times the maximum block size plus the size of the UTXO set.
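
As a rough back-of-the-envelope sketch (the UTXO set size and block count below are my own illustrative assumptions, not measured values):

    # Disk usage of a pruned node: retained blocks plus the UTXO set (chainstate).
    MAX_BLOCK_BYTES = 1_000_000        # 1 MB pre-SegWit block size limit
    UTXO_SET_BYTES = 3 * 1024**3       # assume roughly 3 GB of chainstate

    def pruned_disk_bytes(blocks_kept: int) -> int:
        return blocks_kept * MAX_BLOCK_BYTES + UTXO_SET_BYTES

    print(pruned_disk_bytes(550) / 1024**3)  # keep ~550 blocks -> roughly 3.5 GB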

But you still need full nodes that maintain all the historical data, because AFAIK there's no other way to trustlessly establish the UTXO set without rescanning the whole blockchain. Maybe there are ways; I'm not that deep into Bitcoin's internals at the moment.


I think perhaps a soft fork is possible where you store a hash of the UTXO pool in, e.g., the coinbase transaction. This way you can verify a given UTXO pool by just checking the hash.
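
A minimal sketch of that idea (the serialization and commitment scheme below are placeholders I made up to illustrate the concept, not an actual proposal):

    import hashlib

    def sha256d(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def utxo_commitment(utxos: dict) -> bytes:
        # utxos: {(txid_hex, vout): (amount_sats, script_hex)}
        # Deterministically serialize the whole set and hash it; a real scheme
        # would likely use a Merkle tree or accumulator so updates stay cheap.
        blob = b"".join(
            bytes.fromhex(txid) + vout.to_bytes(4, "little")
            + amount.to_bytes(8, "little") + bytes.fromhex(script)
            for (txid, vout), (amount, script) in sorted(utxos.items())
        )
        return sha256d(blob)

    # A new node could fetch the UTXO set from anywhere and check that the
    # commitment matches the hash stored in a sufficiently buried coinbase.
    demo = {("ab" * 32, 0): (5_000_000_000, "76a914" + "00" * 20 + "88ac")}
    print(utxo_commitment(demo).hex())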


The leaves still grow with the square root of the tree size, which is certainly better than linear, but doesn't really solve the problem. It buys you one less order of magnitude you have to scale to, but only one.

If you're growing by an order of magnitude every few years, it's just buying you an extra few years lead time.


If we only wanted to track the current unspent outputs, we're sitting at 54M, which interestingly has been holding relatively constant since June. Whereas the total number of transactions (and the size of the blockchain in general) is at 240M and is strictly monotonically increasing.

See UTXO here: https://blockchain.info/charts/utxo-count

See total TX here: https://blockchain.info/charts/n-transactions-total

Every block for the near future will include at least one new UTXO (the coinbase output), but a block as a whole can increase, decrease, or keep the total number of UTXOs the same.
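
A tiny sketch of that bookkeeping (the transaction representation is simplified; names are placeholders):

    def utxo_delta(block_txs) -> int:
        # block_txs: list of (inputs_spent, outputs_created) per transaction.
        # The coinbase spends no existing UTXO, so model it as (0, 1).
        spent = sum(ins for ins, _ in block_txs)
        created = sum(outs for _, outs in block_txs)
        return created - spent

    print(utxo_delta([(0, 1), (2, 2), (3, 1)]))  # -1: this block shrinks the UTXO set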


It is implemented; the command line option is -prune=<n>, where n is the target size in MiB of block data to keep.


It would appear the Bitcoin network has a hard time agreeing on breaking changes :)


I think you understate the cons of the on-chain solutions. I say this as a big-blocker myself.

I think the size of the node is a consideration, but not a huge one. Before we started butting up against the limit, blocks were naturally much smaller than the 1MB cap, and the feared spam and dust transactions never materialized, so there's no reason to expect that the economics are fundamentally different now.

The biggest con by far is the hard-forking nature of the change. Changing the limit means that clients will have to accept blocks that they currently don't accept. This applies to Core, of course, but also to the innumerable other implementations of the bitcoin protocol. Don't make the mistake of thinking that Core is the only player here: a hard fork is trouble for everyone, miners included, and carries a substantial risk of two viable chains appearing, which is much more of a disaster for Bitcoin (because of how long it takes to adjust difficulty) than it was for Ethereum. With sufficient consensus this becomes less of a problem, for the same reason: the defunct 5% chain (with 95% consensus) will take 280 days to converge on a new difficulty, during which time that network will be extremely congested. But if miners are misconfigured, etc., there may be an unintentional hard fork even when there is consensus.
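
That 280-day figure falls straight out of the retarget rule; a quick check, assuming the minority chain holds exactly 5% of the hashrate:

    # Difficulty retargets every 2016 blocks, calibrated for 10-minute blocks.
    # A chain with only 5% of the hashrate finds blocks 20x slower on average.
    RETARGET_INTERVAL = 2016      # blocks
    TARGET_SPACING_MIN = 10       # minutes per block at full hashrate
    hashrate_share = 0.05

    minutes_to_retarget = RETARGET_INTERVAL * TARGET_SPACING_MIN / hashrate_share
    print(minutes_to_retarget / (60 * 24))  # -> 280.0 days until difficulty adjusts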

The second biggest problem is the increased risk of orphaned blocks. This, I think, is probably what holds most miners back: the longer it takes to transmit and verify a block, the greater the chance that it will be orphaned. Orphans hurt the security of the network by making it so that a smaller fraction of the net hashing power is actually applied to securing the network. As much as people complain that the proof-of-work is "wasteful" in the ecological sense, wasting even that wasted work would make nobody happy.

SegWit, on the other side of the debate, comes attached to off-chain scaling, but it also offers a solution to transaction malleability, which is a huge problem for bitcoin businesses: it can be difficult to tell whether a transaction ever made it to the network, much less rely on features like child-pays-for-parent. Since the current version of SegWit is a soft fork, it should, in theory, work with existing non-Core clients, and the Litecoin adoption should give us some information on how that works in practice, but Litecoin doesn't have nearly the adoption that Bitcoin does, so it's hard to generalize.
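
To illustrate the malleability point with a toy model (heavily simplified; the field layout below is made up, not Bitcoin's real serialization):

    import hashlib

    def sha256d(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    # Legacy: the txid commits to the scriptSigs, so a third party who
    # re-encodes a signature changes the txid without invalidating the tx.
    def legacy_txid(body: bytes, scriptsigs: bytes) -> bytes:
        return sha256d(body + scriptsigs)

    # SegWit: the txid commits only to non-witness data, so tweaking the
    # witness (where signatures live) can no longer change the txid.
    def segwit_txid(body: bytes, witness: bytes) -> bytes:
        return sha256d(body)

    body = b"version|inputs|outputs|locktime"
    print(legacy_txid(body, b"sig") == legacy_txid(body, b"sig-reencoded"))  # False
    print(segwit_txid(body, b"sig") == segwit_txid(body, b"sig-reencoded"))  # True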


> One side of the fight (Core / Blockstream) wants to scale off-chain, pushing transactions to side-chains and/or lightning networks, and wants to profit from off-chain solutions.

I think that gives the wrong impression. First off, Core != Blockstream.

Secondly, the consensus amongst Core developers, as I read it, is they want as much on-chain as possible. Their definition of possible is what can a Bitcoin client handle on the average user's PC. The problem is that, today, the answer is not much. Increasing the blocksize to allow more transactions increases the computational requirements of the Bitcoin client superlinearly, because bigger blocks permit bigger transactions and legacy signature hashing scales quadratically with transaction size. So it's not really feasible to just "scale on-chain", as of today.
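
A rough model of why that validation cost blows up (the byte counts below are illustrative assumptions; the point is the shape of the curve):

    # With legacy (pre-SegWit) signature hashing, verifying each input requires
    # hashing roughly the whole transaction, so total bytes hashed grow
    # quadratically with transaction size.
    def legacy_sighash_bytes(n_inputs: int, bytes_per_input: int = 148,
                             n_outputs: int = 2, bytes_per_output: int = 34) -> int:
        tx_size = 10 + n_inputs * bytes_per_input + n_outputs * bytes_per_output
        return n_inputs * tx_size  # one near-full-transaction hash per input

    for n in (100, 1_000, 10_000):
        print(n, legacy_sighash_bytes(n))  # 10x the inputs -> roughly 100x the hashing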

SegWit accomplishes two things. 1) It's a stopgap: it's an effective blocksize increase to 2MB, and it enables the lightning network, which should hopefully reduce congestion on the Bitcoin network. 2) It's an optimization; SegWit transactions are cheaper than traditional transactions.

SegWit is just a stopgap while developers implement a series of optimizations to the Bitcoin network that allow it to handle more capacity, without increasing the computational resources to the point where average users can't run the software.

But those optimizations are going to take a long time.


> Core != Blockstream

The overlap is very large. In the same way, segwit2x != Chinese miners, but come on, there is a clear overlap.

> It's an effective blocksize increase to 2MB

Only if all users start using segwit addresses and segwit transactions, and if all wallet software knows how to send and receive them. And that will mean users need to send their money to their segwit-enabled addresses first, which might, you know, clog the network. (In reality, nobody knows what will happen. Almost nobody uses segwit on Litecoin right now, but Litecoin is a toy currency...)

A hardfork would solve that issue immediately: no need to upgrade wallets, no need to figure out how segwit-inside-P2SH addresses work (they are different from normal addresses!).

> enables lightning network which should hopefully reduce congestion on the Bitcoin network

Again nobody knows. Lightning network might be vaporware. Literally nobody is using it, since it does not exist.

> SegWit transactions are cheaper than traditional transactions

Well, that's a matter of "policy", and I am not sure whether segwit2x nodes will implement the segwit discount or not. There were some issues about that, but I think they haven't had time to remove the segwit discount yet. But they plan to.

(Again, I am not advocating big blocks; increasing blocks that way is not something that can be done forever. Segwit is fine, it solves some issues like malleability, but it's overhyped IMO. There is no scaling solution for bitcoin yet. And maybe there never will be?)


> In the same way, segwit2x != Chinese miners, but come on, there is a clear overlap

There's more overlap between Core developers and Chaincode Labs.

> Lightning network might be vaporware. Literally nobody is using it, since it does not exist.

It does exist: https://github.com/lightningnetwork/lnd and http://lightning.community/release/software/lnd/lightning/20... and other implementations of lightning also exist.


Interesting. I will try the docker example later.

BUT you have to admit it is still in the research stage and not widely used/accepted/tested by users.


Chaincode shares at least one affiliate with Blockstream.


> If all users start using segwit addresses and segwit transactions

SegWit blocks can vary from 1MB in non-ideal conditions to near 4MB in ideal conditions. 2MB is just the expected size for a typical mix of transactions.
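
The range comes from the weight rule; a sketch with illustrative byte counts:

    # SegWit counts non-witness ("base") bytes 4x and witness bytes 1x against
    # a 4,000,000 weight-unit limit.
    MAX_BLOCK_WEIGHT = 4_000_000

    def block_weight(base_bytes: int, witness_bytes: int) -> int:
        return 4 * base_bytes + witness_bytes

    # All-legacy block: no witness data, base capped at ~1 MB.
    print(block_weight(1_000_000, 0))        # 4,000,000 -> ~1.0 MB on the wire
    # Typical SegWit mix: roughly 2 MB total.
    print(block_weight(600_000, 1_600_000))  # 4,000,000 -> ~2.2 MB on the wire
    # Witness-heavy extreme: approaches 4 MB total.
    print(block_weight(100_000, 3_600_000))  # 4,000,000 -> ~3.7 MB on the wire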

You're right, yes, ideally we'd want everyone using SegWit transactions. But we don't need all users to start using them to start seeing congestion relief.

I would expect that 90% of transactions are generated by 10% of Bitcoin's users. Maybe even more skewed than that. (That 10% is likely businesses, power users, etc.) Getting that 10% to switch to SegWit transactions will be relatively easy; they'll feel the value of reduced fees more readily.

It's likely Bitcoin Core would start using SegWit addresses by default some time after SegWit is activated. Since the majority of users use Bitcoin Core, the shift wouldn't take too long.

> A hardfork would solve that issue immediately: no need to upgrade wallets, no need to figure out how segwit-inside-P2SH addresses work (they are different from normal addresses!).

Hardfork _does_ require upgrading wallets. In fact, it requires upgrading _all_ wallets. Anyone who hasn't upgraded cannot participate in Bitcoin any longer if a hardfork is used.

Contrast that with SegWit. SegWit doesn't require that everyone upgrade. And yet everyone will benefit from the increased block capacity, even those not using SegWit addresses, because the fees overall will go down.

> Again nobody knows. Lightning network might be vaporware. Literally nobody is using it, since it does not exist.

The plan was for SegWit to activate, providing a more immediate increase in effective block size to ~2MB, and then mid-term we'd see things like the lightning network come online and provide further relief. Long term, developers would continue improving network efficiency, which means the existing blocks can carry more transactions, and eventually there would be a hardfork to add some mechanism for increasing the blocksize (one that doesn't require further hardforks).

> Well, that's a matter of "policy"

I didn't mean that SegWit transactions were inherently cheaper in terms of fees. Sure, fees are always policy. But, AFAIK, SegWit transactions are actually cheaper in terms of network load. The discount on SegWit transactions is not just to expand the blocksize; they're also discounted because they put less strain on the network compared to old-style transactions.

EDIT: And to be clear, the SegWit discount we're talking about is a discount on how much a SegWit transaction's witness data counts toward the total blocksize. AFAIK, there is no fee discount on SegWit transactions; they pay the same fee per (virtual) byte. SegWit2X has no way to remove that discount without changing consensus rules. So that specifically is not a matter of policy; it's a matter of consensus. It would thus require another hardfork to change those rules.
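
A sketch of how that weight discount shows up in fee math (the fee rate and byte counts are made-up numbers):

    def vsize(base_bytes: int, witness_bytes: int) -> int:
        # Virtual size = weight / 4, rounded up; fees are quoted per vbyte.
        weight = 4 * base_bytes + witness_bytes
        return -(-weight // 4)  # ceiling division

    FEE_RATE = 50  # sat/vbyte, purely illustrative

    legacy_tx = vsize(250, 0)      # 250 vbytes for a ~250-byte legacy tx
    segwit_tx = vsize(150, 110)    # ~260 raw bytes, but only 178 vbytes
    print(legacy_tx * FEE_RATE, segwit_tx * FEE_RATE)  # 12500 vs 8900 sats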


> Secondly, the consensus amongst Core developers, as I read it, is they want as much on-chain as possible. Their definition of possible is what can a Bitcoin client handle on the average user's PC

I'm curious if you have some sources on this. Most of the anti-big-block claims that I have seen tend to settle on the hard fork being problematic, which is a very legitimate criticism (though I need to dig up citations to play by my own rules here). There's also a significant group (I think unaffiliated with Core) that worries about the increase in orphaned blocks that big blocks can lead to.


Sources would be nice, but I don't have the time to dig through the mailing lists for quotes from developers. To be fair, those claiming that Core is conspiring to move transactions off-chain would also need to cite sources.

But quotes from developers are not particularly interesting anyway. What's far more interesting is what the Core developers have actually accomplished. Actions speak louder than words.

They released SegWit, which _is_ an effective 2MB upgrade to the blocksize, as a softfork. It's exactly what the community wanted at the time (2MB), without any of the downsides (hardforking).

In addition to that, it increased transaction efficiency, fixed bugs, and enhanced the protocol.

SegWit is a thoughtfully designed protocol upgrade that accomplishes an order of magnitude more than any "upgrade" proposed by any of the alternative clients (Bitcoin-XT, Bitcoin Unlimited, and now SegWit2X).

Post-SegWit the developers are already looking at Schnorr signatures to further increase throughput. They've worked tirelessly to optimize signature verification times and improve the efficiency of the P2P protocol.

Why would they make all those efficiency improvements if their "evil plan" was to move everyone over to a lightning network?

EDIT: To be clear, yes I realize not providing sources is a total cop out :P But not everyone has limitless time to peruse the mailing lists for quotes. I just try to provide my best understanding of things, given my experience working with Bitcoin and keeping eyes on the community and ecosystem.


Do any of these optimizations even get close to a 10x improvement? The network needs a 10000x improvement.


> The network needs a 10000x improvement.

Says who?

Would I like Bitcoin to support 10,000x more transactions per second than it does now? Of course. But just because I _want_ it to, doesn't mean it's magically going to be able to. Bitcoin is what it is.

I'd like ships that can go faster than the speed of light, but physics isn't going to bend its rules just because I want it to.

We cannot, physically, increase Bitcoin's throughput 10,000x. We could change one constant, the max blocksize, make it 10GB and ... oops, now the network is broken. No one on Earth can actually validate 10GB blocks in the span of 10 minutes. And even if they could, who could handle 1.4 TB of additional disk space per day?
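
The arithmetic behind that, spelled out:

    # One block roughly every 10 minutes -> 144 blocks per day.
    blocks_per_day = 24 * 60 // 10
    block_size_gb = 10
    print(blocks_per_day)                          # 144
    print(blocks_per_day * block_size_gb / 1000)   # ~1.44 TB of new block data per day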

10,000x is an extreme example of principles that apply to any blocksize increase. If we increase the blocksize, it means fewer and fewer PCs are capable of actually running the Bitcoin software. And it won't take much before the superlinear cost of validating blocks eats everything.

Whether we like it or not, we are dependent on optimizing the Bitcoin network if we want any kind of scale. It will take time and it will take innovation. In the meantime, we have SegWit which enables lightning networks, which we can use for anything we don't need the full guarantees of a Bitcoin transaction for.


No.

Except for Lightning Network, which is this software that somehow routes transactions on a separate network, and it's not bitcoin; it doesn't have the same assurances, there are some third-party channels that route the transactions. (I admit, I haven't studied LN in detail.)

It uses bitcoin as a "backbone", sort of like how you have SWIFT, which is slow and expensive, and then credit card payments/PayPal/whatnot built on top of it.

LN specifically requires segwit (the main issue of this "civil war").


> and it's not bitcoin; it doesn't have the same assurances, there are some third-party channels that route the transactions. (I admit, I haven't studied LN in detail.)

Lightning clients pass around bitcoin transactions to each other; check the protobufs: https://github.com/ElementsProject/lightning/blob/2bf92c9063...

Admittedly the protobufs have changed since I last looked. Here's the canonical diagram describing the hashed timelock bitcoin transaction concept: https://github.com/ElementsProject/lightning/blob/2bf92c9063...

or read the paper https://lightning.network/lightning-network-paper.pdf
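
For a flavor of the building block those links describe, here's a toy hashed-timelock condition (purely conceptual; real HTLCs are expressed in Bitcoin Script, not Python):

    import hashlib
    import time

    def can_claim(preimage: bytes, payment_hash: bytes) -> bool:
        # The payee claims the funds by revealing the preimage of the hash.
        return hashlib.sha256(preimage).digest() == payment_hash

    def can_refund(now: int, timeout: int) -> bool:
        # If the preimage never shows up, the payer reclaims after the timeout.
        return now >= timeout

    secret = b"routing-secret"
    payment_hash = hashlib.sha256(secret).digest()
    print(can_claim(secret, payment_hash))               # True: claim with the preimage
    print(can_refund(int(time.time()), 2_000_000_000))   # False until the timeout passes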

> LN specifically requires segwit

false, see http://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/o...


Thanks for the links. I was under the impression that LN needs a malleability fix. (And I am not the only one who thinks that; see https://www.cryptocoinsnews.com/segwit-lightning/ )

I don't take FlexTrans as a serious attempt at fixing malleability.



