If you took all the Bitcoin mining machines and made them work on science and engineering tasks, they would produce hardly any meaningful output compared to the Top500 systems. The main reason is that Bitcoin mining is embarrassingly parallel, while most technical computing algorithms require large amounts of communication. The difference between a Top500 system and a standard cluster is the interconnect.
The largest share of Top500 machines use Gigabit Ethernet (http://www.top500.org/statistics/list/), which is a reasonable indicator of an embarrassingly parallel application. While that's not quite as slow as the public internet, some of those applications could plausibly make up for public-internet connectivity with increased parallelism.
Bitcoin's computational power numbers aren't relevant for another reason though: it's entirely confined to computing SHA-256, and only in a special way...
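For the curious, the "special way" is grinding a nonce through double SHA-256 over an 80-byte block header. A minimal sketch in Python, with a placeholder header and an artificially easy target so it terminates quickly:

    import hashlib

    def double_sha256(data: bytes) -> bytes:
        # Bitcoin's proof-of-work hash: SHA-256 applied twice.
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    # Stand-in for the first 76 bytes of a real block header; miners mostly
    # vary the final 4-byte nonce between attempts.
    fake_header_prefix = b"\x00" * 76
    target = 2 ** 240  # real targets are vastly smaller (harder) than this

    nonce = 0
    while int.from_bytes(double_sha256(fake_header_prefix +
                                       nonce.to_bytes(4, "little")),
                         "little") >= target:
        nonce += 1
    print("winning nonce:", nonce)

No floating point anywhere, which is part of why hashes-per-second and FLOPS are such awkward units to compare.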
What you want to look at is performance share, not system share. By performance share, Gigabit Ethernet is just 12.6%. There are a number of Gigabit Ethernet systems on the list, but they contribute relatively little of the total performance.
There has also been some discussion lately that Rmax, the Top500 performance indicator, has started to diverge from sustained performance. Sustained performance is a general term for the actual science produced with a machine, as opposed to performance on the LINPACK benchmark.
Edit: There is also a huge difference between running Gigabit Ethernet over short spans and running it over the Internet. Latency, rather than bandwidth, is the limiting factor for many algorithms. Light takes a lot less time to travel across the aisle than across the continent.
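A quick back-of-the-envelope (vacuum light speed; real fibre is roughly 1.5x slower, and switches add more on top):

    # One-way light-travel time at various distances; illustrative numbers.
    C = 299_792_458  # speed of light in vacuum, m/s

    for label, metres in [("across the aisle", 10),
                          ("across a campus", 1_000),
                          ("across a continent", 4_000_000)]:
        print(f"{label:20s} {metres / C * 1e6:12.2f} microseconds")

That's about 0.03 microseconds across the aisle versus about 13,000 microseconds across a continent: a factor of 400,000 before a single switch or router has touched the packet.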
What about stuff like decoding the human genome? A master just has to give out specific sections to each slave, and the slaves don't need to communicate with each other. Any number of those "scientific games" types of problems could be solved with the bitcoin mining network's machines.
Most institutions that have a real supercomputer also have clusters that are used for tasks that do not need a fast interconnect. The reason is that quite a few tasks don't need fast interconnection, and for those a cluster is fine, and a cluster costs a fraction of what a supercomputer does.
Bioinformatics tasks like "decoding the human genome" are typically of this type and are generally performed on clusters, even if a supercomputer is available. Where I used to study, applications for CPU time on the supercomputer for workloads that could run on a cluster were generally rejected and directed to the clusters instead.
Because being good at embarrassingly parallel problems is not something to be particularly proud of - the real, hard problems (for which supercomputers are needed) can't be parallelized that way, and the performance and quality of such systems depend on how well they handle problems that need far more interdependencies between computing nodes than bitcoin does.
"Embarrassingly parallel" is a term used to describe problems that can be easily divided amongst processes and require little to no communication between processes.
Because the payloads are tiny and require almost no coordination compared to many algorithms.
Machines take small work packets, crunch numbers, then pass back results minutes later. On many clusters, by contrast, various steps of the algorithms depend on fast message passing with tools like MPI.
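For contrast, here is the shape of a tightly coupled computation, sketched with mpi4py (assumed installed; run under an MPI launcher such as mpiexec). Every rank stalls on the network once per step, so interconnect latency directly bounds the step rate:

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    left, right = (rank - 1) % size, (rank + 1) % size

    local = np.full(1024, float(rank))     # this rank's slice of the domain
    halo = np.empty(1, dtype=local.dtype)  # ghost cell from the left neighbour

    for step in range(100):
        # Each step blocks on the network: send our right edge, get a left halo.
        comm.Sendrecv(sendbuf=local[-1:], dest=right,
                      recvbuf=halo, source=left)
        local[0] = 0.5 * (local[0] + halo[0])  # toy update using neighbour data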
That shows that the computing capacity required to subvert the bitcoin network is significant - even if the bitcoin-integer-calculation to FLOP conversion is wobbly, if all of the top 500 (known) supercomputers together can't even get close to the 50% of mining capacity required to manipulate the blockchain - that's a good sign, right?
But... I wonder just what sort of non-public computing power is hidden inside .gov and perhaps .mil domains… I'd be surprised if "they" didn't have machines/clusters that'd blow the "Top (publicly known) supercomputer" out of the water. Whether "they'd" have the combined capacity of the top 500 or not I'm less sure about.
What you say about secret sites may once have been true.
It is almost certainly no longer.
It is unlikely that at this point, the NSA or others have as much general purpose computing power as, for example, Google.
Why would they?
They don't need that much general purpose or floating point performance anymore, because it's reasonable for them to just fab their own special-purpose stuff.
I'm sure they have some semi-specialized architectures made for doing their most important analysis tasks very quickly (which may or may not be fast in bitcoin-relevant ways), and for everything else, they have a smaller number of general purpose datacenters.
I can't think of any good reason the NSA would have 1-5 million x86-64 computers lying around, instead of 1-4 million more specialized processors, plus a million x86-64.
The article's comparison is invalid - in HPC computing speed is as much about the network infrastructure as it is the raw number of cores; simulations and graph-processing require interaction between the processing elements. The scores on the Top500 reflect this. In an embarrassingly-parallel application such as bitcoin mining, their peak performance is far higher.
In fact, the theoretical peak of Titan's 19k GPUs would be around 90 single-precision petaflops, comfortably higher than the estimated peak of 2 million reasonably recent x86 processors in Google's data centers (unlikely to be top-of-the-range number crunchers).
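Back-of-the-envelope, with the per-device peaks as explicit (and debatable) assumptions:

    # Rough peaks only; the K20X is usually quoted around 3.9 SP teraflops,
    # and 30 gigaflops is a guess at an average commodity x86 server CPU.
    titan_pflops = 18_688 * 3.9 / 1000    # ~73 SP petaflops from the GPUs
    google_pflops = 2_000_000 * 30 / 1e6  # ~60 petaflops from 2M CPUs
    print(f"Titan GPUs: ~{titan_pflops:.0f} PF, 2M x86 CPUs: ~{google_pflops:.0f} PF")

The exact figures move with whichever per-device peak you assume, but the ranking is hard to flip.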
I've worked in HPC with a variety of actors for over a decade; I am in no doubt that classified machines exist with more power than Titan and Sequoia (#1, #2), which together make up most of the computing power in the top500 (it follows a power distribution, appropriately).
An exaflop is still a really, really big number though. I can hazard a few guesses at non-public machines in the US that would reach or beat Titan. Perhaps the US Govt commands an exaflop of power spread amongst several agencies, but I wouldn't place any bets on it.
That the bitcoin mining network has reached this scale is both astounding and depressing. That's an awful lot of computing power going to waste.
It's in the design of bitcoin -- its network power must grow enough to defend against attacks on the increasingly lucrative bitcoin market. The bigger the market, the more expensive the attacks that become worthwhile. Conversely, the hardware driving bitcoin could be part of its valuation.
The hardware working in the bitcoin network is its capital backing. The more hardware backs it, the more resilient bitcoin is to attacks.
If we valued one bitcoin at $1 million today, we would have a problem, because the hardware required to attack the network would be much cheaper than the potential gains of an attack.
It is not capital backing in the sense that I cannot exchange a bitcoin for a computer. Agreed that it may provide an upper bound on the value of bitcoin, but not a lower bound.
> Why would they? They don't need that much general purpose or floating point performance anymore, because it's reasonable for them to just fab their own special-purpose stuff.
Yes, they have their own fab, but I looked up what I could a while back on that fab ( http://www.gwern.net/Slowing%20Moore%27s%20Law#fn23 ), and everything indicates the NSA chipfab is outdated, using very large-size features, and essentially meant for legacy system support - making old hard to replace chips, perhaps. If they're doing anything fancy, it'll be at partner chip fabs... but so many chip fabs have already left the USA that one wonders.
I'd imagine the economics of variable / nonrecurring engineering costs are different at that scale. Their tech might only let them make 200MHz chips instead of 2GHz, but couldn't they just make ten times as many of them?
It's been a long time since CPUs have seen much of a bump in clock rates, after all - nowadays it's all about smaller features allowing more cores/cache in the same space. If you don't care much about space because your process costs are dominated by setup rather than wafer costs, wouldn't you just use more space?
Let's say the NSA's chip fab is 10 years out of date - the same technologies as an Athlon XP "3000+" 2.1 GHz CPU with a 70 watt TDP. Worked just fine in a lot of consumer desktop PCs.
If they needed a lower TDP, couldn't they just drop the clock rate and use wider features for lower gate leakage?
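A toy model of that trade-off; all numbers are illustrative. Dynamic power scales roughly as C*V^2*f, and lower clocks tolerate lower supply voltage:

    def dynamic_power(cap, volts, freq_ghz):
        # Classic switching-power approximation: P ~ C * V^2 * f.
        return cap * volts ** 2 * freq_ghz

    fast = dynamic_power(cap=1.0, volts=1.4, freq_ghz=2.0)  # one 2 GHz chip
    slow = dynamic_power(cap=1.0, volts=1.0, freq_ghz=0.2)  # one 200 MHz chip

    # Ten slow chips match the aggregate clock of one fast chip:
    print(f"10 slow chips vs 1 fast chip: {10 * slow / fast:.2f}x the power")

With these made-up constants, the ten slow chips deliver the same nominal throughput at roughly half the dynamic power, which is the intuition behind going wide instead of fast.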
Without commenting directly on the fab, any codebreaking hardware could be expected to excel at computing bitcoin hashes.
Perhaps more interesting is the codebreaking ability latent in the bitcoin network. That must be a tempting target for a variety of agencies around the world.
The way I understand it is that the money is in specialized hardware. Using GPUs for mining is already counterproductive today: the money made doesn't pay off the electricity used. So the era of specialized hardware has come. The .gov or some such entity with large black budgets could be in a better position to spin their own specialized hardware that does nothing but try to manipulate the block-chain.
"They" don't need to outpace the bitcoin network at its current capacity. All "they" need to do is add enough network capacity so that bitcoin mining with current technology becomes unprofitable. Then many other players who depend on the profitibility of mining will stop.
For how long do you need to hold 50+% to manipulate the blockchain?
For example, Amazon or Google also hold serious computing power - if they threw all of it at the network at once for an hour, it wouldn't be THAT expensive compared to the effect.
It would be interesting if the Bitcoin network could eventually move to doing useful work like folding@home does. That way mining would not just be a waste of resources. Obviously, the current ASICs wouldn't be able to switch.
Bitcoin's calculations right now are not really a waste of resources. They're used to "audit" the transactions to make sure no one can double-spend any money.
The majority of work done on btc is useless though, because very few hashes end up validating into blocks. It would be nice if all that compute power was being used for some productive use, and you just did a consensus raffle rather than arbitrary hashing to pass out new coins.
Proof-of-work is not useless if it prevents other, undesirable outcomes that would otherwise occur without it.
Make no mistake, though - the large mining pools are the spacing guild of bitcoin; without them, there is no network.
Unfortunately they (via their users) would not welcome such a switch after so much time, effort, and money has already been expended to increase SHA-256 speed to the point it is at now.
Perhaps this is why Litecoin chose scrypt instead?
Regardless, the proof-of-work we have in Bitcoin today is likely what we'll have in Bitcoin forever, like it or not.
When a lock only stays shut as long as it's the biggest, and everyone is pouring thousands of tons of molten steel in just to keep the lock big? Hell yes it's a waste of resources.
A single lock is much closer to encryption than it is to 'race the world' levels of proof of work.
When compared to digging huge quarries into the earth, physically extracting gold ore, processing it and shaping into blocks then burying it back underground again (in vaults), it's not actually that wasteful.
If there was no other way to secure the advantages of this hypothetical big lock, and those advantages were as significant as Bitcoin's, then it wouldn't be a waste of resources.
Keep in mind that the entire Bitcoin network could be replaced with one trusted party with the computing power of an average smart phone. The network is currently paid 150 Bitcoins every hour to be that party, and in an efficient market, almost all of that would be spent on mining, i.e. wasted.
The security of Bitcoin against double-spending is literally based on wasting so much money that it is unattractive for an attacker to spend a matching amount of money on a double-spend attack. The network must spend this money all the time, though - it can't know in advance when it is being attacked.
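Rough numbers, taking the 150 BTC/hour figure above and assuming a $100 price per bitcoin purely for illustration:

    blocks_per_hour = 6
    reward_per_block = 25  # BTC, post-2012 halving
    btc_price = 100        # USD, an assumption for the sake of arithmetic

    spend_per_hour = blocks_per_hour * reward_per_block * btc_price
    print(f"~${spend_per_hour:,}/hour")             # ~$15,000/hour
    print(f"~${spend_per_hour * 24 * 365:,}/year")  # ~$131,400,000/year

In an efficient market, miners' aggregate costs rise toward that figure; it is the standing price of keeping a matching attack uneconomic.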
An attacker with enough resources can also force the network to either match their spending, or be rendered useless.
Bitcoin is an inherently wasteful system, and it actively resists scaling. There are alternatives, the most proven of which is a centralized ledger run by a trusted third party.
Even if there were no viable alternatives at all, I would still have doubts about the sustainability of the current system. The cost of running the network is just too large compared to the amount of real economic activity.
I think you underestimate and/or understate how big of a deal Bitcoin's lack of reliance on a trusted third party is. That's essentially the entire point of Bitcoin, so it seems a bit disingenuous to call it "wasteful." Perhaps if you have no desire for a decentralized transaction log with no trusted third parties, then it would be wasteful for you to throw computing resources at Bitcoin, but it's ridiculous to apply that generally.
Still, think about how the proof of work operates. There is no connection between the amount of computation needed to prevent attacks and the current block reward. Therefore, logically, the amount being spent on mining is very probably either far too high or far too low. It's possible that it's too low, and bitcoin could be taken out by a government body. I personally think it's more likely to be too high. As in, an attack only needs to cost $1M to keep the network safe, but the current mass of miners makes it cost $10M. The other $9M is truly wasted on the tragedy of the commons.
To go back to the silly analogy, you need a 20 ton lock but you can only use 'cost plus' bidding and all the contractors keep making the lock bigger until they get every possible cent out of the process.
Hardly. Bitcoin ran just fine on CPUs. The only reason nobody uses CPUs to mine any more is because everybody else switched to GPUs, which resulted in a difficulty adjustment. In other words, competition for bitcoins upped the required compute power, not anything inherent to producing bitcoins themselves.
Do 500W GPUs play any part in this? No. The bitcoin client is a standalone app that can run on any machine. Mining (generating hashes) does not, to my knowledge, actually operate the network.
Hell, if bitcoin needs 1000 petaflops just to operate the network when it is still a fringe currency, how exactly is it supposed to scale to mainstream use?
The computing effort required by mining is almost completely decoupled from the actual number of transactions. It's designed to scale up with the available computing power, that's why it has grown.
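A sketch of the retargeting rule (simplified; the real implementation has a few edge cases): every 2016 blocks the target is rescaled by how quickly the window actually arrived, clamped to a factor of four either way:

    EXPECTED = 2016 * 600  # seconds: 2016 blocks at one per ten minutes

    def retarget(old_target: int, actual_seconds: int) -> int:
        # More hashpower -> blocks arrive faster -> smaller (harder) target.
        actual = max(EXPECTED // 4, min(actual_seconds, EXPECTED * 4))
        return old_target * actual // EXPECTED

    t = 2 ** 224
    t = retarget(t, actual_seconds=EXPECTED // 2)  # hashpower doubled...
    print(t / 2 ** 224)  # 0.5 -- the target halves, so difficulty doubles

Note that nothing in the rule mentions transaction volume at all.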
> Mining (generating hashes) does not, to my knowledge, actually operate the network.
You're incorrect. The proof-of-work requirement is integral to the Bitcoin network, because it makes fraud unprofitable. The amount of computation required to create a block chain longer than the honest one should cost more than the potential benefits of doing so. That said, as far as I know, any proof-of-work algorithm could be used as long as a large portion of clients adopted it, so it should be possible to use work that is useful in itself.
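The whitepaper quantifies the cost with a gambler's-ruin argument: an attacker controlling a fraction q < 0.5 of the hashpower catches up from z blocks behind with probability (q/p)^z, where p = 1 - q. A quick sketch of just that term:

    def catch_up_probability(q: float, z: int) -> float:
        # Probability an attacker with hashpower share q ever erases a
        # z-block deficit; certainty once q reaches half the network.
        p = 1 - q
        return 1.0 if q >= p else (q / p) ** z

    for q in (0.1, 0.3, 0.45):
        print(q, [round(catch_up_probability(q, z), 6) for z in (1, 6, 12)])

With 30% of the network, overtaking 6 confirmations happens with probability around 0.006; at 50% it becomes certain, which is exactly where the famous 51% threshold comes from.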
So what you are saying is that Bitcoin will always require a horrific amount of computational power, just to prevent fraudulent generation of blocks? I'm starting to like the idea of mainstream Bitcoin less and less... 1000 petaflops just to maintain the network? Does that not raise the eyebrow?
I don't think it's "horrific," and I think the phrase "just to prevent fraudulent generation of blocks" vastly understates the awesomeness of having a virtually fraud-proof transaction log without relying on a centralized party.
Well, if Bitcoin goes mainstream it seems like we could expect it would require 51% of all computational power on the planet at all times, which would indeed be horrific as well as tragic, IMO.
Sure, the perfect currency is valuable. But is such a sheer brute-force approach to security the best we can do?
What makes you think it would require 51% of all computational power? That would only be true if there were no other valuable things to compute, which is very unlikely to be the case.
I don't think you really understand how the protocol works. The transaction rate is rather small and handled entirely by general purpose CPUs. The proof of work uses a fixed-size input made by hashing all the transactions in a block.
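Concretely, what miners hash is a fixed 80-byte header; however many transactions a block holds, they enter only through the 32-byte Merkle root. A sketch with placeholder field values:

    import struct

    # All field values below are placeholders, not real chain data.
    header = struct.pack(
        "<I32s32sIII",
        2,               # version
        b"\x00" * 32,    # hash of the previous block header
        b"\x00" * 32,    # Merkle root committing to every transaction
        1367000000,      # timestamp
        0x1d00ffff,      # "bits": compact encoding of the current target
        0,               # nonce, the field miners grind through
    )
    assert len(header) == 80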
After reading the article above, I had an idea for something similar to folding@home. Essentially a business that pays regular people $X to register as a compute unit and charges businesses to use that compute infrastructure for massive map-reduce type jobs.
The trick would be charging just enough to financially compete with something like AWS, and paying just enough to compete with bitcoin mining.
Or it could be a stupid idea - I don't know. But I wouldn't be surprised if someone smarter than me was able to find an angle that'd make this profitable (or already has?).
> Essentially a business that pays regular people $X to register as a compute unit and charges businesses to use that compute infrastructure for massive map-reduce type jobs.
It's been done. That you've never heard of the company doing this indicates how successful it's been.
This has been tried several times over the last 15 years (e.g. United Devices); in general the idle time on random PCs is worth less than the overhead of organizing it.
Is this still true though? Lots of businesses are spinning up instances in "the cloud" for big compute jobs and then shutting them down. What bigger cloud is there than the millions of idle internet connected devices?
I've only invested about 5 minutes thinking about it, and previous failures are probably a strong indicator that the idea isn't profitable. I'm just questioning whether previous attempts were premature.
There is a market for un-utilised compute (bitcoin/torrent) and a market that requires compute (AWS/Azure/Etc). An intermediary to join these two markets seems like an opportunity (naively).
Nobody wants this, more because it is a security nightmare than anything else. What do you need to compute in a distributed fashion that is mundane enough that you're willing to let random people see the data and the algorithms?
Scientific research with volunteer computers seems to be the one case where it makes sense. Perhaps scientific research with paid computers would also make sense? There, I think, the price point is wrong for incentivizing anyone to contribute resources.
I have heard ideas of having people operate server racks to help heat their homes. That way the compute power would be worth it, and the heat would be useful. People could rent the server racks, and be paid for doing the computations, covering the cost of electricity, plus get a cheaper heating bill.
To really be practical it would require useful homomorphic encryption and fast internet speeds, though.
I actually wonder if something along these lines is the real purpose of Amazon Coins. Amazon is one of very few companies where my default assumption is that the crazy stuff they do is part of a long-term strategy, so the coins are interesting to me. I doubt that the rather superficial analyses we've heard so far are the full story.
They're not computing numbers for the sake of computing numbers.
Imagine a future where Bitcoin dominates the markets. Then you'll have this whole planet dedicated to the computation of hashes for no other sake than computing hashes. Picture an alien civilization coming to meet us and seeing that we spend most of the energy we produce computing numbers and assigning them an arbitrary value.
While I think there's a lot of arbitrary stuff going on in the financial world, it's still less arbitrary than computing numbers for the sake of computing numbers.
The bitcoin network isn't just computing numbers for the sake of computing numbers. Proof-of-work is integral to the "decentralized network of trust" that is bitcoin.
Also, even if bitcoin became the dominant global currency, I highly doubt that it would take up a noticeable portion of the world's computing resources.
That's the really interesting bit about Bitcoin - it's an attempt to create a system that will enable trust in Internet-scale groups.
Currency is obviously a prime candidate for "trust", but I also think Namecoin is tremendously interesting. Next up, PKI certs? Eliminating hard-to-scale single points of trust (whether the DNS roots, VeriSign or governments) is hardly "doing nothing".
I haven't taken the time to fully understand it, but I gather that there are proposed ways to trim the block chain's history. For clients that don't need a long history, I imagine they could just download a snapshot for, say, Jan. 1, 2013, that simply listed all the bitcoins and their owners at the time, rather than all the transactions that happened before that point.
I agree with you. The idea of "mining" these numbers using an ever-increasing amount of computing power is simply absurd. They'd have done better to allocate the coins in some other way .. I don't know how, maybe distributed equally among Hacker News readers? That would be as good as anything, to get the system started.
You misunderstand the purpose of mining (which is really a misnomer). It is necessary to do a distributed, hard-to-manipulate verification of transactions. Creating new bitcoins is just tied to it as an incentive. "Mining" will continue after there are no more new bitcoins. The incentive will change to transaction fees.
Bitcoin will always number crunch. Transactions are processed in blocks, even if there is no or minimal reward for solving the block. Income can be gained from transaction fees.
This is something that has been on my mind for some time: if I understand correctly, the bitcoin generating process is based on computing SHA hashes.
So, I wonder if those computations could be used for something else? Maybe having a database similar to rainbow tables, which would allow people to look up previously calculated hashes to avoid recalculating them...
Almost any proof-of-work algorithm could be used. However, if you add an algorithm which is useful to the outside, it decreases the cost of attack (since you earn something probabilistically on top of simply a block reward). Depending on what kind of algorithms you use, this could be dangerous for network security.
Haha, aside from the obvious problems of storing 2^256 hashes somewhere, having a database system that can handle all of the btc client connections, and undermining the actual proof of work system behind bitcoin's credibility, you'd be creating a useless rainbow table that didn't have any common cleartext passwords hashed in it.
They're making money out of thin air. Useful to, you know, people who haven't been able to do that before.
Meanwhile, consider the accuracy of NOAA's WX machines vis-à-vis the Europeans'. Useful? How about all that nano-trading on Wall Street ... not useful? Blizzard's money machines must be useful to their tens of millions of consumers.
People spend an extraordinary amount of time and effort digging up precious metals, much to the dismay of local populations that are usually uprooted to facilitate this.
In terms of environmental impact, Bitcoin ranks pretty low on the scale. More energy is wasted with people tuning in to "Dancing with the Stars".
The author claims the network is dominated by ASICs. Sure, Avalon has made some chips, and Butterfly Labs has released a handful of their miniature ASIC devices, but I hardly think they dominate the network.
FPGAs are still the largest contributors, I would think. Funnily enough, the author didn't even mention them in his article.
Computing SHA hashes is embarrassingly parallel, but LINPACK is NOT embarrassingly parallel. LINPACK, like many scientific applications, requires a fair amount of communication. Comparing hashes per second to LINPACK scores is really apples to oranges. A better comparison is to simply add up numbers of similar GPUs and CPUs and ignore all the benchmarks.
Bitcoin enthusiasts should also consider that the Top500 is not an exhaustive list; governments have much more computational power than is on the list.
The Bitcoin algorithm should be rethought so that it can be used for valuable computation, not just useless hashing. At this point it's a waste of energy, but that doesn't mean we can't turn it into something useful. I believe we really need to ask ourselves: "What are we trying to accomplish here?" Are we trying to create just a digital medium that promotes the same old self-interest, or are we trying to create a new kind of system that transcends the individual?
The ultimate thing would be for a cryptocurrency to be based on thoughts/comments/ideas, but nobody has quite cracked how to quantify the value of those yet. So the best idea in this space I've managed to get to so far is to have the network process big data to find meaningful patterns.
Not only could the big-data-backed cryptocurrency grow in value off the speculation; you could also charge companies to upload/stream their data into the network and essentially have a giant distributed supercomputer processing it and searching for patterns. Then share a percentage of any revenue with the miners as an added bonus.
I feel we're already at the point where more data is produced than our ability to process, understand and extract useful insights from that data. And the rate at which global data is produced appears to be on a never ending exponential growth curve.
So in one swoop you could lay the foundations for a global distributed currency whilst harnessing the collective computing power of the Internet to solve humanity's problems, gain tremendously powerful insights into human behaviour at all levels from micro to macro, and make us more efficient as a species - with the eventual potential to become the global AI that feeds off all data streams: constantly taking data in, processing, recording, and analysing it, and subsequently feeding the computed knowledge into agents that make real-world decisions and take actions.
If anyone has thoughts, skills or interest in this... hit me up.
> So in one swoop you could lay the foundations for a global distributed currency whilst harnessing the collective computing power of the Internet to solve humanity's problems, gain tremendously powerful insights into human behaviour at all levels.
Exactly. I think it is possible to create a distributed cryptocurrency and use that computation for something useful at the same time.
> Are we trying to create just a digital medium that promotes the same old self-interest, or are we trying to create a new kind of system that transcends the individual?
So, currency is more than anything the medium that enables trade. Trade enables specialisation and cooperation on a massive scale. If the scope and benefit of global trade doesn't "transcend the individual", I struggle to think of anything that does.
The classic "I, Pencil" essay[1], though written polemically in support of capitalism, also serves as a great example of the scale of human cooperation enabled by currency.
Yeah, trade transcends the individual, but profit-based capitalism does not. If your utility function is profit, you will get egotistical behavior. What we should really try to maximize is the number of smiles in this world, because that, I think, is a better measure of the success of a society. We should really strive to feel better, not to have more. Money is an instrument that favors egotistical thinking, counting what we have; but funnily enough, we seem to feel better when we give more, not when we get more.
That makes no sense. Self interest is why it works.
Adam Smith said it succinctly: "It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest. We address ourselves, not to their humanity, but to their self-love, and never talk to them of our own necessities, but of their advantages."
As for "maximising the number of smiles" and "striving to feel better, not to have more", there is research suggesting that the way to do that is to become richer.
It's not "useless hashing". The computations are necessary to secure the network and process transactions. It's not possible to just swap in an algorithm that does something entirely different and still achieve the goals of the Bitcoin network.
The only way using masses of computing power gives you security is by making the 51% attack harder.
To me the whole thing is a manifestation of the bizarre way BTC are handed out, by competitive mining. (No, I don't have a better solution, but can't you see the issue here? The same amount of BTC is created, but ever more power is being used.)
In the case of a botnet, you would have to consider whether you go by the number of calculations each machine a bot runs on can perform, or by the number of calculations each bot can perform while the risk of detection remains insignificant.
Realistically you would have to go with the latter, and in that case I don't think the computing power of most, if not all, botnets would account for much.
Not even close. According to http://realtimebitcoin.info/ about 5 MW are used for Bitcoin. According to http://en.wikipedia.org/wiki/Cost_of_electricity_by_source that would cost somewhere between 250 and 1000 USD per hour. So at most it's about 8.7 million USD (or 0.7% of the total market price of all bitcoins) per year. That's not much at all to support a functioning currency (whose true macroeconomic value is in enabling the exchange of goods and services, not the market price).
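Reproducing that arithmetic:

    power_mw = 5
    kwh_per_hour = power_mw * 1000    # 5,000 kWh each hour
    for usd_per_kwh in (0.05, 0.20):  # rough range across generation sources
        per_hour = kwh_per_hour * usd_per_kwh
        print(f"${per_hour:,.0f}/hour -> ${per_hour * 24 * 365:,.0f}/year")

That gives $250/hour (about $2.2M/year) at the cheap end and $1,000/hour (about $8.76M/year) at the expensive end.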
An interesting aspect of Bitcoin is that the amount of computation it requires is designed to scale up with the amount of available computing power, independent of the number of transactions. Miners cannot increase the rate at which new bitcoins are "mined" - getting more computing power only increases their share. Thus it makes no sense to spend more money on the hardware and electricity for mining than the (basically fixed) stream of newly mined coins is worth.
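A quick illustration of why the arms race raises nobody's revenue (the BTC price is assumed for illustration):

    coins_per_day = 6 * 24 * 25  # ~3,600 BTC/day at a 25 BTC block reward

    def daily_revenue(my_hashrate, network_hashrate, btc_price=100):
        # Issuance is fixed; hashpower only buys a share of the same pie.
        return coins_per_day * (my_hashrate / network_hashrate) * btc_price

    print(daily_revenue(1, 100))  # 1% of the network: ~$3,600/day
    print(daily_revenue(2, 200))  # everyone doubles up: still ~$3,600/day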