I've never, ever hesitated in giving out my email address to anyone (that includes sites that I know are scammers and would sell my email address to anyone practically for the asking). In the past ??? years (since GMail was in beta), I've received fewer than 5 spam messages.
(I know it's a little tangential to your point, but I wanted to say that at least with Gmail you don't need to fear giving your email address out.)
ARM is introducing 64-bit chipsets within the next year or so. The power savings will be worth the switch for cloud hosts. Most devices sold already run ARM.
I have to say a big [citation needed] to the claim of ARM beating x86 high end chips on performance per watt, at least on general workloads.
I think it's common to extrapolate Atom vs ARM to Xeon vs ARM in HPC, without thinking through the implications. We may well get higher performance/watt for single threads under ARM - I'm not disputing that, especially for integer work.
However, Amdahl's law is going to rear its head. In the same machine, a higher number of lower-performance threads is going to cause lock contention. You'll also have to split computations over more boxes, since the absolute performance of an Intel server will remain far higher (by 2014, we're talking 64-core/128-thread Haswell). Both of these are likely to be a massive tax on performance.
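To make the Amdahl's law point concrete, here's a rough back-of-envelope sketch in Python (the serial fraction and per-core performance figures are made up for illustration, not measurements):

    # Rough Amdahl's-law sketch; all numbers are hypothetical.
    def speedup(serial_fraction, n_cores):
        # Amdahl's law: speedup over one core when a fraction of the work is serial
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

    s = 0.05  # suppose 5% of the workload is serialized (locks, coordination)
    fast = 1.0 * speedup(s, 16)   # 16 fast cores at relative per-core perf 1.0 -> ~9.1x
    slow = 0.4 * speedup(s, 64)   # 64 slow cores at relative per-core perf 0.4 -> ~6.2x
    print(fast, slow)

Even with four times as many cores, the slower threads never make up the gap once any serialization enters the picture.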
To fight this, ARM performance per core is likely to see a substantial rise, both through higher clock frequencies and through greater single-core complexity. However, this will work directly against the two things that make ARM performance/watt so impressive currently.
Also, Intel and AMD are both built around making those 100-watt-class processors fast, and doing it well. They really stumbled entering the Atom market, both because of a weak design (the chipset drew more power than the CPU itself!) and because of a lack of commitment (using 2-4 year old process nodes).
I think we're likely to see similar teething pains with companies trying to enter the server market with ARM. The institutional knowledge just won't be there. Making a cache architecture that effectively feeds 64 cores? Way different from improving power draw on a mobile CPU for the seventh generation. I expect it will be at least a few generations before design teams are fully up to speed.
I'm not saying we won't see certain workloads that are better off under ARM; memcached and static HTTP serving are both likely to do well, since they're effectively just shuffling bits around, aren't particularly CPU intensive, and are embarrassingly parallel. But I believe they'll turn out to be the exception, not the rule.
Which is to say, there's nothing magic about ARM that will let them beat x86 at the high end. They'll have to fight for it, and against Intel and AMD on their own turf no less.
[I copied this from a post I made a few months ago after I realized I was typing out basically the same thing]
Why do you assume ARM-based processors will necessarily have a higher perf/W than x86? This has been a common claim (usually because of the perceived size of the x86 instruction decoder), but Medfield has proven that to be false:
Benchmarks and lies, yada yada. Atom wins on some things (in particular it tends to kick the A9's butt on Javascript benchmarks, which rely on single threaded dispatch and high clock speeds) and loses on others (it's a single core with hyperthreading, where most ARM SoCs are dual core).
Actually depending on the benchmark, the low-clocked Ivy Bridge CPUs tend to do quite well in "performance per watt" vs. ARM SoCs too. They lose big in idle power, but under load those enormous L3 caches and the uOp cache can give them 2-3x the performance per clock of the in-order A9 (and they run about 2x as fast, and draw about 4-10x as much power at peak, so it actually comes out very (!) roughly even).
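To spell out the "roughly even" arithmetic with the figures above (illustrative midpoints, since the exact ratios vary by benchmark):

    # Back-of-envelope perf/W comparison using the rough ratios above (illustrative only).
    perf_per_clock = 2.5   # Ivy Bridge vs. in-order A9, middle of the claimed 2-3x range
    clock_ratio    = 2.0   # Ivy Bridge runs about twice the clock
    power_ratio    = 7.0   # peak power draw, middle of the 4-10x range

    performance_ratio   = perf_per_clock * clock_ratio      # ~5x the absolute performance
    perf_per_watt_ratio = performance_ratio / power_ratio   # ~0.7x, i.e. roughly a wash
    print(performance_ratio, perf_per_watt_ratio)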
ARM has a long way to go before they are legitimately competitive in the server space. But Intel still isn't anything more than "broadly competitive with 2-year-old devices" in the mobile world. Over time I'd expect the architectures to converge from both directions, but I don't feel lucky enough to guess at which one will "win".
oh, I'm not claiming that x86 is going to win in mobile or anything like that. My only point is that based on devices shipping today, the choice of ISA does not cause huge power efficiency disparities and that there's no reason to think that will change going forward.
It is very unclear. Performance per watt on server workloads in currently shipping ARM hardware is very poor, and IO performance is terrible. I'm waiting to get hold of some of the new server-oriented systems (Calxeda etc.) once they actually ship. Most ARM hardware is two process generations behind Intel. I don't think 64 bit helps that much. A large memory system will consume much of its power in the RAM, so the difference between ARM and Intel per gig is smaller, especially as no one is likely to stick 1TB in an ARM system, so you are likely to end up with more CPUs anyway.
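A crude illustration of the RAM point, with assumed per-DIMM and per-CPU wattages (made-up figures, not measurements):

    # Crude sketch of why DRAM power shrinks the ARM-vs-Intel gap in big-memory boxes.
    # All wattages are assumptions for illustration.
    def system_power(cpu_watts, dimms, watts_per_dimm=5.0):
        return cpu_watts + dimms * watts_per_dimm

    intel_box = system_power(cpu_watts=95.0, dimms=32)  # e.g. 256GB of RAM -> ~255W
    arm_box   = system_power(cpu_watts=15.0, dimms=32)  # same RAM, small CPU -> ~175W
    print(arm_box / intel_box)  # ~0.69 at the system level, vs ~0.16 for the CPUs alone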
It is possible it will work out, but theory and reality are so far apart right now that I am keeping my options open; I was going to start a business in this space, but am holding off at this stage.
They are right now in the process of removing proprietary code and moving toward a modular architecture. I think it's more likely they could implement libvirt as a plugin. The focus isn't to make a pluggable architecture for VM emulators (libvirt), but to make Vagrant itself more robust by removing its ties to VirtualBox with a plugin system.
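To illustrate what that provider/plugin split looks like in the abstract, here's a hypothetical sketch in Python (Vagrant's real plugin API is in Ruby and differs in the details; all names below are made up):

    # Hypothetical provider-plugin registry; not Vagrant's actual API.
    class Provider:
        def up(self, machine):
            raise NotImplementedError

    _providers = {}

    def register_provider(name, cls):
        _providers[name] = cls

    class VirtualBoxProvider(Provider):
        def up(self, machine):
            print("booting %s via VirtualBox" % machine)

    register_provider("virtualbox", VirtualBoxProvider)

    # The core tool looks providers up by name instead of hard-coding VirtualBox:
    _providers["virtualbox"]().up("web-1")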
> We start it, early adopters. The mass picks up on it after a few years, only if it gets started in the first place.
Which, coincidentally, is a LOT of what you can see happening these days with "social networking" and "cloud computing" and all those other hypes and trends that basically existed very well in the 90s and before but now the public is "catching up" with technical developments.
Can you provide further reading on that? I'd love to find out more about how exactly dark energy "absorbed" Einstein's ideas, and I'm sure I'm not alone here.
AS3 -> Dart -> JS, a bit much don't you think? I'm not sure who is trying to use a web stack three levels deep, and Dart-based no less. I'm skeptical of how long Dart is going to last at Google, but this is definitely an impressive piece of work regardless. If it was AS3 -> JS and worked as well as it does now, I feel the web would collectively lose its shit. Unfortunately though, this is tied to Google, and I don't think many people want to be stuck being supported solely by Google anymore.
think of all the AS3 developers starting to feel left out in the cold now that Adobe's walking away from Flash.
while I'd agree that now's probably the time to jump into JS and HTML5, some people think Dart has a future, and I guess this opens up its potential market.
Adobe is walking away from the mobile player but still has AIR out in the wild, which produces a lot of good chart-topping titles on the Android and iOS app stores.
In my opinion, it is better to pick up Haxe if someone is coming from AS3. You can use JavaScript for the HTML5 compile target, you get extended Mr. Doob three.js support provided by the author himself, node.js through the Noxe Lib extends, and much more. And that is just for the HTML5 target.
It even compiles to Android and iOS targets through the Haxe NME framework.
They are adding Java support to the thing, and it has been doing C++ and PHP for some years. There are no silly ";" fights, and honestly it is good to be able to use the same source code to run on multiple platforms.
Though Dart may well pick up and Easel is already becoming a thing.
agreed. Dart, even though it has some awesome features, is still a technology preview, so I also wonder when it will be "stable" (some APIs aren't "finished" yet), and it's a Google-only thing. On the project's GitHub page, you can read "The problem with JavaScript is that it sucks." ... JavaScript sucks if you don't know how it works, just like any other language ... and what's the point of writing this when Dart itself compiles to JavaScript (who the fuck uses Dartium as their primary browser)? Finally, the demo throws 18 errors ... fail.
It should be out of technology preview later this year.
> JavaScript sucks if you don't know how it works
It's possible to understand JS and want better. I think the people behind V8 and Dart know how JS works.
> what's the point of writing this when Dart itself compiles to JavaScript
What's the point of writing anything in any language? The Dart VM will be included with Chrome, and other browsers are free to include one too; Dart->JS is for browsers that don't, and the goal is that it'll perform similarly to hand-written JS.
> who the fuck uses Dartium as their primary browser
Nobody. It's for development until the Dart VM is included with Chrome.
Exactly -- that's why this is exciting, particularly the Flash / interactive part. The authors have done a very good job given what they have to work with, under the assumption that they're only working with a preview and that the primary medium will be a VM with a JS fallback. The performance, all said, is very, very good.