It seems like the basic assumption behind this kind of thing no longer holds: when systems couldn't do power management, idle processing power really was wasted, and schemes like this simply harvested it.
Many modern systems throttle back their power consumption when load permits, saving battery life or AC power, so this really does pass a cost on to the end user.
It certainly will impact the user experience on a laptop by draining the battery if you hang around on sites that use it a lot. I'm not sure how they can avoid this.
It would be nicer if they made it visible on the sites and had a global opt-out.
I really hate the idea of "we'll do something not-so-nice and allow people to opt out". Most people won't bother, it doesn't scale (how many things do I have to opt out of, again?), and it's passing your problem on to me.
That said, "click here to donate your spare CPU cycles to awesomesite.com" would be just, well, awesome.
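Something like this would be enough on the client side (a sketch only; startComputing() and the button id are names I'm making up):

    // Opt-in gate: nothing runs until the visitor explicitly agrees,
    // and the choice is remembered across visits via localStorage.
    function maybeCompute() {
      if (localStorage.getItem('donateCycles') === 'yes') {
        startComputing(); // hypothetical entry point for the compute job
      }
    }

    document.getElementById('donate-btn').onclick = function () {
      localStorage.setItem('donateCycles', 'yes');
      startComputing();
    };

    maybeCompute();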
I never did it, but the joke between my friends and me was always that we should put the spammers hitting our site to good use. We always wanted a way to make them contribute to folding@home while spamming the site, but we settled for putting them into an endless loop of reCAPTCHA instead.
This uses Web Workers ("JavaScript threads") to make your visitors compute stuff for you.
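For anyone who hasn't played with them, the API is roughly this (a minimal sketch; sendResultToServer() is a made-up upload function):

    // main page: spawn a background thread and hand it a work unit
    var worker = new Worker('job.js');
    worker.onmessage = function (e) {
      sendResultToServer(e.data); // result arrives without blocking the UI
    };
    worker.postMessage({input: [1, 2, 3, 4]});

    // job.js: runs off the main thread
    onmessage = function (e) {
      var sum = e.data.input.reduce(function (a, b) { return a + b; }, 0);
      postMessage(sum);
    };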
I imagine people travelling with laptops would be less than pleased (they do detect "mobile" browsers and shut off, apparently; presumably something like the sniff below). I'm also not sure how useful an unreliable MapReduce node running JavaScript is...
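That's an assumption on my part, not their actual code, but a user-agent check is the usual way:

    // Skip battery-powered clients before spawning any workers.
    function looksMobile() {
      return /Mobi|Android|iPhone|iPad/i.test(navigator.userAgent);
    }

    if (!looksMobile()) {
      startComputeWorker(); // hypothetical entry point
    }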
Seems like the computing ability you're getting out of these clients is dwarfed by the amount of data transfer you're going to have to do in most cases. Unless you need something like a second of CPU time per 50 KB of data you're sending to people, this doesn't make a whole lot of sense.
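Back-of-the-envelope, with made-up numbers:

    // All figures are assumptions, just to show the break-even math.
    var uplink = 10 * 1024 * 1024 / 8; // 10 Mbit/s server uplink, in bytes/s
    var bytesPerJob = 50 * 1024;       // 50 KB shipped per work unit
    var cpuSecPerJob = 1;              // 1 s of client CPU per work unit

    var jobsPerSec = uplink / bytesPerJob;     // ~26 jobs/s
    var harvested = jobsPerSec * cpuSecPerJob; // ~26 CPU-seconds per second

    // i.e. that uplink saturates at roughly 26 clients' worth of CPU;
    // any less compute per byte and you're bandwidth-bound, not CPU-bound.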
The idea of using web browsers as some sort of compute nodes in a distributed system has been kicking around for ages (I should know, I implemented one for my honours thesis almost eight years ago!).
The trouble with it is the limited type of work it's actually useful for. For one, latency is a killer (we're talking about people's home/work computers here), which means it'll only really work on embarrassingly parallel problems. Secondly, the nodes themselves are inherently unreliable: a MapRejuice computation is terminated as soon as the user closes their browser window/tab. Unless it has some serious checkpointing or another fault-tolerance mechanism, I fear it'll remain, like all the similar systems that came before it, better in theory than in practice.
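The checkpointing wouldn't have to be fancy, though. Even having the worker post partial results back every N items bounds the loss to one chunk. A rough sketch (chunk size and message shape are invented, and map() stands in for the job's map function):

    // job.js: checkpoint every CHUNK items so a closed tab only
    // loses the in-flight chunk, not the whole work unit.
    onmessage = function (e) {
      var items = e.data.items, partial = [], CHUNK = 100;
      for (var i = 0; i < items.length; i++) {
        partial.push(map(items[i]));
        if ((i + 1) % CHUNK === 0) {
          postMessage({type: 'checkpoint', done: i + 1, results: partial});
          partial = [];
        }
      }
      postMessage({type: 'finished', results: partial});
    };

The main page would relay those checkpoint messages to the server, which can then reassign only the unfinished slice when a node vanishes.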
I've been hoping for some crazy thing like WebCL to happen. I know it's nuts, but imagine how useful that could be for scientific and engineering apps. Little climate sim in your browser :)
Yeah, I know, this isn't what browsers are meant to do :) But I've been looking at some algorithms for CUDA and OpenCL to extract isosurfaces from volume data, and it could be cool to do that in a browser for medical imaging purposes.
Who needs C when JavaScript can do all that ;p
EDIT: In all seriousness, this could be really useful for game developers on mobile. It could help them get closer to native-game performance.
This thing doesn't make sense to me. The MapReduce framework is usually applied to very large data sets, and lots of researchers are working on minimizing network IO, either by increasing data locality or through smarter scheduling.
Transferring all of that data to client browsers is a significant hurdle.