MapRejuice - Distributed Computing at its finest (maprejuice.com)
47 points by chrisbroadfoot on Nov 24, 2010 | 20 comments


It seems like the basic assumption behind this kind of thing no longer holds: when systems in the past couldn't do power management, idle processing power really was wasted, and things like this simply utilized it.

Many modern systems throttle back their power consumption when load permits, saving battery life or AC power, so this really does pass a cost on to the end user.

It certainly will impact the user experience on a laptop by draining the battery if you hang around on sites that use it a lot. I'm not sure how they can avoid this.

It would be nicer if they made it visible on the sites and had a global opt-out.


I really hate the idea of "we'll do something not-so-nice and allow people to opt out". Most people won't bother, it doesn't scale (how many things do I have to opt out of, again?), and it's passing your problem on to me.

That said, "click here to donate your spare CPU cycles to awesomesite.com" would be just, well, awesome.


I never did it, but the joke among my friends and me was always that we should put the spammers hitting our site to good use. We always wanted a way to make them contribute to folding@home while spamming the site, but we settled for putting them into an endless loop of reCAPTCHA instead.


I agree - that definitely makes it feel more ethical.


This uses Web Workers ("JavaScript threads") to make your visitors compute stuff for you.

I imagine people travelling with laptops would be less than pleased (they do detect "mobile" browsers and shut off, apparently). I'm also not sure how useful an unreliable MapReduce node running JavaScript is...
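For anyone who hasn't used them, the pattern is roughly this (a minimal sketch with a toy map function and a hypothetical /results endpoint, not MapRejuice's actual API):

    // worker.js -- runs off the main thread so the page stays responsive
    self.onmessage = function (e) {
      var results = e.data.records.map(function (rec) {
        return [rec.key, rec.value.length];  // toy map function
      });
      self.postMessage(results);
    };

    // page script -- spawn the worker, ship the results back
    var worker = new Worker('worker.js');
    worker.onmessage = function (e) {
      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/results');  // hypothetical endpoint
      xhr.send(JSON.stringify(e.data));
    };
    worker.postMessage({ records: [{ key: 'a', value: 'hello' }] });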


Yeah, we don't run it in mobile browsers for that reason. There are other ethical questions to be posed (data usage, etc.).

There's a previous discussion here if you're interested:

http://news.ycombinator.com/item?id=1645520


Seems like the computing ability you get out of these clients is dwarfed by the amount of data transfer you'll have to do in most cases. Unless you need something like a second of CPU time per 50 KB of data you send to people, this doesn't seem to make a whole lot of sense.
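Back of the envelope, with made-up numbers just to illustrate the ratio:

    // assume a ~1 Mbps effective link to the visitor (illustrative)
    var taskBytes = 50 * 1024;                         // 50 KB payload
    var linkBitsPerSec = 1e6;
    var transferSec = taskBytes * 8 / linkBitsPerSec;  // ~0.41 s just moving data
    // if the task needs much less CPU time than this, the network
    // dominates and the "free" compute is a net loss
    console.log(transferSec);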


The idea of using web browsers as some sort of compute nodes in a distributed system has been kicking around for ages (I should know, I implemented one for my honours thesis almost eight years ago!).

The trouble with it is the limited type of work it's actually useful for. For one, latency is a killer (we're talking about people's home/work computers here), which means it will only really work on embarrassingly parallel problems. Secondly, the nodes themselves are inherently unreliable: a MapRejuice computation is terminated the moment the user closes their browser window/tab. Unless it has serious checkpointing or some other fault-tolerance mechanism, I fear it'll remain, like all the similar systems that came before it, better in theory than in practice.
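Periodic checkpointing from the worker would at least bound the loss; a sketch of the idea (illustrative only, not something MapRejuice is known to do):

    // worker.js -- post partial results every N records, so a closed
    // tab only loses the work done since the last checkpoint
    self.onmessage = function (e) {
      var records = e.data.records;
      var every = e.data.checkpointEvery;
      var partial = [];
      for (var i = 0; i < records.length; i++) {
        partial.push([records[i].key, records[i].value.length]);  // toy map step
        if ((i + 1) % every === 0) {
          self.postMessage({ type: 'checkpoint', upTo: i, partial: partial });
          partial = [];
        }
      }
      self.postMessage({ type: 'done', partial: partial });
    };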


+1 for the name, not so much for the idea.



Definitely a cool idea, but wouldn't a single high-powered GPU pretty much blow away thousands of browsers running JavaScript?


What about thousands of browsers using their GPUs?

http://learningwebgl.com/blog/?p=1828

I keep bringing this up in these threads because I hope it will catch the interest of some real WebGL pros who could really improve on it.
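The basic trick, for anyone curious (a minimal WebGL 1 sketch of my own, not code from the linked post): run a fragment shader over a full-screen quad so it executes once per pixel, treat each pixel as one cell of the computation, then read the results back:

    var canvas = document.createElement('canvas');
    canvas.width = canvas.height = 256;  // one "cell" per pixel
    var gl = canvas.getContext('webgl');

    function compile(type, src) {
      var s = gl.createShader(type);
      gl.shaderSource(s, src);
      gl.compileShader(s);
      return s;
    }

    var vs = compile(gl.VERTEX_SHADER,
      'attribute vec2 pos; varying vec2 uv;' +
      'void main() { uv = pos * 0.5 + 0.5; gl_Position = vec4(pos, 0.0, 1.0); }');
    // toy per-cell computation; a real kernel would sample input textures
    var fs = compile(gl.FRAGMENT_SHADER,
      'precision mediump float; varying vec2 uv;' +
      'void main() { gl_FragColor = vec4(uv.x * uv.y, 0.0, 0.0, 1.0); }');

    var prog = gl.createProgram();
    gl.attachShader(prog, vs);
    gl.attachShader(prog, fs);
    gl.linkProgram(prog);
    gl.useProgram(prog);

    // full-screen quad as a triangle strip covering clip space
    var buf = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buf);
    gl.bufferData(gl.ARRAY_BUFFER,
      new Float32Array([-1, -1, 1, -1, -1, 1, 1, 1]), gl.STATIC_DRAW);
    var loc = gl.getAttribLocation(prog, 'pos');
    gl.enableVertexAttribArray(loc);
    gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
    gl.viewport(0, 0, 256, 256);
    gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);

    // read the results back to the CPU -- the readback is the slow part
    var out = new Uint8Array(256 * 256 * 4);
    gl.readPixels(0, 0, 256, 256, gl.RGBA, gl.UNSIGNED_BYTE, out);

The fiddly part is that inputs and outputs have to be laundered through textures and (by default) 8-bit colour channels, which is exactly why people keep wishing for something like WebCL.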


I've been hoping for some crazy thing like WebCL to happen. I know it's nuts, but imagine how useful that could be for scientific and engineering apps. A little climate sim in your browser :)

Yeah, I know, this isn't what browsers are meant to do :) But I've been looking at some algorithms for CUDA and OpenCL that extract isosurfaces from volume data, and it could be cool to do that in a browser for medical imaging purposes.

Who needs C when JavaScript can do all that ;p

EDIT: In all seriousness, this could be really useful for game developers on mobile. It could help them get closer to native games in terms of performance.


CPUs and GPUs are good at different tasks. For some jobs a single GPU probably will be much faster, but there are jobs where CPUs will still excel.


This thing doesn't make sense to me. The MapReduce framework is usually applied to very large data sets, and lots of researchers are working on minimizing network IO, either by increasing data locality or with smarter scheduling.

Transferring all of that data to client browsers is a significant hurdle.


Man, can you imagine if Google stuck this on their home page, or on Gmail for that matter?

Pretty wild... I wonder how that compute power would compare to the largest farms. It only works on ridiculously parallel problems, though.


So my CPU spikes to 100% every time I visit the Google homepage? No thanks. That script would get blocked pretty quick.


Yeah, I don't like to run my CPU over 30% in case it runs out of cycles.


Google's toolbar used to optionally let you run folding@home through Google Compute.


Say Google has 450k servers (http://en.wikipedia.org/wiki/Google_platform), so it has roughly 11M hours of commodity x86 time a day.

If Google gets 2,937M queries a day (http://searchengineland.com/by-the-numbers-twitter-vs-facebo...)

then each query would have to do about 13 seconds of work to be equivalent.

Of course this is total nonsense (in many ways); I just wondered how it would compare.
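The arithmetic, spelled out:

    // using the (rough) numbers above
    var servers = 450e3;
    var serverSecPerDay = servers * 24 * 3600;     // ~3.9e10 server-seconds/day
    var queriesPerDay = 2937e6;
    console.log(serverSecPerDay / queriesPerDay);  // ~13.2 s of work per query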



