Redis is now available on Duostack (Node.js/Ruby Platform) - free invites inside (duostack.com)
35 points by daverecycles on April 11, 2011 | hide | past | favorite | 17 comments



All of the invite codes in the post have been used, but here's some more - sign up here https://www.duostack.com/users/new: raxj-qfur raxq-sbhs ceku-huie lcpv-kxai gpum-nxxu tveb-qswy iblu-iedf swil-fjpp auxc-ckqw nwrk-frgf


All used up. Some more: dybc-arne jjvq-ximp jkef-ctcu cctp-qqxu uvmh-aduv


Just a random thought, but have you considered graphing the order in which invite codes are used (or attempted)? I'm curious what the optimal strategy would be to try to get one (start from the top, start from the bottom, start from the middle?).


Yeah, that crossed my mind actually. That would be pretty interesting. Anecdotally I can say of the first batch, they went 3, 10, 9... (don't know the rest, they went fast).


Interesting thought. People are definitely not going top-down... for example, two of the first invites used were the 3rd and the last.


Thanks for releasing the invite codes here on Hacker News, Dave.


Duostack automatically manages horizontal scaling of your app and vertical scaling of your database.

Auto-scaling is a killer feature; platforms like Heroku or DotCloud don't support it.

Can someone explain how this will work, and the pricing? It looks like it's a matter of setting "Instances" and "Connection Concurrency", but the docs are WIP (no explanation of how the latter differs from the former): http://docs.duostack.com/ruby/paid-features#pricing


I don't know how the DuoStack guys are scaling this, but horizontal scaling of your app is pretty trivial (for each Y req/s, launch X instances of the app, with a minimum of Z instances running at all times).

This is how Google App Engine does it (btw: you can do it with nginx + ngx_supervisord + supervisord).
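The rule of thumb above (for each Y req/s, launch X instances, with a floor of Z) can be sketched roughly like this; the function name, default capacities, and parameter values are all hypothetical, not anything DuoStack or App Engine actually use:

```python
import math

def target_instances(req_per_sec: float,
                     reqs_per_instance: float = 50.0,  # Y: assumed capacity of one instance
                     instances_per_step: int = 1,      # X: instances launched per step
                     min_instances: int = 2) -> int:   # Z: floor kept running at all times
    """Return the desired instance count for the current request rate."""
    # Round up so a fractional instance's worth of load still gets capacity.
    needed = math.ceil(req_per_sec / reqs_per_instance) * instances_per_step
    return max(min_instances, needed)
```

A scaler would just poll the request rate periodically and ask the process supervisor (e.g. supervisord) to converge on this count.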


Interesting, thanks! But here it's baked in (like App Engine). Isn't the point of a PaaS like these to let someone else worry about the sysadmining?


Yeah, definitely... At least it should be :)

However, I've been closely observing this space for the last 2 years (from both the infrastructure-engineer and application-developer perspectives) and I'm a bit disappointed with the current state of the art.

Unfortunately, the majority of existing PaaS providers are "cloud" equivalents of the "one-click installers" and/or "managed hosting" of the web 1.0 era... Pretty much everything they do boils down to daemon installation and provisioning. They also charge for unused (but allocated) resources, which should be forbidden in the cloud era.

Of course, there are some exceptions :) Two of them being:

- Google App Engine - pricing per CPU-time and bandwidth usage, with horizontal auto-scaling based on request rate. But they took it a bit too far with their Datastore, to the point that you need to design your apps around it from the beginning.

- SQL Azure - highly available and fault tolerant version of SQL Server.


We have a novel approach to autoscaling and we'll make an announcement later in the beta with details. :)


I just got an invite, and I am reading through the docs. It seems like background jobs are not available currently, right?


That's correct, but it's one of our highest priority feature additions. We'll be adding them soon.


Hmm... decisions, decisions. Now I really can't decide who to go with for Node hosting; I'm torn between Nodester, DotCloud, Akshell and now Duostack. Guess the auto horizontal scaling is a huge plus.


If you need a Node platform, it's a difficult choice indeed.

On the other hand, if you want one platform for all your projects, whether they're Node, Ruby, Python or PHP apps... Then the choice is easier :)


What about no.de? ;)


Very nice. I've been using Heroku off and on for demo purposes but never for production. I'd like to see if Duostack can make me feel more comfortable with that.



