The concept was to rethink the way we browse Craigslist and shop online. The main section, the "homepage", is a tree map of recent Craigslist postings sorted by popularity using a slightly modified version of Hacker News' ranking algorithm.
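For reference, the unmodified Hacker News formula looks roughly like the sketch below; the GRAVITY value is the commonly cited 1.8, and our "slight modification" isn't reproduced here.

    # Classic HN ranking: newer, more-upvoted posts float to the top.
    # gravity=1.8 is the commonly cited default, not our tuned value.
    def rank_score(votes, age_hours, gravity=1.8):
        return (votes - 1) / (age_hours + 2) ** gravity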
The newest page is a masonry view of the real-time stream of Craigslist posts as they are added to the system (much like Hacker News' newest page).
The top page is the masonry view version of the homepage.
The game page is a FaceMash-like game for the posts, which influences the overall scores and popularity of items.
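Each head-to-head pick can feed an Elo-style rating update, the approach FaceMash itself used; here's a minimal sketch, with the K factor purely illustrative rather than our exact numbers.

    # One plausible FaceMash-style scoring rule: an Elo update per pick.
    # k controls how much a single pick moves the ratings (illustrative).
    def elo_update(winner, loser, k=32):
        expected_win = 1 / (1 + 10 ** ((loser - winner) / 400))
        winner += k * (1 - expected_win)
        loser -= k * (1 - expected_win)
        return winner, loser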
You'd be surprised, but our entire service does not make a single request to the Craigslist servers.
We get the Craigslist postings from the 3taps API, which itself scrapes Google's caches to import the data.
The images are typically hosted on services like ImageShack or Flickr. We store everything in our own database and keep track of our own version of the data, which we stream from the 3taps firehose.
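Conceptually, the ingest loop looks something like this sketch; the endpoint URL and field names are placeholders, not the real 3taps firehose API.

    # Hypothetical firehose consumer: mirror each posting into a local DB.
    # The URL and JSON fields below are placeholders, NOT the 3taps API.
    import json, sqlite3, requests

    db = sqlite3.connect("postings.db")
    db.execute("CREATE TABLE IF NOT EXISTS postings"
               " (id TEXT PRIMARY KEY, title TEXT, image_url TEXT, raw TEXT)")

    resp = requests.get("https://firehose.example.com/stream", stream=True, timeout=60)
    for line in resp.iter_lines():
        if not line:
            continue
        post = json.loads(line)
        # Upsert so stream replays don't create duplicate rows.
        db.execute("INSERT OR REPLACE INTO postings VALUES (?, ?, ?, ?)",
                   (post["id"], post.get("title"), post.get("image_url"), line.decode()))
        db.commit()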
Right, fair enough. But even then, those requests come from the client's browser rather than our server, which is much less likely to be a problem for CL.
I don't understand the Game part of the website. How does picking one item from a pair of completely different items (e.g. a motorcycle helmet vs. castor oil) influence the score?
Considering it was a hackathon, maybe the algorithm you developed isn't yet able to pair up two similar objects... unless you intentionally chose to do it the way you did. Just curious why.
The images are just pulled from a single category on Craigslist, so we currently have no way of separating the motorcycles from the motorcycle helmets without further filtering.
This is pretty cool. I am currently shopping for a motorcycle and it is much nicer to be able to browse the images. I do wish that it didn't gray out the image on hover though.