Hah! tectonic and I applied to YC with almost exactly this in 2009?!
We went as far as building a browser-based IDE-like environment for generating these, and a language called parsley for expressing the scrapes. If you're interested in this, you could check out some of our related open source libraries:

http://selectorgadget.com

https://github.com/fizx/parsley

https://github.com/fizx/parsley-ruby

https://github.com/fizx/pyparsley

https://github.com/fizx/csvget

Edit: I just open-sourced the scraping wiki site we created here: https://github.com/fizx/parselets_com
Oh! Selector gadget is wonderful, and I still use it to this day. It's so easy to forget that there are real people behind the tools we use - so fizx and tectonic, I thank you!
SelectorGadget is great; I haven't actually tried your other creations yet, but did you consider taping SelectorGadget to a proxy, so you can scrape sites and store the paths you found in one go? That would massively enhance the process imho :) Maybe Apify can do that, but I hope they put that on GitHub as well; I'm not a great fan of closed-source/cloud development tools.
Please do use selectorgadget! And if you'd like to push anything back, or chat about parsing in general, send me a message. I have a more advanced branch of SG that can generate better selectors, but I haven't pushed it out yet. It's all in the repo.
Thanks for suggesting SelectorGadget. I ended up building an in-browser Firebug+FireFinder clone to make the app even easier to use. Please check out "Preview webpage" for any API.
I wish HN had an API which could write - the hnsearch one was read-only when I used it. I tried writing a tool for HN (https://github.com/pbiggar/hackerite) that needed to be able to upvote stories, and although hacks existed to make it work, it wasn't a very pleasant experience.
My problem with the HackerNews API (having done something like this -- the Hacker News Filter on GitHub) is that you get throttled after you hit a certain number of HTTP requests, and your IP gets banned for a certain amount of time.
So as nice as this is, it simply won't work here for the many people who would like to use near-live data on HN.
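Pacing the requests client-side helps a bit; here's a minimal sketch of the idea (the 30-second spacing is just a guess, not HN's actual threshold, and polite_get is a made-up helper):

    import time
    import requests

    MIN_INTERVAL = 30.0        # guessed spacing between requests, in seconds
    _last_request_at = [0.0]

    def polite_get(url):
        # Sleep long enough that requests never arrive in a burst; the actual
        # threshold HN uses isn't documented, so MIN_INTERVAL is a guess.
        wait = MIN_INTERVAL - (time.time() - _last_request_at[0])
        if wait > 0:
            time.sleep(wait)
        _last_request_at[0] = time.time()
        return requests.get(url)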
The page contents are cached and the page is re-fetched every hour. In fact, right now the cache is expired from the backend only for the HackerNews API - the expiry feature hasn't been pushed yet.
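Conceptually the hourly cache is just a TTL check - a simplified sketch, not the actual backend code (the fetch argument stands in for whatever does the real HTTP request):

    import time

    CACHE_TTL = 3600  # one hour, per the comment above
    _cache = {}       # url -> (fetched_at, body)

    def cached_fetch(url, fetch):
        entry = _cache.get(url)
        if entry and time.time() - entry[0] < CACHE_TTL:
            return entry[1]          # still fresh, serve the cached copy
        body = fetch(url)            # otherwise hit the live page...
        _cache[url] = (time.time(), body)
        return body                  # ...and remember it for the next hour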
This API is extremely unreliable and has a great deal of functionality missing as well (commenting, voting). The original developer is also no longer maintaining it. I tried to build a Hacker News app using it some time ago and abandoned the idea very quickly.
I have been using it for a few years now. It does give an error quite often (1 out of 10 requests on average), but for what I'm using it for, it is pretty solid (as long as I retry when these errors occur).
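The retry itself is nothing fancy - roughly this (the error rate and retry counts come from my own usage, nothing official):

    import time
    import requests

    def get_with_retries(url, attempts=3, backoff=2.0):
        # Roughly 1 in 10 calls fails, so a couple of retries with a short,
        # growing pause is usually enough to get a good response.
        for attempt in range(attempts):
            try:
                resp = requests.get(url, timeout=10)
                resp.raise_for_status()
                return resp.json()
            except (requests.RequestException, ValueError):
                if attempt == attempts - 1:
                    raise
                time.sleep(backoff * (attempt + 1))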
On the missing functionality: it used to have it. The problem is that it not only required a username/password, it also got caught in HN's safety net, which prevents multiple accounts on the same IP from doing a lot of things.
So it can work as a library, but not as a server-side API - which is why he removed it.
For another project of mine[1], I used the Hacker News Search API[2], which is really consistent and really powerful, and is maintained by the YC company that does ThriftDB[3].
The reason that it's so unreliable is that my server's IP address gets banned by YC when the requests are too fast, and I have it hosted on a small, cheap server. There are ways around this, but it just isn't worth the time or money.
Can you add support for taking an existing JSON API (rather than scraping HTML)? This is useful for APIs that are accessible with neither CORS nor JSONP - APIs provided by incompetent mental midgets who don't answer emails or participate in their Google Group (cough MBTA cough).
If I understood correctly, JSONP supports only GET, but CORS supports all HTTP methods. APIfy can only provide GET requests (Index and Show) over a static HTML page.
That's one of the advantages of CORS. The other advantage is that it allows cross-domain XMLHttpRequest requests in the browser, which have better error handling than JSONP.
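To make the difference concrete, here's a minimal sketch of a server endpoint that supports both - a hypothetical Flask app, not Apify's actual code. The JSONP branch only makes sense for GET, because the response is consumed as a script include, while the CORS header works for whatever methods the server chooses to allow:

    import json
    from flask import Flask, request, make_response

    app = Flask(__name__)

    @app.route("/posts.json", methods=["GET", "POST"])
    def posts():
        data = {"posts": ["example item"]}
        callback = request.args.get("callback")
        if callback and request.method == "GET":
            # JSONP: wrap the JSON in the caller's function name; GET only,
            # and errors just show up as broken <script> tags.
            body = "%s(%s);" % (callback, json.dumps(data))
            return make_response(body, 200, {"Content-Type": "application/javascript"})
        # CORS: plain JSON plus a header that lets browsers make cross-domain
        # XMLHttpRequest calls with real status codes and error handling.
        resp = make_response(json.dumps(data), 200, {"Content-Type": "application/json"})
        resp.headers["Access-Control-Allow-Origin"] = "*"
        return resp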
So, it's basically a web scraper, but with a JSON API. The API input is limited to a single parameter that indexes the record to be scraped. The API output is taken from that indexed record, consisting of a set of scraped elements within that record, and presented as JSON, with attributes named as the user specified.
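Conceptually, something like this sketch (my own illustration of the model, not Apify's code - the URL, selectors, and attribute names are made up):

    import json
    import requests
    from bs4 import BeautifulSoup

    # User-defined mapping of output attribute names to CSS selectors.
    ATTRIBUTES = {
        "title": "td.title a",
        "score": "td.subtext span",
    }

    def scrape_record(url, index):
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        record = {}
        for name, selector in ATTRIBUTES.items():
            matches = soup.select(selector)
            # The single API parameter just picks which matched record to return.
            record[name] = matches[index].get_text(strip=True) if index < len(matches) else None
        return json.dumps(record)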
Although this is limited to a list of renamed records, it could be extended (if needed), and I really like the concept and UI implementation.

Feedback, as someone who has never used CSS - I found it very tricky to even duplicate the tutorial:

* Selectors are sensitive to leading and trailing spaces, and the selectors given in the tute aren't what's needed (and see BTW below).
* I often get "API call failed: Internal Server Error", which indicates a problem with the selector but not what it is, and ATM the service is often "unavailable" :)
* It's slow switching back and forth between "edit" and "test" - why not include testing on the same page, like HN comment edits (textarea + rendered result)?
* When an attribute is removed, it remains in the JSON (code e.g. http://apify.heroku.com/resources/4fcb26d7a06a160001000024).
* It takes a long time (30s, 1min) to get a result.

I hate to say it, but it's like my experience with Ruby: it takes so much time and effort to get the tool to basically work that I've used up all my enthusiasm/gumption and have none left for the project I had in mind. But much of this is because of the current traffic spike, my ignorance of CSS, and minor polishing/bugs that can be fixed in version 1.1 - as I said, I really like the idea and UI.
But a deeper question: why a service, instead of a library? It's cross-language, but has an extra dependency (the service), an extra network hop, and processing from many users converging on one point. It's interesting to me, because the world seems to be moving towards services, and this would logically include components that formerly would have been libraries. Will this happen? What are the pros and cons? Will Amazon etc. provide free computation for users of open-source components, analogous to open-source libraries? Interesting.
After playing around with it some more, it's working. Question: are you going to introduce regex or any other rules, or even some helper functions, to further process the API output? That would allow us to drill down even further. It is really great for bootstrapping and getting some live data quickly. Kudos!
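By regex rules I mean something like "apply this pattern to the scraped text and keep the first group" - just an illustration of the idea, not a feature Apify has today:

    import re

    def postprocess(value, pattern):
        # e.g. pull "128" out of "128 points" with the pattern r"(\d+)"
        match = re.search(pattern, value)
        return match.group(1) if match else value

    # postprocess("128 points", r"(\d+)")  ->  "128"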
This might be a stupid question and perhaps I didn't look hard enough on your website, but is this open source? I didn't see a GitHub link anywhere. I'm specifically curious as to how you routed Noko or whatever scraping library you're using to do its thing.
* Ajax responses are often JSON content. How do I apply a CSS/XPath selector on it?
* I guess #! URLs could be transformed into _escaped_fragment_ URLs [1]
* I know Twitter has an API. It was just an example. Maybe this example [2] would be more relevant (content could also be fetched with an _escaped_fragment_ URL).
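For the _escaped_fragment_ point above, the transformation is mechanical - a rough sketch (the example URL is made up, and the escaping rules in the AJAX-crawling spec are a bit stricter than this):

    try:
        from urllib.parse import quote      # Python 3
    except ImportError:
        from urllib import quote            # Python 2

    def to_escaped_fragment(url):
        # http://example.com/#!/foo/bar -> http://example.com/?_escaped_fragment_=/foo/bar
        if "#!" not in url:
            return url
        base, fragment = url.split("#!", 1)
        sep = "&" if "?" in base else "?"
        return base + sep + "_escaped_fragment_=" + quote(fragment)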
You need to specify a unique index for the attribute - you specified it as "tweet", but not how to scrape it. The error messages are not informative at this stage; it might be something else too.
I tried to fix your example but couldn't get it to work (I tried //span@data-time as the XPath) - what is the unique index of a tweet?
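Worth noting for anyone else trying: attribute selection in XPath needs a slash before the @, so the valid form is //span/@data-time. A quick way to test such an expression outside Apify - the Twitter URL and data-time attribute here are just this thread's example and may well return nothing without the _escaped_fragment_ trick mentioned above:

    import requests
    from lxml import html

    page = html.fromstring(requests.get("https://twitter.com/example").text)
    # Note the "/@": //span@data-time is not valid XPath, //span/@data-time is.
    timestamps = page.xpath("//span/@data-time")
    print(timestamps)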
Did anyone ever make an API that could read a user's upvoted/saved articles from HN? It would require some kind of login credentials, as the data is not public.
Simply record the HTTP requests that happen when you access that page. You probably have to pass the Cookie header along for the credentials to work.
But it's just HTTP, so it's basically already an API.
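In code that's roughly the following - the /saved path and the "user" cookie name are from memory and may not be exact, and you'd copy the cookie value out of a browser session where you're already logged in:

    import requests

    # Copy the session cookie from a logged-in browser.
    cookies = {"user": "<your HN session cookie value>"}

    # The saved-stories page is only visible to the logged-in account.
    resp = requests.get("https://news.ycombinator.com/saved?id=yourusername",
                        cookies=cookies)
    print(resp.status_code, len(resp.text))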