Hey, I want to address why this extension is different from other scrapers.
This is for ad hoc generation of EPUBs from websites that don't scrape well with traditional scrapers (think standard request-based command-line scripts, or other Chrome extensions that scrape based on open tabs/windows), for a few reasons:
1. Command-line scrapers and other extensions usually have a predefined set of sites they work for; this one isn't limited to those sites
2. Or they require nontrivial configuration and/or code
3. Some sites use JavaScript to dynamically generate or retrieve the text, in which case you need the browser to run the JS - this was the biggest gap for me (see the sketch after this list)
4. This one runs in the browser, so it may be less likely to be detected and blocked
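To illustrate point 3, here's a minimal TypeScript sketch (the selector is hypothetical, and this isn't code from the extension) of the gap between a plain request and the browser-rendered DOM:

```typescript
// A plain HTTP fetch only sees the HTML the server sends; on JS-heavy sites
// that is often just an empty application shell with no chapter text in it.
async function fetchStaticHtml(url: string): Promise<string> {
  const res = await fetch(url);
  return res.text();
}

// Running inside the page itself (e.g. from an extension content script),
// the same lookup sees the text *after* the site's scripts have rendered it.
function readRenderedText(selector: string): string {
  const node = document.querySelector(selector);
  return node?.textContent?.trim() ?? "";
}

// Hypothetical usage: "#chapter-content" stands in for whatever element
// you would click to select in the extension's UI.
// readRenderedText("#chapter-content");
```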
I also don't intend this scraper to be robust or run repeatedly as a scheduled background job; that's why there's a UI for selecting the key elements to scrape. It's meant to be general enough that you can scrape a site relatively easily with a few mouse clicks, without needing a per-site configuration.
If the site you're scraping is already handled by other programs/extensions, this one won't perform better, since those are specifically configured for their sites. Otherwise, this extension gives you a tool to scrape something once or twice without spending much time coding or configuring.
I don't find myself sticking to the same site a lot, so I wrote this.
Having written one of these myself, I think the interesting thing about this one is really the UI for iterating on content extraction from an arbitrary site. A full GUI for working through the extraction is much more flexible than the norm.
If this can handle those sites where every section is behind an accordion that must be expanded (and especially where it collapses other sections when you expand one), then this is going to be awesome.
It extracts the main content using Readability by default (you can configure it to use something else). Logins would depend on how you're parsing. It has two modes: it either browses to the page inside the window (for non-refreshing pages) or retrieves it in the background using fetch.
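For the background mode, a rough sketch of what that pipeline can look like (assuming the @mozilla/readability package; this is not the extension's actual code):

```typescript
import { Readability } from "@mozilla/readability";

// Background-fetch mode sketch: pull the raw HTML, parse it into a detached
// document, and let Readability pick out the main article content.
async function extractArticle(url: string): Promise<{ title: string; html: string } | null> {
  const res = await fetch(url);
  const raw = await res.text();
  const doc = new DOMParser().parseFromString(raw, "text/html");
  const article = new Readability(doc).parse();
  if (!article) return null; // Readability found no main content block
  return { title: article.title ?? url, html: article.content ?? "" };
}
```

The in-window mode would skip the fetch/DOMParser step and run the extraction against the live document instead.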
Heh, I'm currently creating something very similar.
A web scraper for blogs and mainly web novels, plus an EPUB parser that persists the data to a database along with categories and tags, and a companion PWA for offline reading that tracks reading progress on various stories and lets me keep multiple versions of the same story (web novels and published EPUBs).
Instead of EPUB, content gets cached as text files (Gopher), Gemini files (Gemini), and HTML+images (web pages). You can browse the hierarchy from ~/.cache/offpunk or directly from Offpunk.
With the "tour" function, forget about doomscrolling. You'll read all the articles in text mode sequentially until you finish down.
Fanfiction.net is trivial... apart from it having Cloudflare bot blocking turned up to aggressive levels. I've not seen an approach that works, other than using headless browsers to fetch the content.
The issue is most likely Cloudflare blocking most of the best scraping methods. If the site can be pulled down with e.g. wget or curl without a bunch of options that you definitely aren't writing by hand, pandoc can be used to make an EPUB directly.
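As a concrete sketch of that happy path (assuming pandoc is installed and Node 18+ for the built-in fetch; the URL and file names are placeholders):

```typescript
import { execFile } from "node:child_process";
import { writeFile } from "node:fs/promises";
import { promisify } from "node:util";

const run = promisify(execFile);

// The "no JS, no bot blocking" case: grab the HTML with a plain request,
// then let pandoc turn it into an EPUB.
async function pageToEpub(url: string, out: string): Promise<void> {
  const html = await (await fetch(url)).text();
  await writeFile("page.html", html);
  await run("pandoc", ["page.html", "-f", "html", "-o", out, "--metadata", `title=${url}`]);
}

pageToEpub("https://example.com/article", "article.epub").catch(console.error);
```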
E-reader makers, take note. This is a cool feature that should be built in, or at least exposed via an API for getting content onto the Kindle/etc. Or even a "send to Kindle" email address that can accept URLs too.
I wonder if this would have a positive or negative effect on profits.
On the one hand, they'd be adding a massive amount of free content to a platform where they make money because people pay to consume content.
On the other hand, it might actually increase sales simply because I'd spend more time using it, which would presumably result in more book purchases too.
(Also Kindle store is already full of $0 public domain stuff, so they already don't seem too bothered by that possibility.)
Huh, I didn't know that. I guess I never assumed they would bother with it; I think of a published work on Kindle like a product page on Amazon, so it doesn't seem to make sense to have $0 items.
Are they offered by Amazon itself, or do third parties take the time to set that up?
I'll jump on the bandwagon here to shamelessly plug my own little spin on a Readability-based EPUB generator: it's a self-hosted OPDS server offering feeds of articles from HN, Tildes, and Pocket, which are converted to EPUB on the fly (as soon as you try to fetch one). You can add/bookmark it in KOReader, which can run on most e-reader devices. It's simple to self-host (it's published as an image on Docker Hub and GHCR, or you can run it on Node directly).
My local instance just runs quietly on a Synology NAS; I like not having to interact with a computer to use it. Unlike the OP, it can't be used to compile many pages/URLs into a single EPUB, though.
For those interested in a simple-to-use command-line tool that accomplishes the same, I've had success with percollate - https://github.com/danburzo/percollate
If you can read it on a website, why not on an ebook reader?
If you start selling the resulting files, now that would be a copyright violation. German law has a right to create a "Privatkopie", i.e. a private copy. I guess this is similar to fair use in US law?
Before cell service was as widespread as it is today, there were programs that would scrape web pages into ePUBs so you could read them later on your Palm Pilot. I used one every day during my commute. And the best part was that the articles ended. No mind-numbing infinite scroll.
When I switched to a "smart" phone (SonyEricsson m600c), I really missed it.