
I think the biggest distinction is between archiving platforms built primarily for authors and those built primarily around web crawling.

If you're an author (say, of a court decision) and you archive example.com/foo, Perma makes a fresh copy of example.com/foo as its own WACZ file, using a CPU-intensive headless browser, gives it a unique short URL, and files it in a folder tree for you. So you get a higher-quality capture than most crawls can afford, including a screenshot and PDF; you get a URL that's easy to cite in print; you can find your copy later; you get "temporal integrity" (replays can't pull in assets from other crawls, which is what produces Frankenstein playbacks); and you can independently respond to things like DMCA takedowns. It's all tuned to offer a great experience for that author.
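As a rough illustration (not Perma's actual pipeline), here's what a single-page, browser-based capture can look like with Playwright in Python. The URL and output names are placeholders, and a real archival tool would also record the underlying HTTP traffic into a WARC and package it as a WACZ:

    # Minimal sketch of an author-driven, single-page capture.
    # Not Perma's implementation; a real tool also records all HTTP
    # responses into a WARC and packages them as a WACZ for replay.
    # Requires: pip install playwright && playwright install chromium
    from playwright.sync_api import sync_playwright

    def capture(url: str, out_prefix: str) -> None:
        """Render url in headless Chromium; save a full-page screenshot and a PDF."""
        with sync_playwright() as p:
            browser = p.chromium.launch()             # the CPU-intensive headless browser
            page = browser.new_page()
            page.goto(url, wait_until="networkidle")  # let assets finish loading
            page.screenshot(path=f"{out_prefix}.png", full_page=True)
            page.pdf(path=f"{out_prefix}.pdf")        # PDF export is Chromium-only
            browser.close()

    capture("https://example.com/foo", "my-capture")  # placeholder arguments

Bundling everything a replay needs into one self-contained WACZ per capture is also what gives you the temporal integrity mentioned above: the replayer only ever resolves assets from that one file.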

IA is primarily tuned for preserving everything, regardless of whether the author cared to preserve it or not, through massive web crawls. That's often the better strategy -- most authors don't care as much as judges do about the long-term integrity of their citations.

This is what I'm getting at about the specific benefits of having multiple archives. It's not just redundancy; it's that each archive can serve different users better that way.


