From One RSS Feed Entry to 186850 Hits (susam.net)
54 points by susam 7 months ago | 23 comments



RSS isn’t dead and the indie web still exists. It’s just overshadowed by the gross annoying advertiser-owned corporate web that gets all the attention.

If everyone who complained that RSS is dead deleted their Facebook and Instagram and Gmail and Twitter, the problem would be solved in a week.


Deleting the services where my friends and communities are these days isn't going to bring RSS back to its glory days.

Not everyone has to use RSS, as long as the services and blogs I'm interested in still offer RSS and new RSS apps are being developed and maintained (and there are a lot of great ones on all platforms!).


Not only that, almost all RSS feeds only feed you a crumb of content and expect you to load their page for the full meal, which basically nullifies most of the benefit of centralizing your feeds in a single app / tab.

I understand why, though: very, very few people will directly pay for content, and ads inlined into a feed can't track engagement.


I use https://miniflux.app as my feed aggregator and it has a nice feature where you can toggle "Fetch original content" on a per-feed basis. This will scrape the content automatically in the background, and it'll show up in your feed reading apps just as if the feed were full-text.


You're right, axing all your connections at once will leave you an island. But we can do better.

I'm hosting some bare metal in a cheap colo that runs a few VMs with a Matrix home server, Jitsi and NextCloud. Slowly but surely I gave up all the platforms mentioned in your parent comment and made it easy for people to join. I have a techie friend who helps me evangelise to our mutual friend group, and we share the maintenance work and bills.

There are amazing online communities that build the software, and then more amazing communities that automate almost all the work away. Nowadays you can fire an Ansible script at a VPS and Bob's your uncle. The technical work is mostly done; we're there, we can do (edit: okay, almost) anything big tech can for very little money.

That was never the problem. The problem is the stickiness of the big platforms. They have the network effect going for them, their UX designers generally do a better job of making sure anyone can jump on almost instantly, and most of all, the big players have staying power. We can grow an organic internet. The tech is there and we can show our peers that a non-enshittified (un-enshittified?) world is possible. All we need to do now is stay in the game for the long haul.

Our little community is tiny and we don't need the growth curve SV has been chasing for the last two decades. We don't care about conquering the market. It just needs to be there. So if you host anything where you have the final say, just offer the option, and I'm pretty sure the people who care will form a community around it. It's working for us.


> The technical work is mostly done

But only for very technical users who want to invest the money, time and energy into running things themselves. People also usually don't think about what happens if, from one day to the next, they're no longer able to maintain it.

Personally, I wouldn't want to be responsible for data my friends put into my self-hosted NextCloud instance.


>But only for very technical users who want to invest the money, time and energy into running things themselves. People also usually don't think about what happens if, from one day to the next, they're no longer able to maintain it.

All true. It is a burden sometimes; however, I am a sysadmin by trade, and the scale of our personal operation doesn't even register compared to what I do daily. It takes a bit of planning and continual maintenance, but I feel we (as in myself, you and the HN audience) are in a unique position to make the world a little better this way. My employment has lost most of its meaning to KPIs, regulation paperwork and compliance. This gives a little meaning to what I do.

>Personally, I wouldn't want to be responsible for data my friends put into my self-hosted NextCloud instance.

We have decent backups set up, and if anyone asks I'm happy to walk them through a personal backup strategy. I see NextCloud as a syncing service and people can use it as such, with that expectation going in. If you keep the scale small enough you can be clear about that on a personal level.


What is dead may never die! RSS is only resting. One of these days it will muscle up to those bars and voom.

I support you 100%. Could you, erm, point me towards this indie web?


Another one worth checking out: https://search.marginalia.nu/


Shameless plug: https://peopleandblogs.com

People are still posting on their own websites and the independent open web is still alive and doing fine.


I’m actually working on a project right now that is the perfect answer to this question.

In the meantime, start with this:

https://blogs.hn/



I don’t see contact info on your HN profile or GitHub profile. Please contact me, I am working on a similar project to the Internet Places Database and would like to collaborate.


Aside from what others provided: https://kagi.com/smallweb


Speaking of RSS, and this feed specifically, it is the only one (of hundreds that I subscribe to) that periodically spams a set of old posts into my "fresh" feed. Some backend change that causes unique identifiers or something to change?

TT-RSS is my client.


Hi there! Sorry about the old posts reappearing in your feed reader! This is an issue I observed too. It was caused by a redesign of my website, which led to the feed entries being regenerated with new <guid> values. Perhaps I should generate these <guid> values from the post slugs, or some such attribute of each post that does not change whenever I redesign my website.
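A minimal sketch of that idea in shell, assuming a hypothetical post slug (the slug below is illustrative, not an actual post on the site): hashing the slug yields a <guid> that stays stable across redesigns.

```shell
# Derive a stable <guid> from the post slug, so feed regeneration
# during a redesign does not change the identifier.
slug="2024-01-01-example-post"   # hypothetical slug
guid="urn:sha1:$(printf '%s' "$slug" | sha1sum | cut -d' ' -f1)"
printf '<guid isPermaLink="false">%s</guid>\n' "$guid"
```

Since the hash depends only on the slug, re-running the feed generator always emits the same <guid> for the same post.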

By the way, I have no intention of disturbing the current design anymore, so this issue should not occur again.


It's a common enough problem that I put a dedupe feature (based on titles) in my own RSS reader Temboz (www.temboz.com).
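A toy illustration of title-based dedupe (not Temboz's actual implementation), assuming one title per line: awk's seen[] array drops repeats.

```shell
# Keep only the first occurrence of each title.
printf '%s\n' 'Post A' 'Post B' 'Post A' | awk '!seen[$0]++'
# prints:
# Post A
# Post B
```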


How does the author know their website has hundreds of subscribers? AFAIK it's not possible to identify subscribers to RSS feeds, and counting hits won't help. Am I missing something here?


Hi! Some feed aggregators include the subscriber count in the User-Agent header, so I can pick these counts out of the access logs and add them up. This is how the logs look:

  [14/Jun/2024:00:03:46 +0000] "GET /feed.xml HTTP/1.1" 304 0 "-" "Feedly/1.0 (+http://www.feedly.com/fetcher.html; 87 subscribers; )"
  [14/Jun/2024:00:06:29 +0000] "GET /feed.xml HTTP/1.1" 304 0 "-" "Feedly/1.0 (+http://www.feedly.com/fetcher.html; 34 subscribers; )"
  [14/Jun/2024:00:09:31 +0000] "GET /feed.xml HTTP/1.1" 304 0 "-" "Feedly/1.0 (+http://www.feedly.com/fetcher.html; 31 subscribers; )"
  [14/Jun/2024:00:16:36 +0000] "GET /feed.xml HTTP/1.1" 304 0 "-" "Feedbin feed-id:2815708 - 3 subscribers"
  [14/Jun/2024:00:29:42 +0000] "GET /feed.xml HTTP/1.1" 304 0 "-" "Feedbin feed-id:2330316 - 3 subscribers"
  [14/Jun/2024:00:40:58 +0000] "GET /feed.xml HTTP/1.1" 304 0 "-" "Feedbin feed-id:1714691 - 8 subscribers"
  [14/Jun/2024:01:21:01 +0000] "GET /feed.xml HTTP/1.1" 200 188077 "-" "Mozilla/5.0 (compatible; BazQux/2.4; +https://bazqux.com/fetcher; 5 subscribers)"
  [14/Jun/2024:01:44:21 +0000] "GET /feed.xml HTTP/1.1" 304 0 "https://susam.net/" "Inoreader/1.0 (+http://www.inoreader.com/feed-fetcher; 24 subscribers; )"
Picking a few days of logs where the subscriber count has not changed much, I get a rough estimate of the total count of subscribers reported by the feed readers like this:

  $ for i in 1 2 3 4 5; do echo $(head -n 1 access.log.$i | grep -o '../.../....') $(awk -F'"' '{print $6}' access.log.$i | sort -u | grep -o '[0-9]* subscribers' | awk '{s += $1} END {print s}'); done 
  13/Jun/2024 335
  12/Jun/2024 335
  11/Jun/2024 336
  10/Jun/2024 334
  09/Jun/2024 337
In case anyone is wondering why we see multiple entries for Feedly and Feedbin in the first log snippet, that's because in an older design of my website, I had multiple sections each serving its own feed at paths like /blog/feed.xml, /maze/feed.xml, etc. Later I consolidated all of them into a unified feed at /feed.xml. So the feed readers still hit the old feed URLs and then get redirected to the unified feed URL.
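For anyone who wants to replicate the core extraction step without the surrounding loop, here is a hedged sketch using two sample User-Agent strings modeled on the log snippet above:

```shell
# Sum the "N subscribers" counts reported in User-Agent strings.
printf '%s\n' \
  'Feedly/1.0 (+http://www.feedly.com/fetcher.html; 87 subscribers; )' \
  'Feedbin feed-id:2815708 - 3 subscribers' |
  grep -o '[0-9]* subscribers' | awk '{s += $1} END {print s}'
# prints: 90
```

In the real pipeline each unique User-Agent line is counted once per day (the `sort -u` in the loop above), so one aggregator polling repeatedly doesn't inflate the total.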


You can get a rough estimate based on unique IPs hitting the RSS feed. Moreover, some of the online feed readers report the number of subscribers of your feed as part of their User-Agent. An example from my blog logs: `"Feedbin feed-id:2688376 - 9 subscribers"`
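For the unique-IP approach, a rough sketch (assuming common/combined log format with the client IP as the first field; the sample log lines below are made up):

```shell
# Count distinct client IPs that requested the feed.
printf '%s\n' \
  '203.0.113.5 - - [14/Jun/2024:00:03:46 +0000] "GET /feed.xml HTTP/1.1" 304 0' \
  '203.0.113.5 - - [14/Jun/2024:06:10:02 +0000] "GET /feed.xml HTTP/1.1" 304 0' \
  '198.51.100.7 - - [14/Jun/2024:07:00:00 +0000] "GET /feed.xml HTTP/1.1" 200 188077' |
  awk '$7 == "/feed.xml" {print $1}' | sort -u | wc -l
```

This undercounts hosted readers (one IP can represent thousands of users) and overcounts clients on dynamic IPs, so it's only a ballpark.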


Some services, such as Feedly, show the subscriber count in the User-Agent string.


Interesting, didn't know. Will check that!


In my own logs, the ones that show are Feedly, Inoreader, Newsblur, Feedbin, The Old Reader, and a few small/personal ones.

Of course, they only show the subscriber count for their own platform. And then you can also pool together all the separate requests fetching /feed/ and add it all up.



