Feed readers are my learning project; I build one to learn each new language. I've built and rebuilt readers in VBScript, VB.NET, C#, PHP, and Python. PHP and Python have been the easiest since they have good parser libraries. For storage I've used SQL Server, MySQL, SQLite, and plain JSON flat files. I think I've built something like 10 variations. In the last few I've expanded beyond RSS to pull in Hacker News, Twitter, and an enhanced pull for Reddit feeds, though I'm not pulling Twitter currently because of some API changes that I haven't bothered to spend time on.
Helpful hint: if you need favicons for your reader, you can use Google's favicon service. I use it to grab and store the 16, 32, 48, and 64 pixel icons, with a monthly update ping to refresh them.
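A minimal sketch of that fetch, assuming Google's unofficial `s2/favicons` endpoint (which takes `domain` and `sz` query parameters); the `fetch_favicons` helper and its filename scheme are my own invention, not the commenter's actual code:

```python
from urllib.parse import urlencode
from urllib.request import urlretrieve

# Google's public (but unofficial) favicon endpoint.
GOOGLE_FAVICON = "https://www.google.com/s2/favicons"
SIZES = (16, 32, 48, 64)  # the four sizes stored above

def favicon_url(domain: str, size: int) -> str:
    """Build the favicon URL for a feed's domain at a given pixel size."""
    return f"{GOOGLE_FAVICON}?{urlencode({'domain': domain, 'sz': size})}"

def fetch_favicons(domain: str, dest_dir: str = ".") -> list[str]:
    """Download each size to dest_dir; suitable for a monthly update ping."""
    paths = []
    for size in SIZES:
        path = f"{dest_dir}/{domain}-{size}.png"
        urlretrieve(favicon_url(domain, size), path)
        paths.append(path)
    return paths
```

Since the endpoint is undocumented, it's worth caching the results locally (as the monthly ping does) rather than hotlinking it from your reader's pages.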
My current iteration is built in Python with a MySQL backend. It's set up in a river-of-news style, with an everything river, a river per feed, and generated topic bundles as well. The feed engine runs every 15 minutes, grabbing 40 feeds at a time, but the static site generator only runs every 6 hours to keep me from spending all my time reading news. Since I pull in Reddit feeds, I've found it's great for feed discovery.
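One way to sketch that 15-minute/40-feed scheduling is to always grab the feeds that have gone longest without a fetch. This is my guess at the approach, not the commenter's actual code; the `feeds` table schema is assumed, and an in-memory SQLite database stands in for the MySQL backend:

```python
import sqlite3
import time

def next_batch(conn, batch_size=40):
    """Return the feeds that have waited longest, and stamp them as fetched."""
    rows = conn.execute(
        "SELECT id, url FROM feeds ORDER BY last_fetched LIMIT ?",
        (batch_size,),
    ).fetchall()
    now = time.time()
    conn.executemany(
        "UPDATE feeds SET last_fetched = ? WHERE id = ?",
        [(now, feed_id) for feed_id, _ in rows],
    )
    conn.commit()
    return rows

# Demo: 100 feeds, each round of the 15-minute cron picks the 40 stalest.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feeds (id INTEGER PRIMARY KEY, url TEXT, last_fetched REAL)")
conn.executemany(
    "INSERT INTO feeds VALUES (?, ?, ?)",
    [(i, f"https://example.com/feed{i}.xml", i) for i in range(100)],
)
batch = next_batch(conn)   # feeds 0-39, the oldest timestamps
batch2 = next_batch(conn)  # feeds 40-79 on the next run
```

Stamping `last_fetched` inside the same transaction keeps a slow cron run from handing the same feeds to two overlapping workers.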
I missed getting everything from RSS. A couple of years ago I built my own engine that could bring in Twitter, Reddit, here, and a few other services that didn't have RSS feeds, and published the results as a river-of-news style flow with a few topic-specific tributaries. I haven't used it in years but have been thinking about restarting it.
I'm neutral on SQL but very big on the right tool for the job. My rule of thumb: if I'm querying data just to massage it and stuff it back into the database, I first try to do that as a stored procedure. I remember learning about windowing in SQL late in a project; redoing some Python functions as stored procedures really improved performance.