I'm a website 'publisher' for a non-profit that has zero advertising on its site. Our entire purpose for collecting analytics is to make the site work better for our users. Really. Folks like us may not be in the majority, but it's worth keeping in mind that "analytics = ad revenue optimization" is an over-generalization.
Of course analytics from 13 years ago doesn't help us optimize page load times. But it is extremely useful to notice that content that has seen steady, deep use for a decade suddenly doesn't. Then you know to take a closer look at the specific content. Perhaps you see that an external resource it depended on went offline, so you can fix it. Or perhaps you realize you need to reprioritize the site's navigation so folks can better find the stuff they're digging for (and stop steering them toward that resource).

We have users who engage over decades and content use patterns that play out over years, not months. Understanding those things informs changes that make the site better for users. Perhaps this is outside your world of experience, but that doesn't mean it isn't true. And yes, we also gather data to help optimize page load times...
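If it helps to make that concrete, here's a minimal sketch of the kind of check involved. Everything in it (the data shape, the thresholds, the page paths) is a made-up illustration, not our actual tooling:

```python
# Hypothetical sketch: flag long-steady pages whose recent traffic has
# suddenly dropped, so a human can go look at the specific content.
# Data shape and thresholds are assumptions for illustration.

def flag_sudden_drops(monthly_views, recent_months=3, drop_ratio=0.5):
    """monthly_views: dict mapping page path -> list of monthly view counts,
    oldest first. Returns pages whose recent average fell well below baseline."""
    flagged = []
    for path, views in monthly_views.items():
        if len(views) < recent_months * 4:
            continue  # not enough history to call anything "steady"
        baseline = views[:-recent_months]
        recent = views[-recent_months:]
        baseline_avg = sum(baseline) / len(baseline)
        recent_avg = sum(recent) / len(recent)
        if baseline_avg > 0 and recent_avg < drop_ratio * baseline_avg:
            flagged.append((path, baseline_avg, recent_avg))
    return flagged

views = {
    "/guides/old-archive": [400] * 20 + [60, 40, 30],  # broke recently
    "/guides/still-fine":  [300] * 23,                 # steady as ever
}
for path, was, now in flag_sudden_drops(views):
    print(f"{path}: averaged {was:.0f}/month, now {now:.0f}/month")
```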
Can you give some examples of changes that you made specifically to make the site work better for users, and how those were guided by analytics? I usually just do user interviews because building analytics feels like summoning a compliance nightmare for little actual impact.
We generally combine what we learn from interviews/usability testing with what we can learn from analytics. Analytics often highlights use patterns of the "we can definitely see that users are doing X, but we don't understand why" genre. Then we can craft testing/interviews that help us understand the why. So that's analytics helping us target our interviews/user testing.

It also works the other way. User testing indicates users will more often get to where they need to be with design A versus design B. But user testing is always contrived: users are in an "I'm being tested" mode, not an "I'm actually using the internet for my own purposes" mode. So it's hard to be sure they'll act the same way in vivo. With analytics you can look for users making the specific move your testing indicated they would. If they do, great. But if not, you know your user testing missed something or was otherwise off base.
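That last step doesn't need anything fancy. Here's a hedged sketch of the idea; the event-log format, event names, and function are all invented for illustration (a real setup would use whatever your analytics tool exports):

```python
# Hypothetical sketch of the "check it in vivo" step: given a stream of
# (session_id, event) pairs from real traffic, measure how often users who
# hit the redesigned page make the specific move user testing predicted.

def predicted_move_rate(events, start_event, predicted_event):
    """events: iterable of (session_id, event_name) in time order.
    Returns the fraction of sessions containing start_event that
    later contain predicted_event."""
    seen_start = set()
    made_move = set()
    for session, event in events:
        if event == start_event:
            seen_start.add(session)
        elif event == predicted_event and session in seen_start:
            made_move.add(session)
    return len(made_move) / len(seen_start) if seen_start else 0.0

# Illustrative log with made-up event names:
log = [
    ("s1", "view:new-landing"), ("s1", "click:find-a-program"),
    ("s2", "view:new-landing"), ("s2", "click:donate"),
    ("s3", "view:new-landing"), ("s3", "click:find-a-program"),
]
rate = predicted_move_rate(log, "view:new-landing", "click:find-a-program")
print(f"{rate:.0%} of sessions made the move testing predicted")  # 67%
```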
I've decided to stop or keep working on some things based on whether or not they got any traffic. I've also become aware that some pages were linked on Hacker News, Lobsters, or other sites, and reading those discussions has let me improve some things in the articles.
And just knowing that some people read what you write is nice. There's nothing wrong with a bit of validation (as long as you don't obsess over it), and it's a basic human need.
That's just for a blog; for a product, knowing "how many people actually use this?" is useful. I suspect that for some things the number is literally zero, but it can be hard to know for sure.
User interviews are great, but they're time-consuming to do well, and especially for small teams that's not always doable. It's also hard to catch things that are useful for just a small fraction of your users: "it's useful for 5%" means you need to do a lot of user interviews (and hope they don't forget to mention it!)
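To put a rough number on that: if a feature matters to 5% of users and you interview n people picked at random, the chance that at least one of them is in that 5% is 1 - 0.95^n. A quick back-of-the-envelope (assuming random sampling, and that the interviewee actually brings it up):

```python
# Back-of-the-envelope: how many random interviews before you likely talk
# to even one person who uses a feature that matters to 5% of users?
# Assumes random sampling and that the interviewee remembers to mention it.

p = 0.05  # fraction of users the feature matters to
for n in (5, 10, 20, 60):
    chance = 1 - (1 - p) ** n
    print(f"{n:>3} interviews -> {chance:.0%} chance of hearing about it")
# 5 interviews: ~23%; 20: ~64%; it takes ~60 to pass 95%.
```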