
Syslog doesn't solve any of the problems outlined above - you're just trading a file for a Unix domain socket. And it introduces its own problems, such as sharing that socket with the containers (if you just mount /dev/log into the container, you're gonna have a bad time when the syslog daemon restarts). Log rotation also becomes a problem at higher volumes: want a 5-minute rotation? logrotate isn't much help there.

re: logging in protobuf/Thrift. Thrift is better suited to it due to its stream-friendliness. With protobufs you can do better performance-wise than JSON, but it tends to be a bit more annoying because you need to write a length header between messages. (At this point, I've considered logging directly into a `.tar` file with one file per request. I haven't ever done it, and wouldn't in production, but it's an intriguing thought to me.) With either format you have to deal with the fact that they're now binary logs: you lose the ability to use tools like grep & awk, or you need some program to cat the log back out as plain text.
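The length-header framing mentioned above can be sketched roughly like this (a real protobuf setup would pass `message.SerializeToString()` bytes in; the function names here are illustrative, not from any library):

```python
import io
import struct

def write_record(stream, payload: bytes) -> None:
    # Prefix each serialized message with a 4-byte big-endian length header,
    # so a reader can find message boundaries in the byte stream.
    stream.write(struct.pack(">I", len(payload)))
    stream.write(payload)

def read_records(stream):
    # Yield each length-delimited payload until EOF.
    while True:
        header = stream.read(4)
        if len(header) < 4:
            return
        (length,) = struct.unpack(">I", header)
        yield stream.read(length)

buf = io.BytesIO()
write_record(buf, b"first serialized message")
write_record(buf, b"second")
buf.seek(0)
records = list(read_records(buf))
```

Note that this is exactly the "annoying" part: grep and awk can't see past the binary framing, so every consumer needs a decoder like `read_records`.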

Logging to JSON, while it carries a bit more overhead, offers access to a lot of tooling that's lost with binary or unstructured logging: Athena, Snowflake, SQLite, Kafka/ksqlDB, jq, and it's incredibly ELK-friendly... the list goes on. And since JSON is usable without the message definitions, it can be much easier to work with.
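The usual shape is one JSON object per line ("ndjson"), which is what makes the jq/Athena-style tooling work. A minimal sketch (field names are illustrative):

```python
import io
import json

def log_event(stream, **fields):
    # One JSON object per line keeps the log line-oriented, so it stays
    # friendly to grep, jq, and anything that splits on newlines.
    stream.write(json.dumps(fields, sort_keys=True) + "\n")

log = io.StringIO()
log_event(log, level="info", path="/health", status=200)
log_event(log, level="error", path="/checkout", status=503)

# The jq equivalent of `jq 'select(.status >= 500)'`:
errors = [
    event
    for event in (json.loads(line) for line in log.getvalue().splitlines())
    if event["status"] >= 500
]
```

The same one-object-per-line file can be loaded into SQLite, queried by Athena, or shipped straight into Elasticsearch without anyone needing a schema file.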

All of these tradeoffs wildly depend on what it is that you're logging and what your needs are.



You can have syslog emit JSON.
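For example, rsyslog's property replacer can format each message as a JSON object via a template, something along these lines (template name and file path are illustrative; check your rsyslog version's docs before relying on the exact property options):

```
$template jsonfmt,"{\"timestamp\":\"%timereported:::date-rfc3339%\",\"host\":\"%hostname%\",\"program\":\"%programname%\",\"message\":%msg:::json%}\n"
*.* /var/log/all-json.log;jsonfmt
```

That said, this only changes the output format; it doesn't address the socket-sharing and rotation issues from the parent comment.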




