
This looks interesting. What I don't understand is: how was this implemented without a server relay? I'm no expert in WebRTC (or P2P, for that matter), but I always assumed there needs to be a central location for users to exchange their addresses, and only then can a P2P connection be established. That must be the case here as well, right? Or am I mistaken?


That is exactly the case: there is a server in the background responsible for maintaining sessions and setting up the WebRTC connections (the ICE handshakes). Once all clients are connected, messages are sent P2P.
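For anyone curious what that split looks like in practice, here's a minimal client-side sketch; the WebSocket endpoint and message shapes are hypothetical illustrations, not this project's actual protocol:

  // Hypothetical signaling sketch: the server only relays offers, answers,
  // and ICE candidates; once connected, application data flows peer-to-peer.
  const signaling = new WebSocket("wss://example.com/signal"); // assumed endpoint
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // The data channel carries the actual messages once the P2P link is up.
  const channel = pc.createDataChannel("chat");
  channel.onmessage = (e) => console.log("peer says:", e.data);

  // Locally gathered ICE candidates are forwarded to the peer via the server.
  pc.onicecandidate = (e) => {
    if (e.candidate) {
      signaling.send(JSON.stringify({ type: "candidate", candidate: e.candidate }));
    }
  };

  signaling.onmessage = async (e) => {
    const msg = JSON.parse(e.data);
    if (msg.type === "offer") {
      await pc.setRemoteDescription(msg.offer);
      const answer = await pc.createAnswer();
      await pc.setLocalDescription(answer);
      signaling.send(JSON.stringify({ type: "answer", answer }));
    } else if (msg.type === "answer") {
      await pc.setRemoteDescription(msg.answer);
    } else if (msg.type === "candidate") {
      await pc.addIceCandidate(msg.candidate);
    }
  };

  // Only the initiating peer runs this part.
  async function makeOffer() {
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    signaling.send(JSON.stringify({ type: "offer", offer }));
  }

Once the server has relayed the handshake, it drops out of the data path entirely.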


I am working on a standalone CLI tool that does analytics; you can find it here: https://github.com/nafey/minimalytics

Motivation for the project:

> This project was born out of the need for a lightweight analytics tool to track internal services on a resource-constrained VPS. Most SaaS analytics products either lack the scalability or exceed their free tier limits when tracking millions of events per month. Minimalytics addresses this gap by offering a minimalist, high-performance solution for resource-constrained environments.

I recently did a Show HN which you can find on my profile.


Thank you! Copying and processing the sqlite file is always an option, but I agree something like a CSV export would be a good feature to add.
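For anyone who wants that today, a rough sketch of reading the SQLite file directly; note that the database filename, table, and column names below are hypothetical placeholders, not Minimalytics' actual schema:

  // Hypothetical export sketch using better-sqlite3; the filename, table,
  // and columns are placeholders, not Minimalytics' real schema.
  import Database from "better-sqlite3";
  import { writeFileSync } from "node:fs";

  const db = new Database("minimalytics.db", { readonly: true });
  const rows = db
    .prepare("SELECT name, count, timestamp FROM events")
    .all() as { name: string; count: number; timestamp: string }[];

  // Naive CSV serialization (assumes fields contain no commas or quotes).
  const csv = ["name,count,timestamp"]
    .concat(rows.map((r) => `${r.name},${r.count},${r.timestamp}`))
    .join("\n");

  writeFileSync("events.csv", csv);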


Wow. It looks like the two of us were responding to the same underlying needs. Great to see your project.


This is an interesting use case. I can definitely see how a colored cell in a grid can serve as a visual indicator for monitoring the health of infrastructure. Thanks for sharing this.


Thank you! I already had experience working in React so I decided to use it for the first release.

I am thinking about switching to HTMX to further reduce the size, simplify the code, and improve performance.


Thank you, that's a good suggestion. I'll do that.


Just adding a static website with screenshots of the product would be a great upgrade for users who want to learn more before signing in.


They want more users, not more utilization. Utilization is the effect of users, not its cause. They are happy to have utilization increase as long as they keep getting more users. If utilization increases while the number of users stays constant because people (like OP) are running CPU-intensive programs, then that is not in their interest.


I have always wondered why breaking the timestamp (in UTC) and the timezone into two separate data points and storing both is not the accepted solution. It seems like the cleanest approach to me.


You can't accurately convert future local times to UTC timestamps yet, because that conversion changes whenever timezone rules change.

Let's say we schedule a meeting for next year at 15:00 local time in SF. You store that as a UTC timestamp of 2025-08-24 22:00 plus the America/Los_Angeles timezone. Now imagine California decides to abolish daylight saving time and stay on UTC-8 next year. Our meeting time, agreed upon in local time, is now actually at 23:00 UTC.
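A sketch of the representation that avoids this: store the wall-clock time plus the zone, and resolve to UTC only at read time (Luxon is used here purely for illustration):

  import { DateTime } from "luxon";

  // Store what was actually agreed upon: a wall-clock time and a zone,
  // not a precomputed UTC instant.
  const stored = { local: "2025-08-24T15:00", zone: "America/Los_Angeles" };

  // Resolve to UTC at read time, so whatever tz rules are current apply.
  const utc = DateTime.fromISO(stored.local, { zone: stored.zone }).toUTC();
  console.log(utc.toISO()); // 22:00Z under today's rules (PDT, UTC-7);
  // it would resolve to 23:00Z if California dropped DST and stayed on UTC-8.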


Wow, thanks for sharing this; that's certainly a use case not covered by the scheme I proposed. I imagine handling it would require versioning the timezone data (the IANA tz database is in fact released in versions like 2024a, 2024b) so we could translate our America/Los_Angeles v1.0 timestamp to America/Los_Angeles v2.0.


Two different implementations might derive two different local times from that, e.g. by not being aware of changed DST/timezone policies. Hence the recommendation to separate user/clock/calendar time (which must be exact for a human) from absolute/relative timestamps (which must be exact for a computer).
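The TC39 Temporal API encodes exactly this separation in its types; a small sketch using the @js-temporal/polyfill package:

  import { Temporal } from "@js-temporal/polyfill";

  // Calendar/clock time: "15:00 on 2025-08-24", exact for a human,
  // with no fixed position on the global timeline.
  const wall = Temporal.PlainDateTime.from("2025-08-24T15:00");

  // Absolute time: an exact point on the timeline, exact for a computer.
  const instant = Temporal.Instant.from("2025-08-24T22:00:00Z");

  // Converting between the two requires naming a zone, and the result
  // depends on whatever tz rules are in force when the conversion runs.
  const resolved = wall.toZonedDateTime("America/Los_Angeles").toInstant();
  console.log(resolved.equals(instant)); // true under today's rules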


From my experience, it certainly is. It makes it easy to keep events in sequence while still recovering local time. When daylight saving time hits you can still calculate correctly, and you can quickly work out real hours worked or drive time for freight moving across time zones, to follow the hours-of-service rules.

