When talking about checks on the order of twice a minute, curl is probably the right approach. You can/should still do a full check, but that can be done at a lower frequency.
Not sure why you're getting downvoted, personally I agree.
For example:
- a frequent/simple check dealing directly (on the internal network) with the webserver ("does it respond, what's the raw response time, etc."). This is where I would definitely use curl.
- another, less frequent test that also exercises DNS and the external network.
- an end-to-end test (e.g. once every 10 minutes?) that also involves one or more real browsers (this would additionally catch, for example, revoked SSL certs).
=> All these metrics should be quite helpful for identifying a problem, or at least for narrowing down the area that is causing it.
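The first tier (the cheap internal check) can be as little as timing one HTTP request, much like `curl -w '%{time_total}'`. A minimal Python sketch of that idea; the URL and timeout here are illustrative assumptions, not anyone's actual setup:

```python
# Minimal sketch of the frequent/simple check: one GET against the
# webserver, recording the status code and raw response time.
import time
import urllib.request

def check(url: str, timeout: float = 5.0) -> tuple[int, float]:
    """Return (HTTP status, elapsed seconds) for a single request."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # drain the body so timing covers the full response
        return resp.status, time.monotonic() - start

# e.g. status, elapsed = check("http://localhost:8080/")  # URL is an assumption
```

Running this every 30 seconds against the webserver directly keeps DNS and external-network variance out of the numbers, which is exactly why it pairs well with the slower external tiers.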
Yep, for sure. Fastmail still uses a once-every-10-minutes frequency for the full end-to-end "can log into the website, compose an email, receive the email, trigger an automated background fetch, receive that email too" tests, though the "can connect to service" tests run much more often.
To be fair, my page load checks aren't just "is the site up and responding 200" but also include page performance anomaly detection and the like. So I want/need to see performance minute by minute to detect fairly quickly when the entire page load has degraded. A single slow minute is OK, but five consecutive minutes slower than normal means you have an issue. I feel that if you're doing checks every few minutes, your data won't be as good as checking every 30 seconds. It takes way more resources but honestly, I think it's the future of monitoring. Also, having multiple types of ways of calling a site via HTTP to monitor it is way more complex.
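The "one slow minute is OK, five in a row is an issue" rule above can be sketched as a tiny consecutive-anomaly detector. The baseline, slowness factor, and window size below are illustrative assumptions, not the commenter's actual thresholds:

```python
# Sketch of the rule described above: tolerate isolated slow samples,
# but alert when `window` consecutive samples exceed the threshold.
from collections import deque

class SlownessDetector:
    def __init__(self, baseline_s: float, factor: float = 2.0, window: int = 5):
        self.threshold = baseline_s * factor  # "slower than normal" (assumed: 2x baseline)
        self.window = window
        self.recent = deque(maxlen=window)    # rolling flags for the last N samples

    def observe(self, load_time_s: float) -> bool:
        """Record one page-load time; return True if the last
        `window` samples were all over the threshold."""
        self.recent.append(load_time_s > self.threshold)
        return len(self.recent) == self.window and all(self.recent)

d = SlownessDetector(baseline_s=1.0)
samples = [0.9, 3.0, 1.1, 3.0, 3.1, 2.9, 3.2, 3.3]  # one isolated blip, then sustained slowness
alerts = [d.observe(s) for s in samples]
print(alerts)  # only the last sample completes 5 slow readings in a row
```

The isolated 3.0s blip at sample two never fires; only the fifth consecutive slow reading does, which is the whole point of sampling minutely instead of every few minutes.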
The main offer is order monitoring, but I am in the middle of creating a page-load-monitoring-only offer for others, since I think that service by itself is super useful.