
Suppose you track all client-side errors. Some portion of your users run a blocker that blocks an analytics tool you use, and every hit generates an error. You could suppress that error if you wanted to, but why not keep it and use the graph in Sentry to track how often it's happening?
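A minimal sketch of that idea with the @sentry/browser SDK; the DSN, the analytics URL, and the loadAnalytics helper are all placeholders/assumptions, not anyone's real setup:

    import * as Sentry from "@sentry/browser";

    Sentry.init({ dsn: "https://examplePublicKey@o0.ingest.sentry.io/0" });

    // Hypothetical loader for a third-party analytics script; a content
    // blocker typically cancels the request, firing the error handler.
    function loadAnalytics(src) {
      return new Promise((resolve, reject) => {
        const s = document.createElement("script");
        s.src = src;
        s.onload = resolve;
        s.onerror = () => reject(new Error("analytics script blocked: " + src));
        document.head.appendChild(s);
      });
    }

    // Instead of suppressing the failure, report it so Sentry's event
    // graph shows how often visitors are running a blocker.
    loadAnalytics("https://analytics.example.com/tracker.js")
      .catch((err) => Sentry.captureException(err));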



One really big thing I learned from doing this on a public website: there are a ton of ISPs, browser extensions, antivirus products, and malware that inject really horrible JavaScript into every page. If you run Sentry as a front-end error collector you will get a ton of bizarre-seeming errors for other people's code and, if you're really lucky, find that some of them used the same variable or function names as your code.


You can set it up so that Sentry only tracks errors on a subset of pages. One approach I've seen is to have 100% tracking on your canary deployments or for a few minutes/hours after a deployment, and then use the tried-and-true `if (rand() < .2) { sentry.log(...); }` approach to limit the errors you see. It does mean that you may not receive certain critical errors, but if you are trying to stay under their billing tier thresholds or just trying to rate-limit yourself, it's a reasonable trade-off (especially if you are logging 100% on canaries or for a period after rollouts). You can also customize the error rate based on page so that, for example, you get 100% of errors on checkout but only 10% of errors on the homepage.
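A rough sketch of that per-page sampling using the @sentry/browser SDK's beforeSend hook; the paths, the rate table, and the 20% default are purely illustrative assumptions:

    import * as Sentry from "@sentry/browser";

    // Illustrative sampling rates; real values depend on traffic and budget.
    const SAMPLE_RATES = {
      "/checkout": 1.0, // keep every error on the money path
      "/": 0.1,         // only 10% of homepage errors
    };

    Sentry.init({
      dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
      beforeSend(event) {
        const rate = SAMPLE_RATES[window.location.pathname] ?? 0.2;
        // Returning null drops the event client-side before it is sent.
        return Math.random() < rate ? event : null;
      },
    });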


I was self-hosting, so it was a question of handling the traffic rather than staying under quotas. I'm loath to sample errors since the outliers often seem to be the most interesting ones, but at some point it gets expensive not to.


Is there a good way of preventing unauthorized JavaScript injections? Some type of onReady cleanup script that can identify what is supposed to be there and then strip out everything else?


HTTPS


Sadly, not completely. A surprising number of users have ISPs that insist on installing their own root cert so they can inject this nonsense.


Or just local antivirus. There are a shocking number of user-level firewalls that break the web really badly.


Why would a standard browser trust that certificate?


It's installed by the ISP's setup program, which they tell everyone is mandatory (otherwise they won't get the adware kickbacks) and which the techs are discouraged from skipping.


How often does that even happen? Many people just get a router (with modem) in the mail and connect their devices to it; no software gets installed.


100% of Comcast and Verizon installs in my experience. RCN offered it but wasn't pushy, and they seem to have stopped.


facepalm


Not really. That stops an ISP, but local malware, antivirus, etc. can act as browser extensions.


You can configure Sentry's JavaScript client library to only collect errors that originate from your script files (even rejecting inlined code).

I wrote more about this and other techniques for battling noise here: https://blog.sentry.io/2017/03/27/tips-for-reducing-javascri...
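For reference, a sketch of that configuration with a recent @sentry/browser SDK; the domain regex is a placeholder, and older Raven.js-era SDKs called the option whitelistUrls rather than allowUrls:

    import * as Sentry from "@sentry/browser";

    Sentry.init({
      dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
      // Only report errors whose frames come from our own bundles; injected
      // third-party code (extensions, ISP ads, antivirus) is dropped.
      allowUrls: [/https:\/\/static\.example\.com\/js\//],
    });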


Yep, and thank you for the hard work on that over the years. For me, HTTPS + CSP did a good job of denoising the traffic, but it was still memorable just how broken the web environment is, especially internationally (not just the Great Firewall, either; I should have saved what looked like a tracker being injected into requests from an Iranian university).
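As an illustration of the CSP side, a sketch of a middleware that sets a restrictive script-src; Express and the host names here are assumptions, not anything from the original setup:

    const express = require("express");
    const app = express();

    // Only allow scripts from our own origin and our static host; scripts
    // injected from any other source are refused by the browser.
    app.use((req, res, next) => {
      res.setHeader(
        "Content-Security-Policy",
        "script-src 'self' https://static.example.com; object-src 'none'; base-uri 'self'"
      );
      next();
    });

    app.listen(3000);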


I believe Sentry has support for reversible minification mapping (source maps). You could use that along with a PRNG-based (or maybe just magic-number-based) minification to drive 'hard to conflict' JS delivery. Unfortunately, I bet any script delivered that way looks a lot like malware. ;_;


Ah, okay. It just seems wasteful to send the whole stack trace, local vars, etc., for that. But, yes, I can see people doing what you're describing.




