
No, I don't. Can you help clarify it for me?

When search engines crawl millions of sites, each holding a few MB of data on average, the cost is distributed globally across all those sites.

Extracting terabytes of index data from a single search engine's repository, by contrast, concentrates the entire cost on that one repository's bandwidth.

These are not symmetrical cost structures.
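
To put rough numbers on the asymmetry, here is a back-of-envelope sketch in Python; every figure is assumed purely for illustration:

    # Back-of-envelope comparison; all numbers are assumed for illustration.
    sites = 10_000_000          # sites a crawler visits
    mb_per_site = 3             # average data served per site, in MB

    total_tb = sites * mb_per_site / 1_000_000
    print(f"Total volume either way: {total_tb:.0f} TB")

    # Crawling: each site serves only its own few MB.
    print(f"Cost per crawled site: {mb_per_site} MB")

    # Extraction: one repository serves the entire volume by itself.
    print(f"Cost for the single repository: {total_tb:.0f} TB")

The total bytes moved are the same in both cases; what differs is whether they are spread over ten million hosts or paid by one.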



Our git repository went down when crawlers decided to index it


But probably not Google. The Google crawler is very careful and backs off as soon as it encounters elevated error rates. Bing appears to do the same.
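
A minimal sketch of that kind of back-off logic, assuming a simple rolling error-rate window; the class name, thresholds, and delays here are illustrative assumptions, not Googlebot's or Bingbot's actual implementation:

    import time

    # Sketch of error-rate-based crawl throttling; all thresholds are
    # assumed values, not any real crawler's parameters.
    class PoliteCrawler:
        def __init__(self, error_threshold=0.1, window=50):
            self.results = []           # recent outcomes (True = server error)
            self.window = window
            self.error_threshold = error_threshold
            self.delay = 1.0            # seconds between requests

        def record(self, status_code):
            # Track whether each response was a 5xx, keeping a rolling window.
            self.results.append(status_code >= 500)
            self.results = self.results[-self.window:]

        def error_rate(self):
            return sum(self.results) / max(len(self.results), 1)

        def before_next_request(self):
            if self.error_rate() > self.error_threshold:
                # Server is struggling: back off sharply (a real crawler
                # might stop crawling the host entirely at this point).
                self.delay = min(self.delay * 2, 300)
            else:
                # Healthy responses: ease back toward the baseline pace.
                self.delay = max(self.delay / 2, 1.0)
            time.sleep(self.delay)

The key design point is that the signal is the origin server's health, not the crawler's own throughput, which is what distinguishes a polite crawler from the kind that takes a git repository down.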





