Quite an interesting product. I typed a competitor's name in and immediately saw a huge list of ads they were running. Very helpful in just ten seconds!
I get how they can crawl ads, but how do they figure out which ad is the most effective? The only thing I can think of is treating the most frequent ad as the most effective one, and that isn't necessarily true.
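My guess is they lean on frequency plus longevity: an advertiser keeps paying only for ads that convert, so an ad seen across many crawls over a long span is probably earning its keep. A toy version of that scoring, where the (ad_id, timestamp) sighting log and the weighting are both my assumptions, not anything the product documents:

    # Toy scoring of crawled ads by frequency and longevity. The sighting
    # log shape (ad_id, timestamp) and the weighting are both assumptions.
    from collections import defaultdict
    from datetime import datetime

    def score_ads(sightings):
        seen = defaultdict(list)
        for ad_id, ts in sightings:
            seen[ad_id].append(ts)
        scores = {}
        for ad_id, stamps in seen.items():
            frequency = len(stamps)                       # how often it was crawled
            longevity = (max(stamps) - min(stamps)).days  # how long it kept running
            # An ad that keeps running is presumably paying for itself,
            # so weight longevity alongside raw frequency.
            scores[ad_id] = frequency * (1 + longevity)
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    sightings = [
        ("ad-A", datetime(2013, 1, 5)), ("ad-A", datetime(2013, 3, 20)),
        ("ad-B", datetime(2013, 3, 18)), ("ad-B", datetime(2013, 3, 19)),
    ]
    print(score_ads(sightings))  # ad-A wins: same frequency, far longer-lived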
There are a half dozen other mass ad-crawling companies selling the same service (the ability to search and analyze your competitors' ads), going back at least several years.
On one hand you'd think if this wasn't permissible, they'd be big targets and Google would go after them legally. On the other hand, what would the legal basis for blocking this be?
Someone browsing my website and seeing a Google ad has not agreed to any Google terms of service. It's a tough argument that a browser is allowed to do this but a crawler isn't, especially when Google's own crawlers now include JavaScript-executing WebKit browsers.
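For what it's worth, the crawling side is not exotic either: render the page the way a browser would and pull the ads out of the result. A sketch using a headless WebKit driven by Playwright, purely as an illustration of the technique; the "googlesyndication" substring test for spotting AdSense iframes is my guess, not a spec:

    # Sketch only: render a page with a JavaScript-executing WebKit browser,
    # then list the iframes that look like Google ads. Playwright is just
    # one tool choice; the "googlesyndication" heuristic is an assumption.
    from playwright.sync_api import sync_playwright

    def list_ad_frames(url):
        with sync_playwright() as p:
            browser = p.webkit.launch()
            page = browser.new_page()
            page.goto(url, wait_until="networkidle")  # let the ad scripts run
            srcs = [f.get_attribute("src") or ""
                    for f in page.query_selector_all("iframe")]
            browser.close()
        return [s for s in srcs if "googlesyndication" in s]

    print(list_ad_frames("https://example.com"))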
Out of curiosity, how can they enforce that sort of thing? Blocking IPs is the only thing that comes to mind, but some tricky JavaScript might be able to keep Google from even knowing that they are being crawled at all. How could Google even find the people doing it? (Besides, you know, reading this article.)
If you start making automated requests to their search engine, they very soon start serving CAPTCHAs. They detect it using machine learning on features like the number of requests coming from a single IP, and so on.
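To make that concrete, here's a toy version of the simplest such feature: count queries per IP over a sliding window and challenge anything above a human-plausible rate. The window and threshold are invented, and Google's real system obviously combines many more signals than this:

    # Toy rate-based check: one plausible feature (queries per IP per minute),
    # not Google's actual pipeline. Window and threshold are invented.
    import time
    from collections import defaultdict, deque

    WINDOW = 60.0    # seconds
    THRESHOLD = 30   # queries per window before serving a CAPTCHA

    recent = defaultdict(deque)  # ip -> timestamps of recent queries

    def should_challenge(ip, now=None):
        now = time.time() if now is None else now
        q = recent[ip]
        q.append(now)
        while q and now - q[0] > WINDOW:  # drop queries outside the window
            q.popleft()
        return len(q) > THRESHOLD         # too fast to be a human searcher

    # A scraper hammering the endpoint trips the check within seconds:
    for i in range(40):
        challenged = should_challenge("203.0.113.7", now=1000.0 + i * 0.1)
    print(challenged)  # True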