Because Google can only index what it can see. As noted in the article, Googlebot only gets the first few paragraphs of WSJ articles, so those pages are less likely to rank in search results.
Getting more of the content indexed (by serving the full article to web crawlers while showing visitors only a preview) would also seem to violate Google's cloaking guideline: https://support.google.com/webmasters/answer/66355