That was a human-generated hallucination, my apologies. I always associated the semantic web with something Google was pushing to assist with web crawling; my first exposure to it was during the Web 2.0 era (early 2010s), as HTML5 was seeing adoption, and I took it as Google trying to enhance the web as the application platform of the future.
W3C of course deserves credit for their hard work on this standard.
My main point was that regardless of the semantic "standard", nothing prevented us from putting everything in a generic div, so complaining that everyone's just "not on board" isn't a useful lament.
Google acquired Metaweb Technologies in 2010, and with it Freebase. Freebase was a semantic web knowledge base that became deeply integrated into Google's search technology. They did, in fact, want to push semantic web attributes to make the web more indexable, even though they originated neither the bigger idea nor the original implementation.
(one of my classmates ended up as an engineer at Metaweb, then Google)
"I always associated semantic web with something Google was pushing to assist with web crawling, and my first exposure to it was during the Web 2.0 era (early 2010s) as HTML5 was seeing adoption, and I always associated it with Google trying to enhance the web as the application platform of the future."
This sounds more like "indexing" than "crawling"
The "Sitemaps 0.84" protocol , e.g., sitemap.xml, was another standard that _was_ introduced by Google
Helpful for crawling and other things
(I convert sitemap.xml to RSS; a rough sketch is below. I also use it to download multiple pages in a single TCP connection)
Not every site includes a sitemap; some do not even publish a robots.txt
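
Roughly, the sitemap-to-RSS conversion looks like this (a minimal sketch in Python, assuming a plain <urlset> sitemap with no index files or gzip, and passing <lastmod> straight into <pubDate> instead of converting to RFC 822 dates; the URL is a placeholder):

    # Minimal sketch: turn a sitemap.xml into a bare-bones RSS 2.0 feed.
    # Assumes the plain <urlset> layout; sitemap index files and gzipped
    # sitemaps are not handled, and <lastmod> is passed through as-is
    # (a strict reader would want RFC 822 dates in <pubDate>).
    import urllib.request
    import xml.etree.ElementTree as ET

    NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

    def sitemap_to_rss(sitemap_url: str) -> str:
        with urllib.request.urlopen(sitemap_url) as resp:
            root = ET.fromstring(resp.read())

        items = []
        for url in root.findall(f"{NS}url"):
            loc = url.findtext(f"{NS}loc", default="").strip()
            lastmod = url.findtext(f"{NS}lastmod", default="").strip()
            items.append(
                f"<item><title>{loc}</title><link>{loc}</link>"
                f"<pubDate>{lastmod}</pubDate></item>"
            )

        return (
            '<?xml version="1.0"?><rss version="2.0"><channel>'
            f"<title>{sitemap_url}</title><link>{sitemap_url}</link>"
            "<description>Generated from sitemap</description>"
            + "".join(items)
            + "</channel></rss>"
        )

    if __name__ == "__main__":
        # Placeholder URL; point this at a real sitemap.
        print(sitemap_to_rss("https://example.com/sitemap.xml"))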