XHTML 2 Working Group Expected to Stop Work End of 2009 (w3.org)
19 points by daleharvey on July 2, 2009 | 11 comments



Completely ignoring the side issues, I think it's a shame that we will apparently never be able to use the thousands of XML tools on HTML sites.

Goes off to write an HTML parser for my favourite language (it's not Ruby or Python).


XHTML 2 is being dropped in favor of XHTML 5, which is being developed in sync with HTML 5. XHTML itself will remain as (un-)popular as always.


You know, I just don't understand why some people are so against XHTML.


Anybody can vomit text into a file, rename it ".html", and have it rendered to something by a web browser. When they try the same thing with XHTML, they receive an error page. Even otherwise reasonable programmers, who would not expect invalid code to be parsed by a compiler, blithely contribute to the spread of invalid HTML. For example, the front page of news.yc fails with 143 errors:

http://validator.w3.org/check?uri=http%3A%2F%2Fnews.ycombina...

A second factor is that IE does not support XHTML. Any features that rely on XHTML support (inline SVG, MathML, custom attributes, &c) are unavailable in a "portable" application. IE 6 is still a very large chunk of the market, especially among non-technical users, so relying on any features it doesn't support is iffy from a business perspective.
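
For what it's worth, the usual workaround is content negotiation: only send real XHTML to browsers whose Accept header claims to support it. A rough sketch in Python; the helper name is made up:

    # Hypothetical helper: pick a content type from the Accept header.
    # IE never advertises application/xhtml+xml, so it gets tag soup.
    def pick_content_type(accept_header):
        if "application/xhtml+xml" in (accept_header or ""):
            return "application/xhtml+xml"
        return "text/html"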


This may shed some light: http://hixie.ch/advocacy/xhtml

Truth is, most people, including some vocal advocates of XHTML, don't serve it with the proper content-type (application/xhtml+xml).
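
If you want to try it yourself, here's a minimal sketch using Python's standard http.server; the port and the .xhtml extension mapping are just illustrative. Serve a file this way and the browser switches to its strict XML parser:

    # Minimal sketch: serve .xhtml files with the XML content type.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    class XHTMLHandler(SimpleHTTPRequestHandler):
        extensions_map = {
            **SimpleHTTPRequestHandler.extensions_map,
            ".xhtml": "application/xhtml+xml",  # not text/html
        }

    HTTPServer(("", 8000), XHTMLHandler).serve_forever()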


Perhaps because many have come to realize that full browser support for it is a pipe dream.


Because it imposes arbitrary additional rules that make errors fatal, in a Web where errors haven't been fatal in the past.
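
You can see the difference in a few lines with lxml, which wraps both kinds of parser (illustrative snippet):

    # The same markup an HTML parser quietly repairs is fatal as XML.
    from lxml import etree

    try:
        etree.fromstring("<p>unclosed paragraph")
    except etree.XMLSyntaxError as err:
        print("fatal:", err)  # parsing stops dead, as an XHTML browser would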


HTML5 has an XML serialization if you need that.
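
E.g., one way to get at it in practice, with lxml: parse leniently, then emit the XML serialization (illustrative sketch):

    # Parse as forgiving HTML, write out well-formed XML.
    from lxml import etree, html

    doc = html.fromstring("<p>tag soup<br>more")
    print(etree.tostring(doc, method="xml").decode())
    # -> <p>tag soup<br/>more</p>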


The nice thing is, now that HTML5 is being specified, parsers can more easily be written that will correctly parse the billions of existing Web pages, which aren't in XML anyway.
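
Python's html5lib is one example: it implements the spec's parsing algorithm, so it builds the same tree a browser would, even from tag soup (sketch, assuming html5lib is installed):

    # html5lib follows the HTML5 parsing algorithm.
    import html5lib  # third-party: pip install html5lib

    tree = html5lib.parse("<p>tag soup<br>more")  # xml.etree Element
    print(tree.tag)  # {http://www.w3.org/1999/xhtml}html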


Or you could run HTML through Tidy to get XHTML, then run your XML tools against that. Personally, I just use lxml and BeautifulSoup.
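
Roughly, the Tidy route looks like this (sketch; assumes the tidy binary is on your PATH):

    # Shell out to HTML Tidy for XHTML, then use ordinary XML tools.
    import subprocess
    from lxml import etree

    xhtml = subprocess.run(
        ["tidy", "-asxhtml", "-numeric", "-quiet"],
        input=b"<p>tag soup<br>", capture_output=True,
    ).stdout
    root = etree.fromstring(xhtml)  # plain XML from here on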


Ha, I do exactly this for converting generated HTML files into PDFs, using iText/Flying Saucer.



