Hacker News

From what I recall of that era, it's the exact opposite: invalid elements (like <noscript>) are simply ignored, and their contents shown. That's how <noscript> works: newer browsers which understand JavaScript know the <noscript> element and ignore its contents; older browsers which do not understand JavaScript don't know the <noscript> element so its contents are shown. The same trick is used for <noframes>: browsers like Netscape which understand frames don't show the contents of that element, while other browsers which don't understand <frame> and <frameset> will show the content of the <noframes> element, so it can be used as a fallback.
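The trick described above can be sketched in a short snippet (illustrative only; the file names and text are invented):

```html
<!-- A JavaScript-capable browser runs the script and knows to hide the
     contents of <noscript>; an older browser drops the unknown <noscript>
     tags during tokenization and renders the fallback text as ordinary
     content. -->
<script>
document.write("Dynamic content for script-capable browsers");
</script>
<noscript>
Static fallback, shown only by browsers that do not understand JavaScript.
</noscript>

<!-- The same idea with frames: a frames-capable browser like Netscape
     shows the frameset and ignores <noframes>; others drop the unknown
     tags and show the fallback. -->
<frameset cols="50%,50%">
  <frame src="left.html">
  <frame src="right.html">
  <noframes>
  This text appears only in browsers that do not support frames.
  </noframes>
</frameset>
```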

Strict validation of HTML came later with XHTML, but AFAIK all browsers which understand XHTML also understand JavaScript.




XHTML came after HTML 4.0, so its advent coincided with the rise of JavaScript's popularity.

However, HTML 2.0, whose specification you can find here [0], does specify the "ignore" behaviour. Short of spinning up an old VM, I think I'll trust that my memory hasn't failed me.

From RFC 1866:

> markup in the form of a start-tag or end-tag, whose generic identifier is not declared is mapped to nothing during tokenization. Undeclared attributes are treated similarly...

> For example:
>
>     <div class=chapter><h1>foo</h1><p>...</div>
>     => <H1>,"foo",</H1>,<P>,"..."

[0] https://www.w3.org/MarkUp/html-spec/


If that were true, why did <script> tags need that horrible hack where the contents started with <!-- to hide them from old browsers?


Note that in the example I quoted, the enclosing tag vanished but its contents, text included, did not. That's why: an old browser drops the unknown <script> tags themselves, yet would still render the script source as page text, so the source has to be wrapped in a comment the old browser does understand.
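A sketch of the hack in question (the comment markers are the point; the script body is invented):

```html
<script>
<!--
// An old browser drops the unknown <script> tags during tokenization but
// would render this text, so the whole body is hidden inside an HTML
// comment it *does* understand. Script-capable browsers were written to
// tolerate a leading <!-- line inside <script> and run the code anyway.
document.write("hello");
// The closing comment delimiter is itself hidden behind a JS comment
// so the script engine does not try to execute it:
// -->
</script>
```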



