18 June 2006   1:07 AM

In case you're not aware, WCAG 2.0 doesn't insist that web pages be valid. It instead insists that they must be able to be parsed unambiguously. Basically, this means that your nesting of elements should be correct and you shouldn't have overlapping elements. The only publicly stated reason I can see that in any way relates to the dropping of validity is the desire to make WCAG 2.0 technologically neutral. It does not, and should not, contradict technological neutrality to say that technologies must be used according to their specifications.
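For instance (a made-up snippet of my own, not an example taken from the guidelines), the first paragraph below has overlapping elements and so cannot be parsed unambiguously, while the second nests its elements correctly:

    <!-- Overlapping: strong opens inside em but closes outside it -->
    <p>Valid markup is <em>really <strong>important</em></strong>.</p>

    <!-- Correctly nested: every element closes inside the element that contains it -->
    <p>Valid markup is <em>really <strong>important</strong></em>.</p>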

I have heard rumour, yapping like a barking dog, assert that validity has been pushed to the back of the queue because large companies and organisations don't necessarily want to have to ensure that their web development tools produce valid code. This may well be complete nonsense. I certainly don't make any claims for its validity.

Okay, why should sites be valid?

Because things should be done according to the standards laid out for them.

I'll rephrase that: what difference does it make to accessibility if things aren't valid?

We don't know. And that's precisely the problem. At any given time, all the currently available web browsers may happen to cope with invalid markup. We cannot guarantee that this will always remain the case, yet we do know that any web browser produced according to the standards will support valid markup. Using valid markup is therefore a reasonable way to ensure that the principle "Content should be robust enough to work with current and future user agents (including assistive technologies)" is adhered to. To make this technologically neutral, simply state that Web units or authored components must be produced to the documented standard for that technology.
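In practice that means a document declares which specification it claims to follow, so it can be checked against it. A minimal sketch of my own (HTML 4.01 Strict, nothing WCAG-specific about it):

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
    <!-- The doctype above tells a validator which documented standard to check this page against -->
    <html lang="en">
    <head>
      <title>A page that states its standard</title>
    </head>
    <body>
      <p>Content written to the HTML 4.01 Strict specification.</p>
    </body>
    </html>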

Netscape 4 added a layer element that was not part of the W3C standard, which caused problems in some cases when a layer element was encountered by a browser that did not support it. There was also the problem that Netscape and Internet Explorer each supported their own way of marking up shortened terms, either <acronym> or <abbr>, but not both. If there is no requirement for documents to be valid, there is no reason for user agents further down the line not to add new elements that only they support.
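For what it's worth, the two competing forms, with the expansion carried in the title attribute, looked like this (my own illustration):

    <!-- Marking up a shortened term with acronym -->
    <p><acronym title="World Wide Web Consortium">W3C</acronym></p>

    <!-- Marking up the same term with abbr -->
    <p><abbr title="World Wide Web Consortium">W3C</abbr></p>

A user agent that recognises neither element simply renders the text "W3C"; the expansion in the title attribute is lost to the reader.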

Browser wars come at the expense of accessibility. Back accessibility, back standards, back validity. Send a comment to the W3C telling them we want validity back in.

Steve Pugh [20 June, 2006] 

The "not valid but must parse unambiguously" line sounds like something from the X-crowd. A random piece of XML may or may not be possible to validate, but it must be well formed.

If this is the case, then why WCAG couldn't just say "technologies must meet the requirements laid down in the relevant specifications" is a mystery. Then HTML would need to validate, but a random piece of XML would merely need to be well formed. So probably the conspiracy theory is right.

I think you're wrong about abbr and acronym being a Netscape vs IE thing. Acronym was in the HTML 3.0 draft, was supported by IE from v4, has two contradictory definitions in HTML 4, and is dropped by XHTML 2. A right mess? Oh yes.

Abbr appears fully formed and unambiguous in HTML 4 but wasn't supported by anything until Gecko and Opera 4 in 2000 (which also supported acronym from the same time).

Netscape 4 didn't support either, so unless it was a Netscape rep on the W3C HTML WG who proposed abbr, I don't see where Netscape comes into the issue.

Jack [21 June, 2006] 

The abbr/acronym thing was one of those things I "read somewhere" and then never came across again. So you could well be right about that being a load of tosh. But it doesn't change the complete hash that's been made of support for them...

Aaron [03 July, 2006] 

I feel more informed now! Glad I read your blog!
