Baby Steps with B.C.

I’ve criticised the way that SOCITM have handled accessibility in their Better Connected report a couple of times before.

But while it is still very important, I feel, to stress the problems with using WCAG 1.0 as a simple measure of accessibility, and while I believe that others and I have been right to criticise SOCITM’s approach (whether or not SOCITM agree with us may be a different matter!), it is also important, and I think only fair, to recognise that some changes have been made.

At the back end of 2005, Dan Champion asked me (and others) for our thoughts on the automated testing of websites, as part of a conversation he was having with SOCITM. Dan then pulled these together into a number of notes, queries and comments, which he posted on both PSF and his excellent, if sadly now infrequently updated, blog Blether.

There’s a pithy saying, much loved by researchers and statisticians, that goes something like this:
“Be sure to measure what you value, because you will surely come to value what you measure.”

…The recent disparity between the ‘performance’ tests from SiteMorse and Site Confidence is a case in point.

Who can you trust? SiteMorse will tell you that their tests are a valid measure of a site’s performance. Site Confidence will tell you the same. Yet, as previously reported on PSF, the results from each vary wildly. SOCITM have offered this explanation for the variation:

“The reality is that both the SiteMorse and Site Confidence products test download speed in different ways and to a different depth. Neither is right or wrong, just different.”

And therein lies the real problem. If both are valid tests of site performance then neither is of any value without knowing precisely what is being tested, and how those tests are being conducted. The difficulty is that no-one is in a position to make a judgement about the validity of the tests, because no-one outside of the two companies knows the detail.

Blether: Between the Devil and the Deep Blue Sea

And I think it’s fair to say that SOCITM probably took a certain amount of this on board.

But with the production of the accessibility report for Better Connected 2007, there were still certain “issues”. A slightly skewed impression was given because Dan Champion, then working for a Local Authority, found that his Council site no longer reached the AA Conformance level because it had validation errors. This was not in dispute. The RNIB didn’t feel that this made the site ‘inaccessible’, however. Indeed, I agree with most of what one of the members of the RNIB team had to say at the time:

The problem here is the use of the blanket terms “accessible” and “inaccessible”. These imply a black and white situation - that a site is either completely accessible or completely inaccessible. That is rarely, if ever, the case. “Accessibility” is a sliding scale, with each site sitting somewhere on that scale. Taken as a whole, the checkpoints in the Web Content Accessibility Guidelines are a pretty good way of measuring where a site sits on that scale, and the defined levels (Single-A, Double-A and Triple-A) mark points on that scale.

Donna from the RNIB on the WAC blog

The problem for me is that the Better Connected report then comes out with statements like “92% of Councils fail level A”. According to WCAG, failing level A means your content is impossible for some people to access, and when councils are ranked and scored against one another on this basis, then frankly, guidelines which are only “pretty good” at measuring where a site sits on the scale just don’t cut it for this sort of ranking.

If they are only “pretty good”, we need something better.

One of the issues I had with the accessibility report for BC 2007, however, was that while the accessibility assessment was said to be measured against WCAG 1.0, the rules of WCAG 1.0 were changed to suit (admittedly in order to better fit the sites on Donna’s sliding scale). But you can’t say you’re using a particular set of rules if you then change the ones you don’t like…

In the final reported results, we don’t differentiate between sites which pass automated tests without needing to use these marginal allowances and sites which pass because of these allowances. However, when carrying out the manual assessments, we do note where a checkpoint has been passed because of the marginal allowance, and check that particular element for importance within the site, and the impact of any individual failures in the automated tests. If necessary, we may change the marginal pass to a fail as a result of that manual inspection.

Even the best sites have imperfections and occasional small lapses. This use of these “marginal allowances” is an attempt to accommodate that fact, and to maximise the likelihood that such a site, which might otherwise fail the automated testing phase, will undergo a more balanced, human inspection.

WAC blog: Multiple Accessibility Assessments

This methodology is perfectly sensible, given the problems with automated site testing, and the fact that, as Donna says, even the best sites have imperfections, which doesn’t stop them being good sites. The only teensy problem is that as soon as you do this, you ain’t measuring against WCAG.

WCAG says quite clearly (my emphasis):

  • Conformance Level “A”: all Priority 1 checkpoints are satisfied;
  • Conformance Level “Double-A”: all Priority 1 and 2 checkpoints are satisfied;
  • Conformance Level “Triple-A”: all Priority 1, 2, and 3 checkpoints are satisfied;

WCAG 1.0: Conformance

That’s all of them, everywhere on the site, if you’re trying to measure the conformance level of the site itself. You cannot allow a site that fails a single Priority 1 checkpoint, on even one instance on one page, to achieve the most basic conformance level, even if everything else is perfect. That’s what WCAG 1.0 tells us.
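
To illustrate the distinction, here’s a toy sketch in Python. It is not the RNIB’s actual methodology: the checkpoint names, the error counts and the two-error allowance threshold are all invented for the sake of the example. It simply shows how a strict reading of WCAG 1.0 differs from a “marginal allowance” approach of the kind described above:

    # Illustrative only: a toy comparison of strict WCAG 1.0 Level A
    # conformance with a "marginal allowance" approach. The data and
    # the allowance threshold are invented for this example.

    from typing import Dict

    # For each Priority 1 checkpoint, the number of failing instances
    # found anywhere across the whole site.
    Priority1Errors = Dict[str, int]

    def strict_level_a(errors: Priority1Errors) -> bool:
        """Strict WCAG 1.0: every Priority 1 checkpoint must be satisfied
        on every page. A single failing instance sinks Level A."""
        return all(count == 0 for count in errors.values())

    def marginal_level_a(errors: Priority1Errors, allowance: int = 2) -> bool:
        """A 'marginal allowance': a checkpoint with only a couple of minor
        failures is treated as a marginal pass rather than a fail, and a
        site with no outright fails still counts as a Level A pass."""
        return all(count <= allowance for count in errors.values())

    # An otherwise excellent site with a single slip on one page:
    site_errors = {"1.1 text equivalents": 1, "2.1 colour not sole cue": 0}

    print(strict_level_a(site_errors))    # False: fails Level A under WCAG 1.0
    print(marginal_level_a(site_errors))  # True: passes with the allowance

The same single slip fails the site under the strict rule but passes it under the allowance, which is exactly why this year’s headline figures and last year’s can’t be compared directly.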

That’s why people who understand accessibility know that hearing “92% of websites don’t achieve level A” is not the same as hearing “92% of websites are inaccessible”. But that’s not the way people outside the field hear it, and that’s why people get so irate when statements about WCAG 1.0 are presented like this without a clear explanation of what that actually means.

Particularly when the implication is that Local Authority sites are doing poorly, when in fact I was recently led to believe that, as a ‘sector’, UK Local Authority sites are amongst the most accessible in Europe…

This year, so far as I can tell, the RNIB’s methodology has been tweaked slightly, and they are measuring more strictly against WCAG 1.0. I’m not going to go into details, because the only bits I’ve seen have not yet been published, but I have been led to understand that sites have been failing WCAG 1.0 at Level A for only one or two instances of a Priority 1 failure, unlike the ‘margin of error’ allowed the previous year.

And quite right too. Assuming that WCAG 1.0 conformance is all you care about. This is a more accurate measure of WCAG 1.0 conformance, being as strict as WCAG 1.0 mandates. It’s also less useful as a consequence.

Given that what is required to achieve level A conformance has changed, is it then any surprise that:

Level A accessibility across all sites has dropped to just 8% (14% in 2007)

Better Connected Announcement

Well maybe that’s so, but you aren’t comparing like with like. The 2007 figure allowed for a margin of error, and was therefore a more realistic result in terms of “accessibility”, but a less accurate result against “WCAG conformance”.

Decide which is more important to measure. You can’t have both.

I have a great deal of respect for the RNIB team, and I hope I’m not upsetting them too much, but what I’d like to see is the RNIB use their considerable influence and respect to strengthen the case that measuring against WCAG 1.0 is not the same thing as measuring accessibility. As the RNIB said in 2007, it provides a framework for measuring accessibility, but it’s not the same thing. And I think the WCAG 1.0 framework in particular is starting to creak to the point where it is now of limited use…

…which brings me to this year. As with last year, there have been criticisms of the accessibility “measurement” aspect of the report which have maybe overshadowed some of the better things about the report. I’d like to think that by next year, instead of inching forwards with little baby steps, we could take some sort of giant leap forward towards some kind of assessment which everyone is in broad agreement with.

As I said earlier tonight on the Public Sector Forums noticeboard:

Could I suggest that in good time for next year, the assessment methods are debated in public, well in advance? That way we’ll know what the constraints are before we get started. That would give us the opportunity not to throw the baby out with the bathwater, and to keep many of the useful things within WCAG 1.0 (and 2.0 maybe) while not worrying about anything we perceive to be of less import.

Me

…because there will be constraints. There will be an awful lot of sites that need to be assessed, and a comprehensive accessibility audit of each site would be too time-consuming. We therefore need a compromise — about both the methodology and the terms used — which is palatable to everyone.

For me, that is the challenge facing Better Connected. Can Better Connected itself become … well … better connected with the people it represents, and with real-world accessibility issues?


2 Responses to “Baby Steps with B.C.”

  1. Ann responds:

    I just want to correct a small misunderstanding with the methodology. The “margin of error” you talk about is still there, to a degree, but there are actually two things which could be construed as “margin of error”.

    Firstly, there are allowances built into the process for deciding who gets a full manual assessment and who doesn’t. That’s where the 50 errors for validation bit, for example, comes in. It’s a way of focusing our very limited time on the sites that are, if you will, closer to passing WCAG.

    Then, in the manual assessment, where there are a couple of errors but we feel that these are of a fairly minor nature, we will mark the checkpoint as marginal rather than pass or fail, and for the purposes of Better Connected, sites that have no fails and some marginals are included in the passes. This was put in place to try to be reasonable and pragmatic, and to focus on what is really important - the impact on the end user.

    That’s what was done this year, last year, and the years before that.

    We put enormous effort each year into making the work that SOCITM ask us to do as reasonable, pragmatic, consistent and useful as we can, and as I said on the PSF forum, that included, this year, giving an explanation of why a site failed, where it failed, and often, how to resolve that failure, for each of the manual assessments we did. These sheets were sent to the web managers of each of the subscribing councils for the day of publication of the report. I’m honestly not sure how that’s less useful than last year. We also, in previous years, sent out details of failures and reasons for failure to those councils who asked for that information.

    The trouble with anything which requires a clear explanation is that those who don’t understand won’t pay attention to the “small print”, so it’s pretty unlikely that there will ever be a way to get across that x% of sites not meeting y guidelines doesn’t necessarily = x% of sites are completely inaccessible unless someone comes up with a way to present that in a pithy headline or less.

    We can but try, though, eh?

  2. JackP responds:

    We can but try, though, eh?

    Amen to that!

    …and thanks for that extra bit of explanation there - and I don’t wish to come across as saying that WCAG is a load of rubbish, as that’s not what I’m saying either; merely that there are better things to do than follow it slavishly.

    I guess the key problem is that people (e.g. the media) are generally looking to distill what is a very complex issue into a couple of soundbites, and that’s where the problems occur!

