NIFL-ASSESSMENT 2005: [NIFL-ASSESSMENT:879] RE: more from the UK

From: HthKar@aol.com
Date: Thu Jan 20 2005 - 13:25:46 EST


I am sorry: I forget that irony (or sarcasm: if irony is a rapier then I am doomed forever to bludgeons) doesn't carry over the net.  I can see loads of reasons (of various kinds) why 'transparent standards' don't work, even in situations where the funding of the assessor does not depend upon the outcomes of that assessment.  However, this belief, or theory, has underpinned much vocational assessment in the UK, and has spread from there into other curricular areas.  The approach began with 'functional analyses' of workplace roles within an industry.  The assessment schedules were based around these functions and were expressed in terms of 'units' and 'elements'.  Additional descriptors would then be categorised as 'performance indicators', 'performance criteria', 'evidence requirements' or 'range statements'.  So if you were doing counselling, your range statement might read (to paraphrase one such schedule I once read): depressed client; manic client; etc.

Successive government-funded reports expressed concern that the standards were not being applied uniformly, with the result that more and more detail was inserted into the specifications.  One academic claimed (and I have no reason to doubt this) that in the whole time allocated for teaching a course there was insufficient time to carry out all of the assessments that were required.  Because they were working with a 'mastery' approach, every single 'bit' of the specification was supposed to be assessed.  Also, there was no 'weighting', since it was all or nothing, pass/fail: one was not allowed to let strengths in one area balance weaknesses in another, or, if one was using a 'marking scheme', to decide pass/fail on the basis of 'averages'.

There is, I believe, widespread confusion now about the purposes of much classroom activity: one can read of people embarking on the collection of summative assessment evidence the minute they have a classful of students.  Then people started saying all this was unmanageable, and so we began to introduce multiple-choice testing, which had the appeal of seeming to be objective and which, given the outcomes-related funding system, had further attractions.

The availability of advanced IT made all this even more attractive, of course, since assessments could be 'online', which seems to be regarded almost unconditionally as 'a good thing'.

However, this brought us back to 'sampling' and so on.  So you can read that our multiple-choice summative assessment tests 'sample' the skills required for writing, which they do, on the whole, by presenting learners with incorrectly written texts and inviting them to spot which line the error is on, etc.

Here is a link to some of these:

http://www.aqa.org.uk/qual/keyskills/qp-ms/AQA-Com-1-W-QP-Mar04.pdf

Regards
K


