NIFL-ASSESSMENT 2005: [NIFL-ASSESSMENT:864] RE: more from the UK

Archived Content Disclaimer

This page contains archived content from a LINCS email discussion list that closed in 2012. This content is not updated as part of LINCS’ ongoing website maintenance, and hyperlinks may be broken.

From: Marie Cora (marie.cora@hotspurpartners.com)
Date: Thu Jan 20 2005 - 06:05:41 EST



Hi again,

Karen you noted that:
"In some areas in this country, we go through standardistation
procedures where we discuss particular pieces of work and come to an
agreement about which level they should be put at. This is said to
increase inter-rater reliability."

Yes, this is THE way to ensure this type of reliability.  It takes some
time for practitioners to home in on agreement in this way, but it is
the best way to ensure that everyone has the same understanding of level
1, level 2, and so on.  It is also great practice for staff at a
program.  I know a few programs that have done this work, and their
ability to work with students across levels and across parts of the
program has improved.
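
For anyone who wants to put a number on how well markers agree after
this kind of standardisation work, one common statistic is Cohen's
kappa, which corrects raw agreement for the agreement you would expect
by chance.  Here is a minimal Python sketch; the scripts, level labels,
and judgements in it are all invented for illustration:

    from collections import Counter

    # Hypothetical level judgements by two markers on the same ten scripts.
    rater_a = ["E1", "E2", "E2", "E3", "L1", "L1", "E3", "E2", "L2", "E1"]
    rater_b = ["E1", "E2", "E3", "E3", "L1", "E3", "E3", "E2", "L2", "E1"]

    def cohens_kappa(a, b):
        """Agreement between two raters, corrected for chance agreement."""
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n
        freq_a, freq_b = Counter(a), Counter(b)
        # Chance that both raters independently pick the same level.
        expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    agree = sum(x == y for x, y in zip(rater_a, rater_b)) / len(rater_a)
    print(f"raw agreement: {agree:.2f}")                           # 0.80
    print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")  # 0.75

A kappa near 1 means the two markers are effectively interchangeable; a
kappa near 0 means they agree no more often than chance would predict,
which is exactly the number that standardisation meetings should push up.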

As for assessment being cake once the standards are in place - I don’t
get that one, I have to admit.  Where would the assessment come from?
Who would make it and who would do it?  Where could the data go to be
examined so that the interpretation of the assessment is accurate?  How
can there be standardization across the assessments in such a case?
Perhaps some of these pieces are answered someplace; I'm not sure.

What is your understanding of that position?  Are they just telling you
all to make up your own assessments that could speak to a standard or
goal?

Thanks
marie cora
Moderator, NIFL Assessment Discussion List, and 
Coordinator/Developer LINCS Assessment Special Collection at 
http://literacy.kent.edu/Midwest/assessment/


marie.cora@hotspurpartners.com


-----Original Message-----
From: nifl-assessment@nifl.gov [mailto:nifl-assessment@nifl.gov] On
Behalf Of HthKar@aol.com
Sent: Sunday, January 16, 2005 8:45 AM
To: Multiple recipients of list
Subject: [NIFL-ASSESSMENT:859] RE: [NIFL-ASSESSMENT:848] Re:  Skills
Discussion fro

Here's another interesting reference:

http://www.nfer.ac.uk/research/papers/DiagnosticAssess.doc

I liked this as it was the first example I had seen of ACTUAL pieces of
work that somebody has assessed using a particular tool against the
standards.  

What it said to me initially was that if you are an ESOL student you get
a much higher grade than if you are not, because my understanding is
that some tutors are using a different rubric and working on the
assumption that if there are 'any' mistakes then you have to fail a
student at that level.

So you can compare the rubric used in these worked examples, where for
each bullet point (see previous contributions of mine on these) a
student can be emerging, consolidating, or established.  For the rubric
used in the assessment tool to which my previous email leads, one has
simply to pass or fail students on each bullet point; ANY mistake in the
- wait for it - 'skill' leads to a fail, and then an overall grade is
awarded on the basis of a mark out of something.  The IT ones seem to be
set up so that you type a number into the machine and it prints out a
pre-prepared statement. Probably this will tell you to go back and look
at the work to see which bullets the student failed on, since obviously
two students could obtain the same mark with a different profile of
passes or fails.
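
That last point is easy to see in a toy sketch (the bullet points,
students, and marks below are all invented): two students can share a
total yet fail on entirely different bullets, and a single overall grade
hides this.

    # Hypothetical curriculum bullets, marked pass (1) or fail (0).
    bullets = ["capital letters", "full stops", "subject-verb agreement",
               "common spellings", "paragraphing"]

    # Two invented students, one mark per bullet.
    students = {
        "Student A": [1, 1, 1, 0, 0],
        "Student B": [0, 0, 1, 1, 1],
    }

    for name, marks in students.items():
        failed = [b for b, m in zip(bullets, marks) if m == 0]
        print(f"{name}: {sum(marks)}/{len(bullets)}, failed: {', '.join(failed)}")

    # Both score 3/5, but the profiles point at different teaching needs.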

I recently had a long conversation with somebody who thought that once
you had a curriculum with a long list of bullet points in it, assessment
against it was unproblematic.  I was surprised by how difficult I found
it to explain why this was not the case.  I took a deep breath and said,
well, to put it very simply, some markers are stricter than others.
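
To make "some markers are stricter than others" concrete, here is a toy
sketch in Python; the bullets, error counts, and tolerances are
invented.  The same piece of work passes a different number of bullets
depending only on the marking policy:

    # The same script marked under two policies; error counts are invented.
    errors_per_bullet = {"spelling": 1, "punctuation": 0,
                         "grammar": 2, "layout": 0}

    def mark(errors, tolerance):
        """A bullet passes if its error count is within the tolerance."""
        return {b: n <= tolerance for b, n in errors.items()}

    strict = mark(errors_per_bullet, tolerance=0)   # ANY mistake fails a bullet
    lenient = mark(errors_per_bullet, tolerance=1)  # one slip is forgiven

    for name, result in [("strict", strict), ("lenient", lenient)]:
        print(f"{name} marker: passed {sum(result.values())}/{len(result)} bullets")
    # strict marker: 2/4; lenient marker: 3/4 -- same work, different marks.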

In some areas in this country, we go through standardisation procedures
where we discuss particular pieces of work and come to an agreement
about which level they should be put at. This is said to increase
inter-rater reliability.

But this new set of assessments is being produced in an educational
context which has believed that once you have specified standards
precisely, assessment against them is unproblematic.  It's not.

I have recently done some informal cross-checking between results on one
set of new tools and some others.  It is possible for our new initial
assessment to say a lad is at a particular level one week and then for
the lad to get a pass mark on a test three levels up a few weeks later.
It is possible for a lad to seem awful at spelling when he is given
lists of incorrect and correct spellings and asked to pick out the
correct ones, but when asked a week or so later to write correct
spellings using the lists from the diagnostics, as opposed to those from
the initial, he may come out several levels higher.  And sometimes he
may correctly spell to dictation words that he could not pick out from
the confusing list that included incorrectly spelled words.  Maybe the
lists of words confused him.  You could say, well, in that case his
spelling isn't really secure, but which of the two tasks seems a more
valid predictor of his chance of spelling the word in any real-life
situation in which he might be required to do it?
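
As a toy illustration of those two task types, recognition (picking the
correct spelling out of distractors) against production (writing the
word to dictation), here is a sketch in which the same learner lands at
two different levels; the words, responses, and level cut-offs are all
invented:

    # Toy comparison of the two task types; everything below is invented.
    words = ["necessary", "separate", "definitely", "receive", "believe"]

    recognition_correct = [False, False, True, False, True]  # from distractors
    production_correct  = [True, True, True, False, True]    # to dictation

    def level(score, total):
        """Map a raw score to an invented level band."""
        pct = score / total
        if pct >= 0.8:
            return "Level 2"
        if pct >= 0.5:
            return "Level 1"
        return "Entry 3"

    rec, prod = sum(recognition_correct), sum(production_correct)
    print(f"recognition: {rec}/{len(words)} -> {level(rec, len(words))}")
    print(f"production:  {prod}/{len(words)} -> {level(prod, len(words))}")
    # The same learner, the same words, two different levels.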

And now some people are using lists of incorrect spellings and correct
spellings to teach spelling ... this is an unforeseen piece of
washback/backwash.

Regards

K

NB ... and no, I do not have simple perfect answers to all these
difficulties..... 


But you asked for some links, so now you have had two. 


