NIFL-ASSESSMENT 2005: [NIFL-ASSESSMENT:1149] FW: responding to the questions posted on Thursday

Archived Content Disclaimer

This page contains archived content from a LINCS email discussion list that closed in 2012. This content is not updated as part of LINCS’ ongoing website maintenance, and hyperlinks may be broken.

From: Marie Cora (marie.cora@hotspurpartners.com)
Date: Fri Jun 24 2005 - 12:29:31 EDT



Dear List Members:  the following is a response from our guest Judy
Koenig.
marie
 
 
Hello again,
In this message, I'll try to address the questions that Marie and others
posted on Thursday.  
First, a quick response to Tom Sticht's request in which he asked if
anyone from the NRC's Committee or the NRC would be willing to comment
on the recently released ALL report.  Unfortunately, it would be very
difficult for me or anyone on the Committee to comment on the ALL
report.  The Committee's charge was to make recommendations about
performance levels for NALS and NAAL, not to review and make suggestions
for ALL.  In order to come up with the recommendations for NALS and
NAAL, the Committee undertook an intensive review of the assessment, its
purposes and design, its uses, and the other factors, as detailed in the
report. The Committee did not do a similar sort of intensive review of
ALL, so it would really be inappropriate for them (or me) to make any
comments about ALL.  There may be a few messages in the report that
could be applied to ALL, however, as the Committee tried to make
suggestions that would be broadly useful to those making decisions about
assessments.  For instance, the Committee emphasizes that the
conclusions drawn about test results should be justifiable, given the
procedures used for developing the assessment and the intent of the
assessment.  The Committee also tried to highlight that there are a
number of judgment calls involved with designing tests and setting
standards.  While there are professional guidelines that lay out best
practices, many of the decisions about test development and standard
setting require making judgments, and reasonable people might disagree
about the right choices.
Next, I'll address the questions that Marie posed.
Marie's Question 1: 
Please define "Demand-Side Analysis"; give us an example, perhaps one
that compares demand-side analysis to other types of analyses.  What
sort of analysis was utilized in determining the original results of the
NALS and the NAAL?  Do you know why?
Response to Question 1: 
One way to think about this is to think about the way that licensing and
certification tests are developed (e.g., medical licensing exams).  The
development process for such tests first involves something called a
"job analysis" to determine the knowledge, skills, and compentencies a
licensed or certified professional should know and be able to do.  This
is not an easy task and involves surveys, discussions with stakeholders
and professionals working in the particular profession, and judgments
about what licensed professionals "should" know to function well (and
safely).  Often the process involves convening a variety of expert
panels to make these sorts of judgments. Once there's been consensus on
what a licensed or certified professional should know and be able to do,
test questions are developed that measure these skills, and the rest of
the test development process (field testing questions, item analyses,
etc.) can take place.  Standard setting (the process of determining the
score required to "pass" the exam; that is, the "cut score") is based on
the judgments made by the expert panels.
This process is quite different from the one that was used to develop
NALS and NAAL.  While NALS/NAAL did involve a variety of expert panels
(such as the literacy definition panels), the panels were not asked to
make judgments about what adults should know and be able to do in order
to function in society.  (This is explicitly stated in the documentation
about NALS.)  Such a process would have been a vast undertaking.  What
the committee was suggesting by a "demand side analysis" is that a
process like this be undertaken but on a somewhat smaller scale than
what might be implied by "functioning in society."  NALS and NAAL use
stimulus materials and questions drawn from six context areas.  The
committee suggested that surveys and discussions (like the job analyses
procedures described above) be undertaken that would delineate the types
of demands that are put upon adults to function in each of the context
areas.  The Committee proposed that this sort of analysis be done first,
and the assessments designed to measure the skills adults need to have
to meet these demands.  The committee recognized that this was no easy
task, but still thought that this approach to test development would
provide the basis for the sorts of inferences policy makers and the
public wanted to make about NALS and NAAL results. 
Marie's Question 2:
The way the test was developed does not support standards-based
inferences being made about the data.  However, when the data was
reported, inferences were in fact based on some set of standards.  Is
it possible to re-interpret the data based on a different set of
analyses?  Do you feel that the results would be very different if they
were?  Would this be a useful exercise or not?
Response to Question 2:
The answer to this question is partly addressed in the above response
about demand-side analyses. What we meant by "standards-based
inferences," is that NALS and NAAL were not designed to provide
information about what adults should know or need to know.  When the
assessments were developed, no expert panel was involved in deciding
what is "adequate" literacy and what is "inadequate" literacy to
function in life.  Therefore, it isn't appropriate to draw conclusions
like this about the test results.   The Committee selected performance
level categories that were intended to be descriptive of performance; in
doing so, they do make judgments about what level of performance adults
need in order to be classified into the levels (e.g., what should adults
be able to do in order to be classified as "basic" or as
"intermediate.").  However, they did not make any recommendations about
which of the levels would constitute adequate or inadequate levels of
literacy to function well in life (e.g., Is basic literacy enough? Is
intermediate literacy enough?).  This is a fine, but important,
distinction.
Marie's Question 3: 
The NALS and NAAL are examples of assessments that try to get at
evaluating program needs. So how do the results relate to the
individuals that took the tests?  Can interpretations be made of the
individuals then, if the assessments were designed to examine the larger
program needs?  And if literacy is a collaborative process in many
instances, and many programs structure the students in groups
(groupwork), wouldn't the validity of testing individuals be
problematic?  What is the relationship between testing the individual
and inferring that programs that serve some particular population might
be adequate or not?

Response to Question 3: 
NALS and NAAL were designed to provide information about groups of
individuals, in the same way that NAEP does.  That is, the assessment
doesn't report scores for  individuals (although NALS and NAAL have
statistically derived scores for individuals that can be used for
research purposes), but they do report scores for groups, such as gender
or ethnic groups.  The assessments also weren't designed to evaluate
programs.  For instance, an assessment designed to evaluate a program
would need to select a sample of adults from that program (e.g., adults
who participate in ABE).  That's not what NALS and NAAL did.  NALS and
NAAL were household surveys administered to a stratified random sample
of adults.  Instead, NALS and NAAL results can be used to evaluate
program needs.  That is, the results can be used to estimate the need
for ABE programs, for health literacy programs, or the like.
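As a rough illustration of the sampling design described above (not taken from the report; the population, the "region" strata, and the scores here are all invented), a stratified random sample that supports group-level rather than individual-level reporting can be sketched as:

```python
import random

def stratified_sample(population, strata_key, per_stratum, seed=0):
    """Draw an equal-size random sample from every stratum (e.g., region)."""
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, min(per_stratum, len(members))))
    return sample

def group_means(sample, group_key, score_key):
    """Report average scores by group -- no individual's score is reported."""
    totals = {}
    for person in sample:
        group = group_key(person)
        total, count = totals.get(group, (0.0, 0))
        totals[group] = (total + score_key(person), count + 1)
    return {group: total / count for group, (total, count) in totals.items()}

# Hypothetical household-survey data: each record has a stratum and a score.
population = [{"region": region, "score": i % 100}
              for i, region in enumerate(["NE", "S", "MW", "W"] * 250)]
sample = stratified_sample(population, lambda p: p["region"], per_stratum=50)
print(group_means(sample, lambda p: p["region"], lambda p: p["score"]))
```

The point, as in the response above, is that estimates come out only at the group level; the design tells you about populations and subgroups, not about any one test taker.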
Marie's Question 4: 
Why was the health section done differently than the other sections?
Wouldn't that affect the validity and reliability of the rest of the
test? 
Response to Question 4: 
The inclusion of additional health literacy questions should not affect
the reliability and validity of the scores; that is, the meaning of the
2003 scores should not change, and the inclusion of such questions
should not affect the comparability of results from 1992 and 2003.  To
explain: NALS included some questions that were based on stimulus
materials drawn from health and safety contexts.  The questions assessed
the same sorts of skills as other questions on the assessment, and the
questions contributed to scores in the Prose, Document, and Quantitative
areas.  For NAAL, additional questions were developed that were based on
stimulus materials from health and safety contexts, and enough of these
questions were developed that a "health literacy" score can be reported.
However, the questions still measure the same skills as measured by
questions that use other stimulus materials, and responses to these
questions contribute to the Prose, Document, and Quantitative scores.
From Andrea Wilder:
Andrea's Question 1: First, I am interested in knowing how you came up
with the definition of literacy in the first paragraph of the Executive
summary.  Are these "common sense" definitions, or are you citing
conclusions from other work?
Response:  The Committee did not intend this to be a "definition" of
literacy but simply some examples of the variety of ways in which
literacy skills are used.  We wrote this paragraph to alert readers
(particularly readers who have not thought about literacy in the way
that you and others on this listserv have) to the importance of
literacy skills.  The ideas are the committee's based on their own
experiences and the materials that they reviewed. 

Andrea's question 2: Also, it looks to me, going over the NAAL levels,
TABE levels, NRS levels, there is an attempt to make these congruent;  I
hope I have been reading accurately.  Is this true? I read on ES-6 that
it was not possible to establish a one-to-one correspondence between the
NRS and NAAL levels, but that there was a "rough parallel."  I think
this is what I am getting at.  If there were this correspondence, then
the literacy description (NAAL) would key into one of the major
proficiency tests, and then the reporting/accountability requirements.

Response: You are reading the text as we intended.  We tried to develop
performance levels that would be useful to a variety of audiences, but
particularly to adult educators who must address the requirements of
NRS.  Given the scope of what is assessed by NALS/NAAL (e.g., the test
frameworks, specifications), it wasn't possible to completely align
NALS/NAAL levels with NRS levels, but we did the best that we could.
And, we provided the mapping from one to the other on page ES-6 to
assist with this. 
Andrea's Question 3: Did the Committee discuss a Spanish language track
for literacy? In our state, Massachusetts, there is a need for many more
ESL classes for immigrants;  I have also heard (have not verified) that
ESL is a health risk factor.  I understand that our national immigrant
policies are chaotic, and I wonder how this chaos may influence
decisions made for descriptive literacy assessment, specifically the
NAAL.
Response: The committee discussed the issue of immigrants and
non-English speakers at length.  While we did not call specifically for
assessments in languages other than English, we did provide a number of
suggestions for ways to collect information about the literacy levels
(in English and in the native language) of non-English speakers.  See
pages 6-3 to 6-5 and pages 7-8 to 7-10 for these discussions.  
These were great questions, and I hope the answers are helpful. 
Judy Koenig
 



This archive was generated by hypermail 2b30 : Mon Oct 31 2005 - 09:48:50 EST