NIFL-ASSESSMENT 2005: [NIFL-ASSESSMENT:1191] RE: high-stakes testing, state/federal

Archived Content Disclaimer

This page contains archived content from a LINCS email discussion list that closed in 2012. This content is not updated as part of LINCS’ ongoing website maintenance, and hyperlinks may be broken.

From: Katrina Hinson (khinson@future-gate.com)
Date: Mon Aug 01 2005 - 15:54:04 EDT


Return-Path: <nifl-assessment@literacy.nifl.gov>
Received: from literacy (localhost [127.0.0.1]) by literacy.nifl.gov (8.10.2/8.10.2) with SMTP id j71Js4G27641; Mon, 1 Aug 2005 15:54:04 -0400 (EDT)
Date: Mon, 1 Aug 2005 15:54:04 -0400 (EDT)
Message-Id: <42EE1AB8020000A00000044E@smtp.us.future-gate.com smtp.de.future-gate.com>
Errors-To: listowner@literacy.nifl.gov
Reply-To: nifl-assessment@literacy.nifl.gov
Originator: nifl-assessment@literacy.nifl.gov
Sender: nifl-assessment@literacy.nifl.gov
Precedence: bulk
From: "Katrina Hinson" <khinson@future-gate.com>
To: Multiple recipients of list <nifl-assessment@literacy.nifl.gov>
Subject: [NIFL-ASSESSMENT:1191] RE: high-stakes testing, state/federal
X-Listprocessor-Version: 6.0c -- ListProcessor by Anastasios Kotsikonas
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=US-ASCII
X-Mailer: Novell GroupWise Internet Agent 7.0 
Status: O
Content-Length: 11837
Lines: 215

Marie, you asked:
Perhaps we should shift our questions away from the tests themselves.
Perhaps we should discuss what we want to measure, and then make some
suggestions and have some discussion on how we might best capture the
stuff we want to measure.  We keep getting stuck in this quagmire of
misinterpretation of terms.

I really want to see/measure how my students think - the thought process they use to get from one point to another, to reach the answer they come up with. Ultimately, how they think is going to determine how easily they learn and progress. Do they overthink? Do they underthink? Do they see things through completely, or do they stop too soon? (On a lot of multiple choice questions, if you stop too soon, the answer you land on is there among the choices but wrong, whereas if you "complete" the work you get the correct answer.)

I want to know if they really do know the basics - add, subtract, multiply, and divide - and not just with simple numbers but with LARGE numbers. I usually find when I do my own assessments that students can work with a two-digit number but struggle when a third digit is added.  I want to know how easily my students recognize patterns, be it shapes, numbers, letters, etc.

I want to see how well they communicate, both verbally and in writing.

I want to see/hear how well they read - not just what they can glean/comprehend from a short passage, but the rate at which they read, the fluency with which they read, and what kinds of words they stumble over. Are they misunderstanding passages because the material is content related, such as a passage on "mitosis," versus something more generalized or life related, like a newspaper article?

Older students can become so focused on the GLE (in my case, with the TABE), especially if it's really low, that they grow frustrated and often comment that they'll "never get it."  My younger students often experience the same frustration for different reasons: they come in knowing they completed 10th or 11th grade, yet they test much, much lower than that, and when their scores are discussed with them during student/teacher interviews, they express the disappointment they feel and question whether it's even worth the effort to try.  Or they test really well, yet when given material designed to correlate to their GLE/intake assessment, they struggle, don't understand the material, grow frustrated and disheartened, and in the worst-case scenario quit attending class.  Sometimes that initial assessment seems to set the stage for failure rather than success, and it seems to me that the focus should be on succeeding - that there needs to be a better way to promote the positive and not the negative.


On a different note, I'd like more information on BEST Plus and REEP. Where would I find that?

Katrina


>>> marie.cora@hotspurpartners.com 08/01/05 2:26 PM >>>


Hi Katrina, thanks for your post.

A couple of things to consider:  You noted below in your post that
'standardized tests are necessary because of funding'.  Actually,
standardized tests are necessary for fairness.  The funding part is
quite frankly secondary - although no one would argue with your
frustrations regarding **that use of them**, myself included.  I'm just
trying to get you (and all) to see these differences and be careful to
understand how the many pieces of accountability work together, or don't
work together.

And both theoretically and in reality, any type of test (including
surveys, interviews, and portfolios) can be standardized, and in the
best of all worlds, should definitely be standardized.  (The challenges
for these latter assessments are steep:  costly, time-consuming, huge
amounts of paper/documentation, etc.)

If you check out Phil Cackley's post, he discusses two performance-based
assessments (BEST Plus and REEP) that are standardized, that provide
much more usable information for the student and teacher, and that are,
lo and behold, approved for use with the NRS.  Not perfect...nothing is
with all this...but moving toward a more effective space.

Perhaps we should shift our questions away from the tests themselves.
Perhaps we should discuss what we want to measure, and then make some
suggestions and have some discussion on how we might best capture the
stuff we want to measure.  We keep getting stuck in this quagmire of
misinterpretation of terms.

What do others think?

marie cora
Moderator, NIFL Assessment Discussion List, and 
Coordinator/Developer LINCS Assessment Special Collection at
http://literacy.kent.edu/Midwest/assessment/ 

marie.cora@hotspurpartners.com 




-----Original Message-----
From: nifl-assessment@nifl.gov [mailto:nifl-assessment@nifl.gov] On
Behalf Of Katrina Hinson
Sent: Monday, August 01, 2005 10:13 AM
To: Multiple recipients of list
Subject: [NIFL-ASSESSMENT:1184] RE: high-stakes testing, state/federal

I've been really quiet on this list for the last several weeks, partly
because we just welcomed a brand new baby to our family. Now that I've
caught up on all the collected emails, I think I'll dive into this
discussion.  A colleague and I were actually discussing "standardized"
testing issues over coffee this past Saturday as they relate to our own
program.

To answer the questions posed by Howard:

I don't like standardized tests. I never have - even as a student in school myself.  I think they are an excellent gauge of a student's ability to memorize and regurgitate information, but not necessarily a good gauge of a student's ability to APPLY the knowledge they have. I also think one of the fatal flaws with standardized tests is that sometimes students learn something simply to pass a test but then forget it as soon as they think they don't need it any longer. Unfortunately, because of reporting and funding, I think standardized tests, regardless of which one a state or school uses, have become a necessary evil.  I happen to agree with others who spoke up on the list and said they don't really think standardized tests are the best way to go in terms of assessing students. Like others, my own school does intake testing before assigning a student to a class.  One of the problems I've found is that some students don't take the test seriously; they get really low scores, are improperly placed, and then quit coming because they get bored. For the record, we use the TABE test.  I've seen students who simply opened their test booklet and just bubbled in answers - yet when they did work in class, it was discovered that they knew way more than the test showed. Likewise, I've had students test really high, and it not be an accurate indication of what they really knew. I've had students, especially in the math portion of the test, score at the 11th and 12th grade level, yet those same students could not work complicated fraction problems, had trouble with long division, etc., let alone do algebra and geometry.  The TABE, like any standardized test, is going to have inherent flaws, because it uses snippets of data to "test" a student's knowledge base, and it doesn't come close to giving a real, or sometimes even an accurate, picture.

On a side note, I also agree with earlier comments that the
TABE is not necessarily an ideal test to "assess" a student's reading ability.  At the levels I teach, as a GED instructor and even as an AHS instructor, reading ability is truly only assessed when an instructor spends some quality one-on-one time with his or her students, gauging everything from fluency to comprehension. The TABE, CASAS, and even the GED definitely test comprehension skills, but they give a weak assessment of the students' fluency skills. It can be assumed that if the student has trouble comprehending what they have read, then by default they have trouble with fluency - but that doesn't begin to tell or help an instructor know just where the problem might lie. Is it with word recognition, phonetics, rate, etc.? There are a lot of questions that no standardized test can ever answer and that the instructor is going to have to "assess" on his or her own.

My experience with CASAS is that it, too, doesn't give a complete picture, BUT I do like the fact that it is "Life Skills/Employability Skills" based. I think it's much easier to explain results to someone in their 50s or 60s in terms of CASAS than it is to give them the TABE and tell them that they are at a 4th grade level in a given area. I agree that such explanations are a bit demeaning to adults who have life experiences that the TABE does not take into account.  There is a huge difference between the 17 year old who completed 10th grade and the 50 year old who held a job for 20 years before the plant closed, and those differences are NOT assessed or accounted for in assessments.

Howard asked if there was one test that was "better than sliced bread". I think the answer to that is "no." No one test will ever give a complete picture. I think that is also the fatal flaw in the NRS. It's data driven only, and data is one-sided.  Data like that can be skewed because not everyone tests well; data can be misleading - students test high or low and it's not a real "indication" of their ability; students deliberately "blow" the test because they don't understand or appreciate the significance of it. There are a lot of factors, it seems to me, that make "standardized" testing flawed, but because of funding issues, these tests are necessary. I think it then becomes equally necessary for instructors to go beyond the "initial" assessment done at an intake session to truly identify the needs and abilities of their students. I think this can be done with one-to-one interviews, surveys, and teacher-made materials.  I think that as a student enters and learns, portfolios of work highlighting their growth are the best assessment of their ability.

I don't think there is an easy answer or solution.

Regards
Katrina Hinson


>>> hdooley@riral.org 07/27/05 10:21 PM >>>


"Help", he says, not quite desperately.  (I have procrastinated, so I am
just a "nonce" from desperation.)

As my program (staff and learners) and fellow practitioners move into
the 21st century of "no adult left behind", trying to meet the
accountability requirements of federal, state, and program parties,
trying to be evidence-based, standards-based, and so on in the jargon of
the moment, we are, as you are, trying to prepare our learners for
post-secondary training/education and for living-wage jobs, and, well,
frankly (as St. Paul said) trying to be "all things to all people so
that some few can be saved".

In that context, I am interested in hearing and/or discussing with folks
the implementation of standardized assessments.  Are they always a
necessary evil?  The devil's due?  Have you found ways to make them
relevant, engaging?

Perhaps (whisper, wink) you are a true believer?  Is the TABE, the
BEST, or the CASAS the best thing since sliced bread?

Don't be shy.  Blast me.  Guide me.  Lurkers, come out and play.
Theorists, practicivists welcome to proselytize. 

Do you reject standardization?  Are you a naturalist?  Please let
me know how to move down the "path not taken."

If your comments are "not ready for prime-time", you can reply privately
to hdooley@riral.org.  Thank you.

Howard L. Dooley, Jr.
Director of Accountability, Project RIRAL
Assessment Team, Governor's Taskforce on Adult Literacy







We could learn a lot from crayons: some are sharp, some are
pretty, some are dull, some have weird names, and all are
different colors...but they all have to learn to live in
the same box.





This archive was generated by hypermail 2b30 : Mon Oct 31 2005 - 09:48:51 EST