This page contains archived content from a LINCS email discussion list that closed in 2012. This content is not updated as part of LINCS’ ongoing website maintenance, and hyperlinks may be broken.
From: "Marie Cora" <email@example.com>
To: Multiple recipients of list <firstname.lastname@example.org>
Date: Tue, 2 Aug 2005 13:53:59 -0400 (EDT)
Subject: [NIFL-ASSESSMENT:1197] RE: high-stakes testing, state/federal

Hi Howard and everyone,

Howard, thank you for your thoughtful post. I would really like to hear from you all out there on your thoughts, suggestions, and comments on Howard's post below. It must get you thinking - share your thoughts with us.

I just want to clarify a couple of things. Howard, you said:

"I was struck by Marie's use of the word 'fairness.' I'm not sure I agree; I would say 'comparability.' I think that's what 'those people' who want -- or mandate -- we use standardized assessments really want."

The purpose of a standardized test is in fact to provide a level playing field - to try to be fair to all who take the test (sorry: broken record!). Fairness and comparability are two completely different things: one is about the purpose of the test (to try to be fair); the other is about what the test is being used for (to compare students, or scores, or programs, or whatever). These are fundamentally different notions, Howard, and I believe you are mixing them up.
It may be true that "those people want/mandate we use standardized tests for reasons of comparability" - but that is completely different from the fact that a test was developed through psychometric methodology to try to capture a body of knowledge from a group of people without bias toward any one of those people. (I'm not saying standardized tests are perfect in their fairness either; I'm just trying to impress that this is the point of the standardization process, and really only that. Try to separate that out in your mind.)

And if you do not administer a test exactly as prescribed (which is an **extremely** important part of testing), then you have removed the fairness aspect (the standardization), and hence any results will not be usable - you will NOT be able to compare students, or scores, or PROGRESS within a student accurately or with any confidence whatsoever. Throw out the standardized administration process and you throw out any comparing as well.

Also, you said: "Some assessing is transitory, highly personal, unique to this learner and that instructor; how would that be standardized? Isn't it, by its nature, unstandardizable?"

Perhaps. Perhaps some of that is actually a monitoring of who that person is, what his needs and goals are, how he interacts with certain materials or people, and what challenges and successes you as the teacher identify with him as you work with him over time. All extremely important stuff to log and keep track of, because it does build a more complete picture of that person. But couldn't you 'standardize' some of the pieces surrounding some of these activities? For example, perhaps the materials or activities used are developed or selected from a set of standards based on your curriculum (or the students' goals); and most important, I would think that you would want to make sure that when interacting with each student, your processes for working on tasks or materials are pretty much the same as for every other student.
Not EQUAL - I don't mean equal. A simplistic example: if you want to check on a person's ability to write a note to a child's teacher, do you let one person write that note at home (where they could get help) while another must do it in the confines of the classroom? That's not fair.

I recently saw a very long list of activities that ESOL students in a high school had to do in order to 'graduate' out of that class (there were something like 30 choices). It was a required final project. There were no guidelines, timeframes, or performance standards. The list included:

- Become an ROTC member
- Start a class newsletter
- Write a letter to a friend in English
- Talk to three strangers on the street and report your experience (it didn't say whether that report was to be oral or written)

Would you say that any of these activities and/or their results could be compared in any meaningful way? Of course not - but the fundamental problem with this final project rests with the fact that none of it is fair to begin with. The teacher may have tried hard to encompass a wide variety of activities so that all her students had something they were interested in or could relate to, but because she was using the activity for a high-stakes purpose, it makes whatever results it produces very unfair.

Ok, I've gone on plenty. Somebody else talk now.

marie

-----Original Message-----
From: email@example.com [mailto:firstname.lastname@example.org] On Behalf Of Howard Dooley
Sent: Monday, August 01, 2005 9:54 PM
To: Multiple recipients of list
Subject: [NIFL-ASSESSMENT:1192] RE: high-stakes testing, state/federal

I really appreciate the discussion, and the varied experiences and points of view. I hope more of us will join in; I'm certainly learning from your thoughts. Marie's recent comment echoes a discussion going on in RI about this same topic.

I was struck by Marie's use of the word "fairness." I'm not sure I agree; I would say "comparability."
I think that's what "those people" who want -- or mandate -- we use standardized assessments really want. And, of course, I have faith (faith is belief in things unseen!) that they want those comparisons to be fair.

A second point: I'm not sure that in a perfect world every assessment would be standardized. Some assessing is transitory, highly personal, unique to this learner and that instructor; how would that be standardized? Isn't it, by its nature, unstandardizable?

Back to the point Marie is making. I agree that some of the dissatisfaction I have read in the discussion seems to stem from wanting or expecting the assessment to do or be things that it's not supposed to do or be. Not all assessment initiates from the learner or the learning situation. Particularly with standardized assessment, the assessment is usually initiated by funders or policy agencies, and it reflects what they want to know and what they value. They are, as it were, the unseen partner in the room and in the learning situation.

It may be that the assessment does not align completely with, or is not encompassed completely by, the learning that is agreed to between an instructor and his student (or that would be happening in the absence of such an assessment). However, that doesn't mean the assessment is unqualifiedly inappropriate, inaccurate, intrusive, non-relevant, and so on. It is what it is for what it needs to accomplish. And I think that it is valid. Nothing more, but nothing less either.

Just as a policy person may look at portfolios, videotapes, or anecdotes and reject them as inappropriate, non-relevant, and so on, for her purposes, instructors often do the same for standardized and even program-mandated assessments that aren't generated from within a specific learning situation. The assessment may identify only a few items or limited skills of value to that other person. It becomes just a few items or skills to be included in the more comprehensive learning situation.
And so I see the need for us, as professionals, to make changes to our learning situations, and to recognize, value, and embed the information which a standardized test provides. We have to value it, or our learners cannot. Are we saying that the limited comprehension skills assessed have no place in the learners' acquisition of higher reading functions? Yes, they are not the totality of reading, but do they have no relevance? It seems to me that we have to embed it, just as we would any other assessment that we do value -- decontextualized workbook, authentic, portfolio, performance.

At the program level, one way this can be done is to make the standardized pre-testing part (again, one part; not the whole) of the diagnostic phase of the learner's experience -- using the assessment to set goals, to target instruction, or to develop specific items in an IEP. Or, instructors may see how assessment areas are related to a core curriculum, and prepare learners for those areas and in the methods of the assessments to come. In either case, or in other ways, instructors and learners would need to be open to expanding their learning to include the ideas, areas, and items that the unseen partner in the learning process values.

Digression: And let me say emphatically that this is why my program absolutely does not use or discuss GLEs with our learners. I agree that they are meaningless for adults. If there is anyone out there who absolutely disagrees, and finds GLEs appropriate and practical, I would like to hear the argument and the examples. Seriously. Someone mentioned the STAR project, which is based on the ARC study, and I have heard that GLEs play a significant role in that program. Maybe someone in that project can write in and offer some insight into the value of GLEs in developing reading skills.

I would also say that I don't see this as only a standardized test issue either.
Whenever a policy decision is made, whether at the federal, state, program, or class level, assessment will be initiated from outside the learner-instructor interaction. For example, at sites where technology is available, RIRAL instructors are required to use that technology as a method with their learners. Learners do not, in general, get to opt out on the basis of not seeing the relevance. So, learners prepare some written work using a word processor, because familiarity with technology has been identified as an important, lifelong learning skill. And so we assess how well learners progress in this area, even though it's not part of the GED test or of beginning ESOL learners' stated goals.

Sorry for the length.

Howard Dooley
This archive was generated by hypermail 2b30 : Mon Oct 31 2005 - 09:48:52 EST