This page contains archived content from a LINCS email discussion list that closed in 2012. This content is not updated as part of LINCS’ ongoing website maintenance, and hyperlinks may be broken.
From: "Howard Dooley" <firstname.lastname@example.org>
Date: Mon, 1 Aug 2005 21:54:33 -0400 (EDT)
To: Multiple recipients of list <email@example.com>
Subject: [NIFL-ASSESSMENT:1192] RE: high-stakes testing, state/federal

I really appreciate the discussion, and the varied experiences and points of view. I hope more of us will join in; I'm certainly learning from your thoughts.

Marie's recent comment echoes a discussion going on in RI about this same topic. I was struck by Marie's use of the word "fairness." I'm not sure I agree; I would say "comparability." I think that's what "those people" who want -- or mandate -- that we use standardized assessments really want. And, of course, I have faith (faith is belief in things unseen!) that they want those comparisons to be fair.

A second point: I'm not sure that in a perfect world every assessment would be standardized. Some assessing is transitory, highly personal, unique to this learner and that instructor; how would that be standardized? Isn't it, by its nature, unstandardizable?

Back to the point Marie is making. I agree that some of the dissatisfaction I have read in the discussion seems to stem from wanting or expecting the assessment to do or be things that it's not supposed to do or be. Not all assessment initiates from the learner or the learning situation. Particularly with standardized assessment, the assessment is usually initiated by funders or policy agencies, and it reflects what they want to know and what they value. They are, as it were, the unseen partner in the room and in the learning situation.

It may be that the assessment does not align completely with, or isn't encompassed completely by, the learning agreed to between an instructor and a student (or that would be happening in the absence of such an assessment). However, that doesn't mean the assessment is categorically inappropriate, inaccurate, intrusive, irrelevant, and so on. It is what it is for what it needs to accomplish. And I think that it is valid. Nothing more, but nothing less either.

Just as a policy person may look at portfolios, videotapes, or anecdotes and reject them as inappropriate, irrelevant, and so on, for her purposes, instructors often do the same for standardized and even program-mandated assessments that aren't generated from within a specific learning situation. The assessment identifies a few items or limited skills of value to that other person. It becomes just a few items or skills to be included in the more comprehensive learning situation.

And so I see the need for us, as professionals, to make changes to our learning situations, and to recognize, value, and embed the information that a standardized test provides. We have to value it, or our learners cannot. Are we saying that the limited comprehension skills assessed have no place in the learners' acquisition of higher reading functions?
Yes, they are not the totality of reading, but do they have no relevance? It seems to me that we have to embed it, just as we would any other assessment that we do value -- decontextualized workbook, authentic, portfolio, performance.

At the program level, one way this can be done is to make the standardized pre-testing part (again, one part; not the whole) of the diagnostic phase of the learner's experience -- using the assessment to set goals, to target instruction, or to develop specific items in an IEP. Or instructors may see how assessment areas relate to a core curriculum, and prepare learners for those areas and in the methods of the assessments to come. In either case, or in other ways, instructors and learners would need to be open to expanding their learning to include the ideas, areas, and items that the unseen partner in the learning process values.

Digression: let me say emphatically that this is why my program absolutely does not use or discuss GLEs with our learners. I agree that they are meaningless for adults. If there is anyone out there who absolutely disagrees, and finds GLEs appropriate and practical, I would like to hear the argument and the examples. Seriously. Someone mentioned the STAR project, which is based on the ARC study, and I have heard that GLEs play a significant role in that program. Maybe someone in that project can write in and offer some insight into the value of GLEs in developing reading skills.

I would also say that I don't see this as only a standardized-test issue. Whenever a policy decision is made, whether at the federal, state, program, or class level, assessment will be initiated from outside the learner-instructor interaction. For example, at sites where technology is available, RIRAL instructors are required to use that technology as a method with their learners. Learners do not, in general, get to opt out on the basis of not seeing the relevance. So learners prepare some written work using a word processor, because familiarity with technology has been identified as an important lifelong learning skill. And so we assess how well learners progress in this area, even though it's not part of the GED test or of the ESOL beginning learners' stated goals.

Sorry for the length.

Howard Dooley