

New Developments in Adult Literacy Assessment (Individual and Large-Scale)
Discussion Transcripts

Guest discussion sponsored by ASRP with John Sabatini and John Strucker


Welcome

Hello everyone,

Welcome to our discussion this week with guest John Sabatini, Senior Research Scientist at the Educational Testing Service. Our topic is:

New Developments in Adult Literacy Assessment (Individual and Large-Scale)

Please read the complete information on this discussion, including guest bio and suggested preparations at:

http://lincs.ed.gov/lincs/discussions/assessment/10AssessDev

Please send your comments and questions to the List now!

Thank you!

Marie Cora

Assessment Discussion List Moderator




Limitations of our tests


Hi John Sabatini,

Thanks again for agreeing to kick off this special series of LINCS discussions sponsored by the Assessment Strategies and Reading Profiles website http://lincs.ed.gov/readingprofiles/index.htm! And, Marie Cora, thanks for moderating!

My question: It's been a common complaint as long as I've been in the field that the assessments available at the program level - both for documenting learners' progress and for identifying their strengths and needs in reading - are inadequate, possibly inaccurate, etc. Several research studies have also noted the limitations of the TABE and CASAS -
e.g., Daryl Mellard and colleagues found evidence that TABE and CASAS appear to be measuring different things and that neither correlates very well with the GED.

Based on your research, to what extent are our complaints justified? And, as we get further into the discussion, I hope you'll comment on what improvements we might hope to see.

Best,

John Strucker


Hi John Strucker,

My sense is that the measures are not as bad as the complaints would suggest, but not as good as they could be. Part of the problem is that most users want and expect more from assessments than they are designed to accomplish. Here's a list of a few points that are likely to engender more discussion:

  1. K12 reading assessments tend to correlate with each other in the .5 to .7 range, depending on sample and study. So, it is not adult literacy comprehension measures alone that are measuring slightly different things/constructs. In fact, TABE, CASAS, and GED are intended to measure different constructs, and their item and task designs reflect these differences. Consequently, there are implications for what proficiencies learners need and what instruction would support those proficiencies.
  2. I'm not sure the field would be happy with, or accept, a uniform, single reading comprehension construct, and our notions of what should go into a good test change with time, so that would result in revised outcome tests as well. A significant problem arises from over-use of the same test forms. Measuring change/gain turns out to be a tricky technical enterprise, and it is especially sensitive to how well forms are equated and kept secure over time.
  3. With respect to component skills (e.g., decoding, word recognition, reading fluency, vocabulary, listening, etc.), it would be helpful if there were measures designed specifically for the adult literacy market. Our experience using a variety of measures that were designed for K12 and other audiences suggests that the psychometric properties tend to hold up, but the normative tables and percentile rankings require an appropriate adult sample. Unfortunately, the most expensive part of test development is typically the national sampling data collection and analysis. I'm not sure the demand adequately justifies creating new assessments rather than trying to repurpose existing ones, or using research or locally developed instruments.
  4. Perhaps the most important issue is: what claims do I want to make about the learner(s), and what decisions do the scores from these tests justify? If the literacy program has as a goal maximizing the number of students who pass the GED, then that should be the target outcome. The question of TABE or CASAS outcomes then shifts to what levels of performance on these tests a) predict passing the GED, and/or b) predict success in GED preparation classes (see the sketch after this list).
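A minimal sketch of the analysis point 4 suggests: estimating the placement-test level at which a learner has even odds of passing the GED. All data below are fabricated for illustration; only the analysis pattern, not the threshold, is the point.

```python
# Sketch: what TABE scale score predicts a 50% chance of passing the GED?
# The scores, pass rates, and fitted threshold here are all fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

tabe = rng.normal(520, 60, size=200)           # hypothetical scale scores
p_pass = 1 / (1 + np.exp(-(tabe - 540) / 25))  # assumed pass probability
passed = rng.random(200) < p_pass              # simulated GED outcomes

model = LogisticRegression().fit(tabe.reshape(-1, 1), passed)

# Scale score at which the fitted model predicts a 50% chance of passing:
cutoff = -model.intercept_[0] / model.coef_[0, 0]
print(f"Estimated 50%-pass threshold: TABE scale score ~ {cutoff:.0f}")
```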

Let me use these statements to kick start some comments, while I ponder what improvements we might expect in the future.

John Sabatini


Hi. My name is Mora Larson. I am a coordinator of a rural Virginia literacy program.

I have a question for Mr. Sabatini. In the adult education program in which I work, the instructors are also the test givers (TABE) and test scorers. If a teacher's performance is tied to the students' scores, what legitimacy do these scores have? Isn't there a huge conflict of interest here? I am assuming, of course, that not everyone is as honest as I am. To give an example, I received a new student whose previous test scores were unbelievably high. This man came to me not knowing his consonant sounds, and yet, according to his last TABE with his previous teacher, he was reading at a 4th grade level. It was impossible. We try to promote gains, for the sake of the student, our program, and our jobs; with the stakes so high, wouldn't it make more sense for someone else to give the test? Just wondering what your thoughts are on that. Thanks.

Mora Larson

Reading Specialist

Charlotte Learning Center, VA


Dear Mr. Sabatini:

Thanks for giving us this opportunity to ask you questions. In addition to the points you've already made, I'd also like to know, in light of the national emphasis on transition, whether there is an instrument or method that is a meaningful predictor of successful entry into college/training. Can a standardized assessment system predict a person's ability to succeed in college? Or is that better measured by the actual outcome - for instance, a passing grade on a college entry-level math or writing course - and would there be a practical way to accomplish this?

KC Andrew

Washington State Board for Community and Technical Colleges

Adult Basic Education


Hello everyone,

I am as curious as KC about a good measurement for college readiness. When I made the transition from Adult Education to the academic side of the college, I found that the students from GED were as prepared for developmental education as the majority of the district's high school graduates. Our college places one third of entering students into developmental education, whether the student has a high school diploma or a GED.

But when the student arrives on the academic side, there is no interest among faculty or administration in assessing a student's weaknesses or strengths, as ABE programs do with the TABE. When I administer the TABE for assessment of knowledge, I find it very helpful in targeting instruction. Until alignment of P-16 takes place, we must teach whoever comes in the door.

I find that placement plus a second assessment is more valuable than a single assessment, but rarely do instructors use any of the assessment data given to them to align their instruction. Adult education does a much better job of targeted instruction, and then it sends students into an academic world that never looks at their abilities. This disconnect has a greater impact on students' lives than the assessment tools used.

Lynda Webb


Greetings John and everyone,

My greatest issue with standardized tests is that they do not measure outcomes but rather, at best, goal attainment. If the goal of a learner is to pass the GED, then whether or not the learner has done so is not an outcome, but rather a marker of having or not having reached the goal.

Without attention being paid to life outcomes (e.g., what is a learner doing as a result of having passed the GED, information that relies on longitudinal tracking), the validity of standardized tests is limited at best, in my opinion. Aspects of validity, such as criterion-relatedness, carry little meaning within the confines of an educational intervention.

I would imagine that any claims you would wish to make about learners would go well beyond such confines.

Michael A. Gyori

Maui International Language School  


Greetings Everyone

This group needs to link with the Transitions discussion list. Some of the same issues are coming up in that group as well.

  1. I do not know if these issues can be resolved. As pointed out, the assessments measure different parameters. On the transitions side, they are looking for an indicator that measures success at the current level and a predictor of the ability to start at the next level. As Michael pointed out, many assessments are a measure of goal attainment. I will add that many assessments are based on skill-level attainment relative to a stated "norm"; i.e., the GED exam is norm-referenced to the skill level of graduating high school seniors, based on accepted benchmarks and standards. This would be a K-12 measure, and correlations have been done to try to predict post-secondary success. This brings us back to: what are the constructs that we are going to measure?
  2. I agree that the field may not be ready to accept a uniform, single reading comprehension construct, but we cannot avoid the issue. I do think that we can agree on "core" concepts. These could then be added to as a state or region deems necessary.
  3. Point 4 is the most important, and this may require us to dump the industrial educational model. Maybe we should look at getting rid of grades altogether and go to a competency-based system? It seems to work with our alternative and adult populations.

Jeff McNeal 

Administration/GED Testing

Education and Training Connection

 


I don't think the tests used in Adult Education (we can choose from TABE, CASAS, WorkKeys and soon GAIN) are meant to assess learner outcomes in the sense of GED, employment readiness, post-secondary, etc. They are one set of tools used to provide information on where the student is starting from and what he/she needs to learn. Now, that "learning" may only be within the framework of the test, but it's a starting point, and part of the job of an instructor is to be able to link test results with student goals.

You may get an indicator of preparedness for the GED based on the level of the test and the score on the test, but there's a whole lot more you want to know before you send that student to GED testing. If her goal is to be a manager in a department store, she may have a way to go if she's taking an E-level TABE.

Since I'm most used to the TABE, I want to know which level of that test the student took, which questions or types of questions he missed, and the score he got on the test. A student could get a scale score of over 600 on an E level TABE, placing him in High Adult Secondary Ed in the NRS, but no way is he ready either for material written at that level, or for GED testing.

As to "core" concepts – are you talking standards? States have been busy developing them – perhaps what is needed here is to look for the commonalities among these to develop agreed-upon concepts at the appropriate levels. I think the standards warehouse is a good place to go to see what states have and how the various benchmarks/indicators align.

Most important I think is to be sure that the test aligns with what you are teaching – not the other way around. If you're using competencies, or standards, the test should align to those as much as possible, not that they align to the test. There's nothing wrong with testing what is taught! Again, a component of the teacher's role is to develop those informal assessments that help him and the student see the progress the student is making. A test like TABE or CASAS can help to confirm that.

Miriam A. Kroeger

Arizona Department of Education/Adult Education


Miriam A. Kroeger wrote: 

I don't think the tests used in Adult Education (we can choose from TABE, CASAS, WorkKeys and soon GAIN) are meant to assess learner outcomes in the sense of GED, employment readiness, post-secondary, etc. They are one set of tools used to provide information on where the student is starting from and what he/she needs to learn. Now, that "learning" may only be within the framework of the test, but it's a starting point, and part of the job of an instructor is to be able to link test results with student goals.

This really is at the heart of this issue and IMHO ["in my humble opinion"] a major reason that many in the field are so frustrated.  

Everyone working in this field is committed to meeting the needs of our learners. CASAS assessments provide a snapshot of what the learner can and cannot do. CASAS assessments provide a wealth of information to aid in the instruction of our adult learners, based on the adult competencies upon which they were developed.

At our learning center in Davenport, Iowa, with few exceptions, our learners want to earn a GED to improve their employment outlook and/or pursue further education/training. There is a significant disconnect between their goals and the hoops we require them to hop through for our funding.

The variability of their motivation and willingness to do their best on a post-test assessment that is meaningless to them, but vital to us, is painfully disconcerting. Our former director had done a study of GED tests alongside CASAS assessments and arranged for GED completion to serve as a proxy post-test. Until this was taken away from us, we did not have as many issues with the NRS.

Since then, we have discovered that being held accountable for learner progress on an assessment that is at best tertiary to their goals, when there are crystal clear goal attainment indicators (GED, enrollment in PSE/training, employment), is an unnecessarily painful, arbitrary measure, especially in light of "performance based funding."

Adult literacy education is a vastly different world from K-12 education. The measurement of our success ought to be authentic to the goals of our learners and how our programs operate, instead of this NCLB/K-12 approach that seemed okay back in 1998 but is proving to be very unwieldy and excruciatingly unjust to our programs.

We are committed to the education of adults who were left behind as children. I pray that reauthorization will allow our focus to return to this commitment instead of spending endless resources trying to meet these arbitrary, artificial accountability guidelines.

Jim Schneider

Asst. Dean, Career Assistance Center

Davenport, IA


Greetings all,

A discussion is currently underway on the Reading & Writing List about learner background knowledge. I believe the discussion is intricately tied, ultimately, to assessment and assessment practices as well (and also to all other lists, so I hope pertinent pieces of this discussion are picked up by them).

Background knowledge is foundational. It mirrors our respective understandings and perceptions of "reality." Each individual's language use and comprehension, in turn, mirrors that knowledge.

If we truly tapped into our learners' background knowledge (whatever that might be), I would not be surprised if students made amazing strides in their educational development (including reading and writing literacy as well as native and second language or dialect development). In fact, my experience is that they would and do. Further, I believe that concurrent strides would be made in our approaches to and understandings of "learning disabilities," not to mention accompanying "teaching disabilities."

All efforts at assessment are half-hearted, at best, if they fail to tap into what is meaningful for the learner. If we, as educators, fail to "awaken" what carries meaning, efforts at truly student-centered instruction and assessment will fall by the wayside, and account for attrition as well as the huge disconnect between learners, their classroom experiences, and our, I'm led to say, self-centered desire, however well-intentioned, to measure learning.

I'd like to add to Miriam Kroeger's comment below, namely that she doesn't "think the tests used in Adult Education (we can choose from TABE, CASAS, WorkKeys and soon GAIN) are meant to assess learner outcomes in the sense of GED, employment readiness, post-secondary, etc."

Passing the GED and readiness for employment or post-secondary education are not, in my opinion, learner outcomes, but rather goals (provided they are indeed learner rather than imposed goals). Learner outcomes need to tell a rich story of what becomes of our students' lives - using EFF terminology as an example, in their roles as parents and family members, citizens and community members, workers, and, I would add, individuals.

Michael A. Gyori
 
Maui International Language School  


A quick reply to both Mora and Michael. First, thanks for participating and for posing interesting questions.

Mora, a small point not central to the question you were asking John Sabatini: the fact that a person got a GE 4 score on the TABE while not knowing the consonant sounds doesn't necessarily mean that his TABE score is questionable. In the ARCS study, as well as in the Harvard Adult Reading Lab, Ros Davidson and I encountered a number of students who could not provide the consonant sounds in isolation, yet were able to read connected text at GE 4. This was especially true of Spanish speakers who were literate in Spanish. Some of these learners appear to jump right into English decoding at the syllable level (because Sp. and Eng. are so similar at the syllable level), bypassing Eng. consonant sounds in isolation. This is not necessarily good for their Eng. pronunciation and spelling, but it works for them - up to a point. And I've also met some native Eng. speakers who were reading disabled and unable to provide the consonant sounds in isolation but who could, with much effort, make sense of GE 4 text. Scores much above GE 4 for either category are much rarer in my experience.

Michael, could you say more about the difference you see between goals and outcomes? As an aside, although each GED recipient's life trajectory will be different (some will reach life-goals and some may not), there is some evidence from NCSALL studies by Murnane and Tyler that a higher GED passing score (especially in the math subtest) is
associated with higher incomes a few years out. [See http://www.googlesyndicatedsearch.com/u/NCSALL?q=Murnane+and+Tyler&sa=NCSALL+Site+Search for a listing of Murnane and Tyler resources.]

John Strucker


John,

What you have said about isolating consonant sounds and yet being able to read at a 4th grade GE is very interesting, but this (non-ESOL) student tested at a first grade reading level on the TABE when he entered my program; and a separate assessment, a running record, also indicated he was reading at a late first grade level. I just wonder if he was so familiar with the TABE E in the previous teacher's class that he scored higher than
his actual level, and then after a long break he became less familiar with it and scored lower? Or more dubiously, did the previous teacher lie on the results? Or did she innocently make a mistake converting the number correct to the scaled score...it is easy to do.
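For what it's worth, the hand-conversion slip described above is easy to guard against in software. Below is a sketch of a table-lookup conversion with bounds checking; the raw-to-scale values are made-up placeholders, since real TABE conversion tables come from the publisher's manuals.

```python
# Sketch: error-proofing a number-correct-to-scale-score conversion.
# The table fragment is a placeholder, not actual TABE values.
RAW_TO_SCALE = {20: 392, 21: 398, 22: 405, 23: 411, 24: 418}

def to_scale_score(number_correct: int, max_raw: int = 50) -> int:
    """Convert a raw score to a scale score, refusing out-of-range input."""
    if not 0 <= number_correct <= max_raw:
        raise ValueError(f"raw score {number_correct} outside 0..{max_raw}")
    if number_correct not in RAW_TO_SCALE:
        raise ValueError(f"no table entry for raw score {number_correct}")
    return RAW_TO_SCALE[number_correct]

print(to_scale_score(22))  # 405
```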

Mora Larson

Reading Specialist

Charlotte Learning Center, VA


Greetings John and everyone,

The difference between "goals" and "outcomes" is one that, in my experience, is both crucial and easily confounded. In my many years in the non-profit sector in a previous life, the failure to understand the difference was also a cause of many grant proposals being turned down, especially by private funders.

Drawing on the example of a GED-preparation program, the goal is to prepare for, take, and pass the GED (of course, with as high a score as possible). In the course of the program, there are both "inputs" and "outputs," which comprise the interactions that occur with other people and text as part and parcel of the directed learning experiences. Tests measure those inputs and outputs in one or more ways.

When a learner passes the GED, he or she will have attained the goal of the program, and for reporting purposes (e.g., the NRS), this "success" will be to the credit of the program or service provider. The need for further assessment will likely cease at that point, or will continue to "measure" progress towards new goals towards which other service providers (say, colleges) attempt to steer their students (say, an Associate's degree). Even so, learning experiences across learning settings are disjointed, and we attempt by means of "studies" to establish levels of correlation between one and the other, i.e., how predictive is performance in one setting of performance in another?

Standardized norm-referenced measures have "internal validity" at best. They measure progress towards and attainment of programmatic goals. The assessments are norm-referenced by choice, because they are designed with the bell curve in mind. As the population of test-takers changes, so do the expectations placed upon them. That is why a high school diploma, an undergraduate degree, or even a graduate degree often carry so little meaning, perhaps especially in the United States. The stereotype of a taxi driver holding a doctoral degree does have an element of reality at its core.

As long as a test fails to capture what becomes of learners in real life, we will continue to miss what, in my opinion, is the "ultimate" question: what becomes of those individuals once they attain (or do not attain) the respective programmatic goals? What kinds of somehow measurable contributions will they have made to their own futures, the societies they live in, and the world at large? Yes, the high-scoring GED test-takers likely end up earning more than the lower-scoring ones. Bluntly, I have to ask: so what? Do they end up doing what they enjoy and what is fulfilling for them and their surroundings? These are outcomes, and they are all post-programmatic.

In the absence of "external validity," or what we might call criterion-relatedness, I take all current efforts at standardized assessments with a huge grain of salt. As long as we fail to capture the results and consequences of educational interventions, we'll only continue to burden our teachers and students with mandates that engender anxiety in both, much to the detriment of a high-quality educational system for more than the few that "rise to the top."

Michael A. Gyori

Maui International Language School  


Morning. 
Mora,

As John Strucker noted, it is possible to score GE 4 on TABE with limited decoding skills (i.e., the ability to sound out novel words), because a learner can compensate with sight word knowledge (typically we find that U.S. educated adults who are very weak decoders still can recognize on sight anywhere from 4000 to 8000 common, frequent words in context), context cues, and problem solving skills.  Also, depending on which form of the TABE the individual took, there is a lot of error in the lower range, so a couple more items right or wrong (i.e., good guessing) can mean the difference between a GE 2.0 and a GE 4.0.  Furthermore, we can have a discussion later about what the grade-equivalent scale means for adults. 
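To see how much good guessing alone can move a raw score, here is a small simulation; the item count and number of choices are assumptions for illustration, not TABE specifications.

```python
# Sketch: spread of raw scores from pure chance-level guessing on
# multiple-choice items. Item count and choices are assumed values.
import random

random.seed(1)
N_ITEMS, N_CHOICES, N_SIMS = 25, 4, 10_000

scores = [sum(random.random() < 1 / N_CHOICES for _ in range(N_ITEMS))
          for _ in range(N_SIMS)]
print(f"pure-guessing raw scores: min={min(scores)}, max={max(scores)}, "
      f"mean={sum(scores) / len(scores):.1f}")
# With these settings, the spread spans roughly a dozen raw-score points:
# a few lucky items, which a low-range conversion table can magnify into
# a large GE difference.
```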

More to the issue you raised, however, yes, it matters that the tests are administered roughly as intended and that the scoring is fair.  There's always a risk that the stakes of the test for the student, teacher, or program will influence the administration and scoring. There's nothing inherently wrong with teachers as test administrators, but the program should provide training and safeguards to ensure fairness for all.

KC,
At present, measures designed for predicting college success (e.g., SAT, ACT) are the measures with the most validity evidence for making such predictions. Those have generally been indicators of success at 4-year colleges and universities.  We and others are currently working on research to examine better indicators of the transition from adult literacy or community college developmental courses into community or 2-year college courses.  Surprisingly, that has not typically been a focus, but the national emphasis you note is having repercussions. 

Tests, of course, have limited, incremental value in predicting success - so other measures are added. Many other personal factors are likely as critical - study habits, time management, persistence, planning, and interpersonal skills. Actual performance in courses is a strong predictor, of course, but the goal of tests is to save time, resources, and cost. From the point of view of the institution, it makes more sense to give a couple-hour test (even with its uncertainty) than to have a student fill a slot in a course only to fail and drop out. Developmental courses in community college essentially serve this function. If you cannot succeed in those courses, you have less likelihood of success in regular course work.
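Here is a sketch of that incremental value: compare the variance explained by a test score alone with the test score plus one of the personal factors above. All numbers are simulated; only the comparison pattern matters.

```python
# Sketch: incremental predictive value of a second predictor (a hypothetical
# persistence rating) over a placement-test score. Data are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 300
test = rng.normal(0, 1, n)     # standardized placement-test score
persist = rng.normal(0, 1, n)  # hypothetical persistence rating
gpa = 0.4 * test + 0.3 * persist + rng.normal(0, 0.8, n)

X1 = test[:, None]
X2 = np.column_stack([test, persist])
r2_test = LinearRegression().fit(X1, gpa).score(X1, gpa)
r2_both = LinearRegression().fit(X2, gpa).score(X2, gpa)
print(f"R^2 test only: {r2_test:.2f}; test + persistence: {r2_both:.2f}")
```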

Michael,
Your question is at the heart of validation of scores and inferences. In my view, the GED is both a goal and an outcome. It is a goal in the sense that it governs a plan of study and skill attainment with a specified target to pass. It is an outcome in the sense that it requires proficiency in skills (e.g., reading ability, problem solving) and knowledge (i.e., content in science, social studies, math) to pass. As a credential, as John Strucker noted, there is research showing some of the complex relationships between getting a GED and seeing other life benefits, such as improved income in the workforce.

Absolutely, we need more research tracking how and whether certain outcomes are linked to certain benefits. That's validity. 

Here's a nice reference:

Tyler, J. H. (2005). The General Education Development (GED) credential: History, current research, and directions for policy and practice. In J. Comings, B. Garner & C. Smith (Eds.), Review of adult learning and literacy: Connecting research, policy, and practice (Vol. 5, pp. 45-84). Mahwah, NJ: Lawrence Erlbaum.

John Sabatini


Thank you very much for your response. I wonder what safeguards might be put in place.

I am interested in what you have to say about what grade equivalents mean to adults. It seems we, adult educators, are constantly debating whether or not to discuss GE with our students. Many believe it is better to speak of scale scores only, and speak to their specific strengths and weaknesses. But then, what do you do if a student asks what his GE is, or comes in knowing his GE from high school?

Mora Larson

Reading Specialist

Charlotte Learning Center


Do any of you feel that using the Diagnostic Assessment of Reading as a measure of a student's skills in decoding, encoding, fluency, and comprehension is valid for adults? Our program serves very low-level literacy students, usually below GE 4 on the TABE, and we use the DAR to provide the point of instruction for our teaching and tutoring. Is this use of the DAR a model that would be acceptable for community-based literacy organizations that serve adult students who are English speaking and native to the US?

Carol Holmquist

The READ Center

Richmond VA




Spelling and Phonics  

A question for John Sabatini, as well as everybody else:  

In several descriptive studies - ARCS, Components and IALS, and the Canadian ISRS - we have noticed a strong correlation between tested spelling ability and tested reading comprehension. I've never been sure what to make of this. Is it just that good spelling indicates good word recognition, and that good word recognition contributes a lot to
comprehension in the ABE population, because so many of the learners have difficulties with word recognition?

Have others noticed a similar connection? For example, I know that Daphne Greenberg and Delores Perin looked at spelling and word recognition, and that their mentor, Linnea Ehri has investigated spelling and word recognition in children's reading.

Practitioners, what are your thoughts about this? Do you assess spelling to probe phonics knowledge? For some other reason? Do you teach spelling, and, if so, what do you see it contributing to learners' reading improvement?

Best to all,

John Strucker


John,

My thoughts on the issue:

I always test my students using Kathy Ganske's Developmental Spelling Assessment (found in Word Journeys: Assessment-Guided Phonics, Spelling, and Vocabulary). I teach spelling using the Words Their Way or Word Study approach, which was designed for children but, I have found, works just as well with adults. It is an analytical phonics program that is cheap to create using manila folders and word cards, and fast and easy to teach in small groups. It is based on twenty years of research and was an integral part of UVA's reading education program. Many public schools in Virginia have embraced it.

From this assessment I can, more often than not, figure out the student's approximate reading level and what phonics elements he has and hasn't mastered, and what he is using but confusing. I can then come up with a game plan for instruction, giving sentences to write using focus patterns and looking for these patterns in reading, and quizzing weekly. My adult students seem to respond well to this because it pays attention to the areas
in which they need improvement, and it helps their writing and reading.

I think that if reading students are taught phonics through this developmental spelling program they will become more fluent readers as a result. There is research to back it up. By "developmental" I mean that readers are categorized into Emergent, Letter-Name, Within-Word Pattern, Syllables and Affixes, or Derivational Relations word study stages; and the theory is that they will move upwards naturally through the stages. I can always guess a student's approximate reading level based on their spelling stage. I do test them with other measures, like word recognition in isolation and in context, comprehension and the TABE to get a more accurate measure of their skills. I also always test phonemic awareness.
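As a simple illustration of that stage-to-level heuristic, one could code it as a lookup; the grade-equivalent ranges below are rough illustrative guesses, not normed values from Ganske's materials.

```python
# Sketch: mapping a developmental spelling stage to a rough reading level.
# The GE ranges are illustrative approximations only.
STAGE_TO_APPROX_GE = {
    "Emergent": "pre-K-1",
    "Letter Name": "1-2",
    "Within Word Pattern": "2-4",
    "Syllables and Affixes": "3-8",
    "Derivational Relations": "5+",
}

def approx_reading_level(stage: str) -> str:
    return STAGE_TO_APPROX_GE.get(stage, "unknown stage")

print(approx_reading_level("Within Word Pattern"))  # 2-4
```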

If I had only five minutes to assess a new student I would definitely choose this spelling inventory over any other measure.

Mora Larson

Reading Specialist

Charlotte Learning Center


Wow, Mora, your approach sounds great! Very thorough. I have also found adults quite willing to work on spelling, because it is an aspect of their literacy that embarrasses them.

John Strucker


Hi all,

This is just my personal opinion, but I believe part of the problem with both comprehension and spelling relates to the "sight word" method of teaching. Sight words are recognized regardless of spelling...spelling and phonics are left behind...at least in terms of the many, many "sight words." They are simply to be known "on sight."

Yet the sight words form the foundations of many non-sight words...and when one does not recognize the word parts...and the sounds they make...spelling suffers. Phonics may be a slower way to start, but it stands one in better stead in the long run when one encounters new written words, and words that look very similar to familiar words. I don't know how many times I've heard students "guess" a word that is similar in structure to the word that is printed...but the meanings are so far apart, it's no wonder that comprehension goes out the window. Whenever students read material that is on the outer edges of their personal vocabulary (which they should often be doing in an educational endeavor), they are left adrift to decode words that they do not recognize on paper, even if they "know" and use the word in speech.

We taught our children phonics. And (as young adults now) they all read very well for meaning. Sight words come even for those who use phonics...with reading. Reading is encouraged when you can understand...and understanding (IMHO ["in my humble opinion"]) comes when you can consistently and correctly decode the words. Adult GED/AEL students know and even use many, many more words than they can correctly and consistently decode from the printed page. They have years of speaking and hearing and doing so with good understanding. What many do not have are years of enjoyment brought by reading...because when the guessed words add up to nonsense...reading stops.

This anecdotal, not study-based, but nonetheless firmly held opinion is brought to you by

Dave Fowler


Such a great discussion of teaching reading and spelling! In her book, English L2 Reading: Getting to the Bottom, Barbara Birch makes the case for combining bottom-up strategies (phonemes, morphemes, letter sounds, etc.) with top-down strategies that teach sight words and meaning.

I really believe that Birch has it right, especially with adult students struggling with literacy. She advocates using sounds and morphemes/graphemes not in isolation, but with the reinforcement of visuals and words in pattern, to slowly develop word recognition, sound recognition, and the ability to write what's heard.

Kat Bennett

ESL Instructor/Learning Needs Coordinator

St. Vrain Valley Adult Education, CO


Sure--a well-rounded approach that incorporates the best of all of the practices we're speaking of makes the most sense for most students--so that they can get the gestalt (pardon my bad German spelling!).

Stephanie Moran 

 




Teaching Reading at the Phonemic Level 


I have been reading all of your comments in this discussion group. I am Kathy Brezina, and I am newer to Adult Education than most of you. I do come to the Adult Ed. program with 32 years of teaching elementary education, especially the primary grades, and with a master's in Reading. In teaching the primary grades, great emphasis is placed on reading instruction. Interestingly, many of the questions you are wrestling with here are ones that primary teachers face also. In my experience and limited research background, mastering sounds in isolation has little correlation with good reading skills. I am not saying that sounds shouldn't be taught, but mastering them is not a must. English has too many exceptions to the rules; word awareness and context clues are sometimes more beneficial. These are just my thoughts.

Mora, I used Words Their Way [see thread entitled ‘Spelling and Phonics'] in elementary school and felt it was excellent. I can definitely see its value for adults in pinpointing weakness in spelling.

Kathy Brezina

Camp Verde Adult Reading Program


Kathy Brezina wrote:
I am not saying that sounds shouldn't be taught, but mastering them is not a must.  

It has been my experience that a straightforward 'phonics' approach (teaching, in particular, that spelling should be attacked by 'sounding out') is sometimes negative in ABE, because many ABE students were switched off while this was going on in their early education, and they are stuck with just this one approach, which they apply with limited success, reinforcing their 'helplessness' vis-a-vis English. Our language is too phonetically irregular for this single-arrow approach and, anyway, fluency consists of switching from a phonic attack to a visual one (text being a visual signal).

In my experience, a degree of phonemic awareness is lacking but necessary in most ABE students, but this is very easily addressed, IMO ["in my opinion"]. (There is nothing wrong with the brain; it's just a matter of training it to use better criteria, 'sharper listening,' in respect of spelling than it customarily does for conversational purposes, where wishy-washy is fine, indeed almost essential.)

Hugo Kerr


I would have to agree with the phonemic awareness piece, especially with ESL students. There are schools of academic thought that de-emphasize the importance of spelling for both ABE and K-12 students. However, from my experience, students pick up new words more readily once phonic mechanics are learned through spelling. I currently teach fourth grade and AEL after work, so...

Kwame A. Mensah


I echo Kathy Brezina's comments regarding teaching sounds in isolation. I come at this question as both an adult educator and as a mother of two very different boys, who are now also adults. First, the boys. Son #2 was an early reader and picked up on spelling rules and exceptions at an exceptionally early age. Son #1, however, was not an early reader and struggled with reading.

This was the era of Whole Language, and every teacher was adamant about letting him be "creative" with his spelling. The result? A 30-year-old who cannot discriminate between weak and week, for example. The long ee sound in isolation was what he heard, but he had no concept of why there could be two vowel combinations that would represent that phoneme. When he hunkered down to simply practice and memorize spelling, his spelling vastly improved.

As a teacher of adult students, I never isolate sounds or morphemes, unless there is a serious misunderstanding. One of the first tenets of adult learning is that the material needs to be relevant and comprehensible. I use word families and words in pattern to establish comprehension of morphemes and graphemes. For lower levels, pictures and
realia are powerful reinforcements!

Kat Bennett  

St. Vrain Valley Adult Education

Longmont, CO


Poor spelling is a red flag for the reading disability dyslexia. Once we introduced a program designed for our learning-disabled adult students, we began to have success in producing students who could progress from 3rd-4th grade reading levels to high school levels in 1-2 years. We are now successfully graduating adult readers for the first time in our 20-year history. All the reading research of the last 30 years confirms that phonics instruction, such as an Orton-Gillingham approach (or an O-G influenced program), should be an integral part of reading instruction for adults. How can one spell if he doesn't know the sounds of consonants and vowels? Unless, of course, he has an outstanding visual memory, which most of our students don't have. They are simply memorizing a string of letters.

When I became knowledgeable in the methods of teaching students with dyslexia and introduced the appropriate programs for those students, we made progress for the first time.

We take students with the lowest reading levels who are not successful in most standard adult basic education programs and use a very intense multi-sensory structured, explicit, and systematic phonics system. If an adult student reads at a 6th grade or higher reading level, we use a different approach.

For our lowest level readers, once they are in the dyslexia program, our students become readers and spellers since the system works together without having to memorize. Our attrition rate for those students and volunteer tutors is less than 1%.

Becky Manning


Greetings Becky and everyone,

Would you kindly elaborate as to what it means for students to become readers and spellers, specifically what they are able to read with understanding and the broader context of their spelling skills, i.e. written expression?

Thank you,

Michael A. Gyori

Maui International Language School  


Our students (any age) enter our program with low-level reading skills, generally around 3rd-4th grade. After completing our reading curriculum, the students can read anywhere from an 8th grade or higher reading level (they do not bottom out). That is confirmed by assessments, but also by the very real daily experiences the student has with text. The student is taught how to break words into syllables for reading and spelling. Once a word is correctly broken into syllables, the student has the tools to decode it, syllable by syllable. Only sight words have to be memorized.

The student spells syllable by syllable and is able to improve his written expression by writing what he wants to convey instead of limiting himself to the one-syllable words he is used to using; of course, that limited type of written composition does not reflect his intelligence. Also, the Franklin Spelling Ace is another tool that is taught and used throughout the series, so that a student has the ability to look up a word; for example, when deciding whether to use ir, er, or ur, he can enter (?r) in the Franklin Spelling Ace #SA-206S.

In short, the student is able to read and understand just about anything he encounters in his daily life. If a student completes all 10 books in the phonics curriculum, they will have learned all there is to know about our English language. The last two books are "Influences of Foreign Languages" and "Greek Words & Latin Roots." The system works
with reading and spelling in tandem. What is read is spelled and vice versa.

I've been through all the goal setting, motivation, relevant materials etc. for adult learners (none of which work) because they don't have the tools to decode and what they really want to do is learn to "READ" in a manner that makes sense to them without relying on their weakness of memorization.

Except for the few students who intuitively learn the English language and can become readers, most students I work with need this type of instruction to become good readers. It's the old phonics vs. whole language vs. balanced curriculum debate, etc. I use what works for my students, and I am producing "readers" who can function at a high level in society. For the most part, these are the students who spent most of their schooling in special education, were written off by the school system, and were made to believe they were dumb and stupid. Every day we prove that isn't true!

Becky Manning

 


This is an old argument over the best way to teach reading, but for people with the type of dyslexia where they do not or cannot distinguish among the 44 English phonemes, we believe that mastering individual sounds is essential; otherwise, they continue to get lost at the multi-syllabic level. Without working at the fundamental phonemic awareness level, all we may be doing is helping students to memorize, not helping them to become independent decoders at an adult level and thus truly independent readers. Many programs have shown the validity of this approach: Lindamood-Bell, Orton-Gillingham, etc.

Stephanie Moran


Stephanie,

Where do you get 44 English phonemes? A phoneme, by definition, is a sound that can change meaning in contrastive distribution within pairs of words, such as /t/ "toe" and /d/ "dough" (/to/ and /do/). I count 38 in American English: 24 consonant sounds, 11 vowel sounds, and three phonemic diphthongs. Also, statistically, Spanish speakers, for example, have many more problems distinguishing and producing sounds at the phonetic level, with allophones that occur in different word environments, even when both English and Spanish have certain phonemic similarities.

Ted Klein


Hi,

I'm very encouraged by the discussion of spelling. What I'd like to discuss is how spelling patterns morph into morphology. What we think of as basic decoding (the Grade 1-2 kind) covers things like consonants, vowels (those crazy, crazy vowels), blends, digraphs, and such. These tend to be serviceable for regular words, not so much for irregular words; they depend on decent phonemic awareness skills; and there are many inconsistencies as you increase the range of vocabulary. There are also the weird little spelling conventions, mostly designed by Celtic monks about a thousand years ago, that we just have to learn (e.g., 'wh', not 'w'; 'battle', not 'batle'; silent e; 'ght'; etc.). According to some experts, this kind of grapho-phonemic knowledge works best for Anglo-Saxon words, which tend to be the shorter, more frequent ones in the language, and therefore more commonly learned in grades 1-3 (e.g., Calfee, 2005). There are regularities, just not hard and fast rules.

But it is important to remember that the English language is also morpho-phonemic (or morpho-graphemic, if you wish). That is, there is a preservation of spelling (and sound patterns) that serve as clues to the meaning. English is a polyglot of language influences with borrowings from around the world. [Have you ever noticed all the cognates when one learns Spanish, German, or French?] When the words are or were adopted, there were often shifts in spelling and shifts in sounds. But there is remarkably a great deal of consistency at this level. The most important influence for many of the content words of academic English and information content reading from 4th grade level on out are of Latin-Greek origin. There you have the classic prefix, root, suffix structure (e.g., structural, constructive), which allows you to identify the syllable structure, form and manage in working memory a fluent pronunciation of often long, multi-syllabic words (indefatigable), infer meanings of similarly structured vocabulary, determine syntactic role (hence read more fluently), etc. So, once you know 'general', you can generalize. Generally. True, roots and affixes can be deceiving and inconsistent as well (e.g., flammable, inflammable), but now you have students studying the language, not merely spelling or phonics. And that's a key to vocabulary growth and reading skill.
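A toy sketch of that prefix-root-suffix analysis appears below; the affix lists are a tiny assumed sample, and real morphological analysis is far richer than this.

```python
# Sketch: naive prefix/root/suffix segmentation with a toy affix inventory.
PREFIXES = ("con", "in", "re", "de")
SUFFIXES = ("tion", "tive", "ture", "al", "ize", "ly")

def split_affixes(word: str) -> tuple[str, str, str]:
    prefix = next((p for p in PREFIXES if word.startswith(p)), "")
    rest = word[len(prefix):]
    suffix = next((s for s in SUFFIXES if rest.endswith(s)), "")
    root = rest[: len(rest) - len(suffix)] if suffix else rest
    return prefix, root, suffix

for w in ["constructive", "construction", "generalize"]:
    print(w, "->", split_affixes(w))
# constructive -> ('con', 'struc', 'tive'); generalize -> ('', 'general', 'ize')
```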

And it is quite a fascinating story (see reference below by the late Richard Venezky, for one).

So, the phonics of early grades is merely the appetizer to the entrée of morphology for the remainder of one's reading life.

Well, we wish it were that simple. In research studies of ABE adults, making progress in helping them to become fluent, automatized word recognizers is challenging, as you all know from your program experience. Still, at least one can be somewhat consoled that the enterprise of learning spelling need not merely be considered 'something you teach to little kids.'

Here are a couple references I enjoy (and the source for whatever I was accurate about above):

Calfee, R. C. (2005). The exploration of English orthography. In T. Trabasso, J. P. Sabatini, D. C. Massaro & R. C. Calfee (Eds.), From Orthography to Pedagogy: Essays in Honor of Richard L. Venezky (pp. 1-20). Mahwah, NJ: Lawrence Erlbaum.

Venezky, R. L. (1999). The American way of spelling: The structure and origins of American English orthography. New York: Guilford Press.

Oh, and if you get a hold of the book that has the Calfee chapter, check out this chapter as well:

Weber, R.-M. (2005). Phonological variation and spelling. In T. Trabasso, J. P. Sabatini, D. C. Massaro & R. C. Calfee (Eds.), From Orthography to Pedagogy: Essays in Honor of Richard L. Venezky (pp. 21-36). Mahwah, NJ: Lawrence Erlbaum.

John Sabatini


Gee, who'd have thought spelling would get this much going!

John Sabatini, I like the way you put it that especially in English, spelling has a strong relationship to comprehension through morphology. [Morphemes being the linguistic units that carry meaning.]

If you've ever watched the National Spelling Bee on TV, you notice that in addition to the pronunciation of a word, the contestants are always given the meaning of the word and its derivation, because both of those, through morphology, can offer clues to the word's spelling. In languages with transparent orthographies (like Spanish), a spelling bee
wouldn't be very exciting (except possibly for 1st and 2nd graders), because in Spanish one ought to be able to spell almost any word once it is pronounced (with a few exceptions for loan words).

While on the subject of spelling bees, when the ace spellers in the National Bee miss a word, it's usually because they chose the wrong vowel to represent the schwa or /uh/ sound, which can be written with any vowel. If you've encountered the word in print, you may be able to recall what vowel is used. If you're not familiar with a word, you can
try to make guesses about what vowel to use for the schwa based on similar words that you have encountered. Usually you've encountered words through wide reading, and we know wide reading builds...that's right...comprehension.

I think that's also why among word reading tests, the WRAT [Wide Range Achievement Test] correlates higher than others with comprehension. As you get into the harder words
on the WRAT, their correct pronunciations become less regular and predictable and more dependent on having encountered them through reading in content subjects. Think of the final word on the old WRAT II, "synecdoche." Until the movie "Synecdoche, NY" came out, for the most part only English majors (and I wasn't one!) knew how to pronounce 
this word correctly. (Synecdoche is a figure of speech meaning "using a part for the whole," as in, "I got my *** out of bed." )

Best to all,

John Strucker


Hugo Kerr wrote:  
My own view was we should relax rather than reform, that we get far too excited about spelling.

GED holistic scoring of the essay provides for the relaxation of the students, to some extent. "We" teachers experience much more excitement about the issue. It took GED testing training and the scoring of pre-GED essays for me to realize how much less time I could spend on teaching spelling, and to concentrate on organizational skills instead.

On the other hand, as soon as students understand the scoring system, they tend to let spelling sail freely. I guess the main teacher's challenge is to find that golden ratio between teaching/relaxing.

Tatyana Exum


Tatyana Exum wrote:  
I guess the main teacher's challenge is to find that golden ratio between teaching/relaxing.

Which is, of course, another reason why a real teacher, like Tatyana, will always be immeasurably better than a script or scripted 'delivery system'. My own experience with ABE students has been that taking at least some of the eye off the spelling ball enabled students to begin to write. Slowly, the spelling thumbscrews can be re-tightened to suit the student's pace and confidence!

Hugo Kerr


Hi,

I'd like to consider again the relationship that John Strucker pointed out between spelling, decoding, word recognition, and reading literacy. Assessments are usually samples of behavior that are clues to underlying knowledge, skills, and abilities. We think that the spelling score (and its correlation to other scores) is signaling that an individual has
spent time in their lifetime paying attention to processing words more fully than others with lower scores. When they read, they attend to the full spelling patterns and learn them. Over time, they have become accurate and efficient recognizers of words, built up their reading vocabulary and are good readers.

The study doesn't tease apart cause and effect. We don't know how students became better spellers. We don't know whether students in the study pay attention to spelling in their own writing. We don't know whether attending to spelling as an end in itself in instruction is going to help them improve their decoding or word recognition or reading or writing. My guess (hope) is that focusing students on spelling during instruction focuses them better on paying attention to the spelling of English words in general as they read (and write), improving their lexical representations of words, which should make it easier to learn more words and read more fluently and efficiently. At least that is the hoped-for outcome.

John Strucker may have a better sense of the type of words and type of errors associated with different levels of performance in studies, and therefore some insight on the topic.

John Sabatini




Reading Rate and Assessment


Hi Marie Cora and John Sabatini,

Marie, thanks for moderating, and John Sabatini, thanks for providing us much food for thought.

I have two more questions for John Sabatini to bring us back to assessment:

Reflecting on the reading assessments you have developed for your reading research projects or that you have contributed to developing for NAAL and PIAAC [Programme for the International Assessment of Adult Competencies] - do any of these have the potential to be useful for either diagnostic testing at the program level or as outcome measures?

Also, you have thought a good deal about reading components measures that incorporate speed. Is this something practitioners should be thinking about in their use of diagnostic assessments?

Finally, thanks to all of you who contributed your ideas to this ASRP [Assessment Strategies and Reading Profiles] -sponsored discussion. Please take a few minutes to visit the new site if you haven't already done so, and let us know what you think.

Best regards,

John Strucker


Reading rate is certainly a critical factor in most assessments, and in all that are timed, as the GED is. We have students who could pass the test--if they had no time constraints. But that isn't realistic in the work/college/tech world, so we work on rate/fluency as well as decoding and vocabulary.

Stephanie Moran


Hi John,

Re: instruments

The NAAL and PIAAC instruments themselves are secured for those purposes. I think they both provide frameworks and models that should be considered for program use, but the actual measures are probably not going to be available for public use.

The measures we have been developing as part of the research we have conducted do have potential for those purposes. Generally, the tests we have developed show strong properties in our studies, but we do not have enough technical information to create norms or otherwise recommend specific valid uses. We continue to work with programs as research partners to collect that kind of evidence. Hopefully, we'll get there.

Re: Speed

In all our research, we generally use speed or rate as a proxy for indicating ease, automaticity, fluency, or efficiency of text processes either at the word, sentence, or passage level. Although there are always - I repeat, there are ALWAYS - exceptions, most skilled adult readers operate within a pretty consistent speed/rate range. This rate
makes a lot of sense based on what we know about working memory and language processing. It is probably no accident that we can read and understand what we are reading at least as fast as we can listen. And if Tom Sticht is still tuned in, he can point to evidence for why we can sometimes read even faster than we typically listen (aud) (because when you speed up speech, we still understand it).

By and large, the world of literacy activities is somewhat calibrated to this rate. We want to read subtitles of movie text in real time; we want to be able to read most of the content in a college course in the time allocated; etc. The rate of skilled readers is around 250 wpm in silent reading, about 175-200 oral rate, and about 200 ms for each word. Now there's a lot of variation around those averages. We slow down when it gets difficult, we stop to think about things, we skip ahead when we know stuff well, we skim or scan (and pick up speed) doing so. However, give an average skilled reader an average everyday text (e.g., newspaper, magazine, popular bestseller novel) and you'll get a reading
rate around that range.
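The arithmetic behind those figures, as a quick sketch (the sample numbers are illustrative only):

```python
# Sketch: words per minute from a timed reading, and the ms-per-word
# equivalent of a given rate. Sample numbers are illustrative only.
def wpm(words_read: int, seconds: float) -> float:
    return words_read / seconds * 60

def ms_per_word(rate_wpm: float) -> float:
    return 60_000 / rate_wpm

print(wpm(500, 120))     # 250.0 wpm: 500 words in two minutes
print(ms_per_word(250))  # 240.0 ms per word at 250 wpm
```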

Several measures, including ones I've developed, take advantage of this relationship to create relatively short measures (i.e., 1-3 minutes of reading words or continuous text), and we get a lot of information about where an individual is relative to a reference group of skilled readers. I consider it a humane version of testing vs. subjecting them to longer measures for roughly the same purpose, which is why it works especially well in large surveys and other purposes in which one is aggregating results for a group.
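Here is a sketch of the comparison such brief measures support: locating an observed rate relative to a skilled-reader reference group. The reference mean and standard deviation below are assumed values, not published norms.

```python
# Sketch: a rate z-score against an assumed skilled-reader reference group.
REF_MEAN_WPM, REF_SD_WPM = 250.0, 40.0  # assumed, not published norms

def rate_z_score(observed_wpm: float) -> float:
    return (observed_wpm - REF_MEAN_WPM) / REF_SD_WPM

print(f"{rate_z_score(140):+.1f} SD")  # -2.8 SD: well below the skilled range
```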

And when considering individual profiles, what we observe when examining the reading rates of adults who struggle in reading is that their max rate for fluency or understanding rarely reaches any of the above rates. We believe the slow rate is a symptom of struggle - they are expending effort decoding or recognizing words. They are overloading working memory, so they are more likely to forget things in short-term storage. Etc. [Unlike us skilled readers, they did not simply slow down to understand the text better or more deeply.] I'm not saying one cannot calibrate one's cognition to slower rates, but that is itself an exceptional achievement. It is probably easier for the brain all around to read at a nice, fluent pace.

But I haven't addressed the question yet. Well, I think we'd like to see that, as an adult learns to improve accuracy in their text processing skills - decoding, word recognition, fluency, and comprehension - they also get faster without losing any accuracy. Accurate + speedy = efficient. So, I think it is important to monitor, and I think the adults themselves would benefit from being efficient readers. But like any skill development (e.g., I play guitar), there's a time for practicing for accuracy, and there's a time for building up one's speed.
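One simple way to operationalize accurate + speedy = efficient is correct items per minute, which rewards speed only when accuracy holds up; this metric choice is an illustrative assumption, not a specific scoring rule from the research above.

```python
# Sketch: efficiency as accuracy times rate (equals correct items per minute).
def efficiency(correct: int, attempted: int, minutes: float) -> float:
    accuracy = correct / attempted
    rate = attempted / minutes
    return accuracy * rate  # simplifies to correct / minutes

print(efficiency(45, 50, 2.0))  # 22.5 correct items per minute
```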

Finally, as the other strand of discussion points out, increases in speed and efficiency are likely to vary with age, practice, and other cognitive factors. So, for practical purposes, it is worthwhile to assess, monitor, and help an individual to gradually improve their own
efficiency (relative to where they started). At the same time, it is important to recognize that some individuals are likely to show progress on this dimension at a different learning rate than others.

A couple more references here. Please visit the ASRP site.

Best,

John Sabatini

Carver, R. P. (1990). Reading rate: A review of research and theory. San Diego, CA: Academic Press.

Carver, R. P. (1997). Reading for one second, one minute, or one year from the perspective of rauding theory. Scientific Studies of Reading, 1(1), 3-43.

Rayner, K. (1997). Understanding eye movements in reading. Scientific Studies of Reading, 1(4), 317.

Rayner, K., Foorman, B. R., Perfetti, C. A., Pesetsky, D., & Seidenberg, M. S. (2001). How psychological science informs the teaching of reading. Psychological Science in the Public Interest, 2(2), 31-74.


Dear John,

But isn't it possible that some people read slowly because they also THINK about what they're reading, rather than just decoding it? Some of the brightest people I know are slow readers, because they're critical readers in this sense. Does this have any place in your schemas?

Forrest Chisman


Hi Forrest,

Absolutely. I noted this in my comment. There is simply a profound difference between 'apparent' slow reading because one is reading and thinking, versus reading slowly because an individual does not have the skills to do otherwise. We have research designs and technologies that help us as researchers to examine the differences. Furthermore, there is no claim that a person who cannot read well therefore also cannot think well. A person can listen and think slowly as well. A person can have no reading skills and think profoundly (at any rate).

But skilled readers do not, as a rule, slow down much or often to decode or to recognize the words. In fact, the conscious experience of reading for most is of meaning only, not the words, spaces, punctuation, and sentences. It is a flow of meaning. One does not want one's thinking interrupted by the words; that is not a helpful type of self-consciousness. So, a slow rate is an indicator that must be validated with other evidence before drawing the conclusion that it is solid evidence of low skill.

Thank you for giving me an opportunity to clarify.

John Sabatini


Hi Forrest,

Adding my two cents to what John Sabatini wrote earlier, Marilyn J. Adams (she of Beginning to Read...) once remarked that she reads rather slowly, a fact that she attributed to being a math wonk originally, meaning that she reads sentences as if they are propositions. One could imagine lawyers, jurists, or scientists reading slowly for similar reasons. Nevertheless, Marilyn's self-described slow reading is still much faster than that of disabled readers (like many ABE learners) who read so slowly that it impedes their ability to understand and think about what they are reading. Moreover, I'd be willing to bet that while these high-functioning slower readers read text more slowly, unlike ABE learners they are capable of decoding words on lists correctly at average or above-average speeds.

Be well, Forrest.

John Strucker


Greetings Forrest and all,

You ask an excellent question, Forrest, about reading speed. I, too, am a slow reader, and often interact with text repeatedly to construct meaning. Yet by all measures, I am a very proficient reader.

The exceptions that John alludes to need to be "heeded" more than in passing. Otherwise, we will continue to run the risk that assessments, especially high-stakes ones, do a gross injustice to some of those being assessed.

Here we have one more reason to go back to the foundational assumptions guiding standardized assessment measures, which, in my opinion, trigger at least as much havoc and anxiety as they capture meaningful pictures of test-takers.

For all the humility in assessment some proclaim, its increasingly blatant exercise of systemic power and control (in education) suggests something less than humility.

Michael A. Gyori


I support Michael's thoughts regarding timed tests. While I agree with the basis for timed tests in principle, I wonder about their validity not only for learners with dyslexia or other reading disabilities, but also for those with emotional/anxiety impairments.

Throughout the years, learners have done VERY poorly on timed assessments even when the instructor and staff believed that the learner was well prepared and knew the material. On many occasions I have discussed the test with the learner and discovered extreme anxiety in the face of a test, especially with time constraints. On several such occasions I have administered an alternative form of the assessment with the instructions that it would not be timed and that they could take breaks whenever the stress/anxiety started, to keep it from escalating.

More often than not, the learner was able to maintain better emotional control and did much better on the assessment. This in my estimation demonstrated that they did indeed have the knowledge/skill to do well.

In some cases this knowledge and the ensuing confidence was all that was necessary for the learner to do well on the next high stakes assessment. For others it took quite a bit of practice to build adequate coping skills to succeed on the high stakes assessment. And finally, others pursued documentation of a disability to obtain the necessary accommodations for the test. Some succeeded and moved on. Others were unable to get the documentation and continue to wallow in the misery.

For this reason, I am opposed to inflicting more strictly administered standardized assessments on our population, assessments created for a fictional world of round pegs. For both youth and adults there are far too many square, triangular, oval, star-shaped, and irregular pegs for whom these assessments are a nightmare and of little use/validity. The majority of the round-peg learners succeeded in the round-peg K-12 system. As a practitioner, I will be much more excited about differentiated means of assessment than about more of the same strictly administered standardized assessments.

Jim Schneider

[See thread entitled 'Test Accommodations' for further discussion]


Hi John Sabatini, 

My question about NAAL and PIAAC wasn't clear. I realize that these tests themselves are secured. I just wondered about the frameworks and approaches they represent and which, if any, of them might find their way into practitioners' hands. For example, did anything learned in designing the components assessments in NAAL suggest ways of improving how we currently assess components at the ABE program level? With regard to both NAAL and PIAAC, should ABE programs be using assessments that contain more real-world literacy items? As an observation, I'm always surprised that the PDQ [PDQ Profile Series], the NALS-like assessment Henry Braun helped to develop for ETS, isn't being used more by programs.

Regards,

John Strucker 


 Hi John Strucker:

The International Adult Literacy Survey (IALS), the new Adult Literacy and Lifeskills (ALL) survey, the National Adult Literacy Survey (NALS) of 1993, and the new 2003 National Assessment of Adult Literacy (NAAL) all used "real world" tasks to assess literacy ability across the life span from 16 to 65 and beyond. Such test items are complex information processing tasks that engage unknown mixtures of knowledge and processes. For this reason it is not clear what they assess or what their instructional implications are (Venezky, 1992, p. 4). The same is true for the CASAS and most other reading tests that present "real world" (and some academic world) complex information processing tasks. It is difficult to know what to teach based on the results.

In the "academic world" while assessing vocabulary as in the TABE is relatively simple in terms of the tasks proposed, it is generally considered to be cheating if one takes the vocabulary words from the TABE and drills students on them. But when one gets a low score on the vocabulary part of the TABE, what should be taught? There does not seem to be a generic approach to teaching vocabulary words without teaching the words themselves or making sure they are used in contexts that the students will encounter a sufficient number of times to learn them.

In general, there appears to be a tendency to think that in assessing reading students should be tested on things they have not been directly taught to assess "generalization" or "transfer of learning" rather than testing to make sure that what was taught was learned.

Strange, huh? 

Tom Sticht 

Venezky, R. (1992, May) Matching Literacy Testing with Social Policy: What Are the Alternatives? Philadelphia, PA: National Center on Adult Literacy.


 

Again, a very thought-provoking post, Tom. To step back, in my view you have raised the issue of the degree to which various assessments and other factors predict life outcomes that we care about.

To depart from adult literacy for a moment, consider college success expressed in terms of graduation rate and GPA. From what I've read, college success is predicted by high school GPA (the result of teachers' criterion-referenced tests), SAT or ACT scores (generalized measures of cognitive ability), and then a host of non-test factors such as gender (females complete at a higher rate than males), what one majors in (engineering majors drop out at a higher rate than, say, communications majors), and whether one has to work a lot of hours while also attending college. Some studies have shown that the non-test factors account for more of the variance in college outcomes than the test factors, but knowing that would not lead us to tell kids to blow off their school grades or SAT scores.

So, it's likely that for adult literacy students their performance in classes (where, IMHO ["in my humble opinion"], teachers don't give them enough criterion-referenced tests on the material taught) and their performance on generalized tests (TABE, CASAS) ought to predict how well they do on the GED, along with the other non-test factors that some of the participants in this discussion have mentioned. And there's some evidence (not as strong as we'd like) linking higher GED scores to improved life outcomes, just as there is evidence linking IALS scores at Level 3 and above to improved life outcomes across all of the OECD [Organisation for Economic Co-operation and Development] countries.

And, by the way, when it comes to transfer, one of the most interesting findings from your research with military personnel was that the literacy improvement that took place in the context of job skills transferred to improvement in more general literacy. It would be nice if today's research community followed up on your work. For example, does that transfer take place equally at all levels of literacy? Or is it greater and more salient at some levels than at others? (As I recall, your work focused on approximately GE 5-ish MOS [military occupational specialty] skills.)

Anyway, thanks so much, Tom, for your participation. You have enriched the discussion.

Warmest regards,

John Strucker


Greetings Tom and everyone,

Your statement that follows certainly resonated:

In general, there appears to be a tendency to think that in assessing reading students should be tested on things they have not been directly taught to assess "generalization" or "transfer of learning" rather than testing to make sure that what was taught was learned.

We find ourselves in quite murky waters here, do we not? At least the content validity of criterion-referenced tests is much easier to establish due to the transparent correspondence between what is taught and what is assessed. Of course, the criteria themselves need to be scrutinized carefully and repeatedly.

I believe that we must tread very lightly when assuming that test items in norm-referenced measures meet the criterion of generalizability. What we "transfer" from an existing knowledge base to a presumed sample of that knowledge (and then we get into constructs!) likely varies as much as the unique snowflakes others have metaphorically alluded to.

There are key differences between the social and physical sciences, with behaviors in the latter being much more predictable (even if deceptively so) than in the former.

 

Michael A. Gyori

Maui International Language School  




Motivation


I come at this question as both an adult educator and as a mother of two very different boys, who are now also adults. First, the boys. Son #2 was an early reader and picked up on spelling rules and exceptions at an exceptionally early age. Son #1, however, was not an early reader and struggled with reading. This was the era of Whole Language, and every teacher was adamant about letting him be "creative" with his spelling. The result? A 30-year-old who cannot discriminate between weak and week, for example. The long ee sound in isolation was what he heard, but he had no concept of why there could be two vowel combinations representing that phoneme. When he hunkered down to simply practice and memorize spelling, his spelling vastly improved.

As a teacher of adult students, I never isolate sounds or morphemes, unless there is a serious misunderstanding. One of the first tenets of adult learning is that the material needs to be relevant and comprehensible. I use word families and words in pattern to establish comprehension of morphemes and graphemes. For lower levels, pictures and
realia are powerful reinforcements!

Kat Bennett

St. Vrain Valley Adult Education

Longmont, CO


Katharine Bennett wrote:
When he hunkered down to simply practice and memorize spelling, his spelling vastly improved.

I have a daughter who spells well and a son who doesn't (both in their 30s now). The daughter, like me, cares about it; the son, like his mother, doesn't. Does anyone think this lack of motivation (with which, I have to say, I have some sympathy) is a relevant effect in the classroom?

Hugo Kerr


Hello Hugo and all,

The effect of motivation or its lack manifests powerfully in a classroom. I, too, have sympathy for a lack of motivation. It is part and parcel of affective issues that we need to heed far more than we currently do.

Michael A. Gyori


I find that a student or my own lack of motivation tends to come from an inability to form an attack plan that will gain me enough benefit for my effort.

Carol King

Fernley Adult Education


I find that most of my students don't need motivation (they have already come to me). What they do need is a program that produces results, quickly. Traditional education doesn't work with most of these students - been there, done that.

Becky Manning


Michael Gyori:

I agree with you, although in this case we mean different things by 'affective' (and my use of the word in my post was lazy).

I meant that there are people who really don't care much about spelling! My wife and son are almost totally indifferent. They take the news that there are, say, two 'm's in accommodation very calmly and then, tomorrow, write 'accomodation' without a care in the world. My daughter and I seem to care and will react to being told about spelling. I have an idea that both views are valid! (Shakespeare spelled really 'badly', as did Queen Liz I [Queen Elizabeth I]. Shakespere even spelled Shakespear different ways. Nobody seemed to mind.)

We had a long debate about spelling reform on another list. My own view was we should relax rather than reform, that we get far too excited about spelling. I know that many adults won't write because they fear their spelling will be ridiculed (and it might well be). It's not a very important aspect of literacy (and may be swept into oblivion soon anyway by txtng of crs ["texting of course"]).

Hugo Kerr


  

Greetings Hugo and all,

My definition of "affect" includes "not caring." How we react to affect is as important as the affect we are reacting to. In the end, a lot of the stuff some, almost obsessively, get stuck on really doesn't matter in my opinion, either.

This is one more reason to point out that stressors and their alleged "measurement" are but one facet of all that goes into living and education...

Michael A. Gyori

Maui International Language School

 




The Qualifications of Special Educators



 

Hello,  

I wish some members would be more careful and/or thoughtful when lumping special educators into a group of teachers who are non-caring, woefully under-prepared, and ready from the get-go to boot students who do not learn at a fast rate.

I have yet to come across more than one teacher like this in special education. I have, on the other hand, come across more college professors who lived this attitude, not only toward someone who had/has a learning disability but toward anyone who took a bit longer to understand material than they were willing to put up with.

I do not lump all college professors in with those peers, though. I am a special educator. I work very, very hard, as most of you do, to teach students to read, write, and gain mathematical skills. I put great thought into IEPs [Individualized Education Plans], I work very hard to have positive relationships with parents/guardians, and I spend almost every weekend putting together lessons and/or writing IEPs and all of the associated forms. I interpret formal and informal assessments; I administer, score, and report on assessments. I chose this profession because I get more out of it than I am able to give. However, it is exactly this attitude that has me second-guessing my choice of professions.

I take personal responsibility for my students making the expected gains, and I work with regular education teachers to see what else can be done to have the student make gains, especially with reading gains and acquisition of skills. I teach in a resource room setting and see students on average for 90 minutes per day.

I am not an anomaly. I am currently at my fourth school, and the special educators I worked with at these schools cared as much as I did, with the exception of one teacher. Most work incredible hours and spend time off learning how to become better teachers and writing the forms needed for IEP meetings, because there is not enough non-teaching time in a week to put together even one meeting. I do not know one special educator who does not spend at least part of one weekend day working. Most, like me, spend most of their weekend researching, putting together lessons, etc.

Please remember how hard we all work: adult educators, college professors, and regular education teachers. It is so offensive to have the comment below written by a fellow educator, of all people.

To put us into a group of people that makes students feel stupid and dumb is irresponsible. I've been tempted to write before to defend my profession, but it was this comment that made me feel so offended that I had to write.

Kathy Moulton


Greetings Kathy and all,

Thank you for feeling free to express your frustrations! I am struck by (and have, regrettably, frequently experienced) the emotion in your message below.

My question remains, prompted by your message (and others' as well): what does it mean for students to become readers and spellers? Specifically, what are they able to read with understanding, and what is the broader context of their spelling skills, i.e., written expression? This is a question I have repeatedly asked of subscribers to the Learning Disabilities list, too, in my quest to get a better idea of what the accomplishments "look like" for those who believe their learners have experienced substantial gains.

I have no doubt whatsoever that [you] take personal responsibility for [your] students making the expected gains and that [you] work with regular education teachers to see what else can be done to have the student make gains, especially with reading gains and acquisition of skills.

What I do remain concerned about are precisely the gains students are expected to make, all the more so given the time allotted for such gains to occur. My issues with federal education policy in particular, which I have repeatedly expressed, bear testimony to that. May these discussions continue without the perception that any group of individuals is being singled out or characterized in any collective fashion, at least from my point of view.

Thank you,

Michael A. Gyori


I do take your point about college professors; most of the time it's in the name of "academic standards," and it might not occur to them (because they were not trained that way) that a student might have a cognitive processing/reading/writing LD. Happily, I've been in and out of "academics," with significant time in community-based education, where we dealt with this problem with adults who most of the time had been undiagnosed. This is a wonderful background for directing a Writing Center, where I can focus on the students. I can attempt to educate faculty, but only to a point. The fear of "dumbing down" is huge; a perceived across-the-board decline in standards is also a problem; and not knowing how to deal with either is a third. I can also educate students to the fact that they can use strategies to adapt to the academic standards of faculty. We do have LD accommodations (as we must), but those are limited to extended time for exams, not to the degree or the pace of language-based performance.

Best,

Bonnie Odiorne

Post University, Waterbury, CT




Test Accommodations


John Sabatini and All,

A couple of assessment related disability concerns:

John Sabatini wrote:  It is probably no accident that we can read and understand what we are reading at least as fast as we can listen.

Given your statement, could you share your thoughts about why there is such resistance to allowing "readers" (a person who reads the reading test out loud) as a test accommodation? It would seem to be moot whether you read or listen to a passage or a question on a test.

Also, you wrote:  In all our research, we generally use speed or rate as a proxy for indicating ease, automaticity, fluency, or efficiency of text processes either at the word, sentence, or passage level.

It is unfortunate that speed/rate is so often the proxy. Are there any alternatives?

Instructors will sometimes focus on simply increasing reading speed and not use reading strategies that are effective for readers who have dyslexia or another disability that affects reading.

On the administrative side, for disability advocates like me, it means a struggle to get extended-time accommodations, because reading speed gets calculated into the response time for test items, which becomes the hard-and-fast test time. I know that some tests have liberalized the granting of extended-time accommodations in the last few years. What are your views on extended-time test accommodations?

I do appreciate the profiles work you are doing at ASRP, because it seems to me to lead toward an understanding of the value of personalizing instruction for the student.

Michael Tate


Hi,

Can you please be specific? Which tests? Who is administering them? What claim about the student is to be made as a result of the test? Who will use the information? For what purpose? Who is resistant? Without knowing the answers, your questions are not easily addressed.

Best,

John Sabatini


Hi, 

I was responding to your comments as a test developer, so I would like to hear your answers in the context of the tests you have developed. I think the listserv would be interested in your answers about the tests that our students encounter - CASAS, TABE, GED - which are often given by teachers and used in the NRS as measures of student success.

I would say that there is broad belief in the ABE world that the score on a CASAS or TABE reading test that was read aloud by a reader is not valid. That belief doesn't seem to hold up, given your comments.

It is difficult to get extended time test accommodations on the GED and TABE tests, especially since our students can't afford the costs of a diagnostic assessment. If these tests hadn't been built using speed as a proxy, it would be much easier to get extended time accommodations. 

I hope that helps.

Michael Tate


Hi,

Thanks for some clarification.

I'm sure the readers here know a lot about test accommodations, and I don't think reviewing all the technical aspects is feasible here. So let me make a few points concerning the 'speed as a proxy' issue, as well as 'listening' as a test accommodation, validity, and fairness in reading comprehension tests. At the bottom is a research report from a colleague of mine on the topic.

Let me quickly clarify: reading speed or rate is typically used to assess reading fluency or word recognition fluency/efficiency. That is, we only use speed or rate when it is directly implicated by the construct. In those constructs, speed/rate is part of what we intend to measure. Generally, reading comprehension tests do not set time limits for the purpose of capturing any proxy of speed as part of the construct. In our research, we DO NOT set time limits to get a speed proxy on overall comprehension tests that require a great deal of thinking, reasoning, inferencing, synthesizing, etc. We are not requiring individuals to think quickly. It is important to understand the difference between the constructs.

On comprehension tests, test designers typically set time limits for practical purposes. Very few stakeholders want tests that can go on forever; they are impractical and inconsiderate. Usually the time limit is set during field testing as reasonable for most students (and stakeholders). This becomes a standardized administration that makes the test fair and consistent for most test takers (and a target that they can prepare for), and it allows the normative numbers to have a similar interpretation across test takers. In short, the time limits are generally based on the time that most test takers would typically require to complete the test.

Time accommodations are provided for individuals for whom the time limit seems unfair, in the sense that with more time they would be better able to show their true ability. The most common research design for testing whether and when time accommodation would help is the differential boost. As a rule, most test takers complete a test within the time limits and giving them more time does not significantly improve their performance. (In fact, one often finds that some students do improve a little, some actually get worse scores, and so the mean impact is null. For most, they just do not use the extra time, because they are already done.)

On the other hand, if you select a subpopulation with a specific disability and provide them with extra time, and their scores as a group improve (but typical students show no significant improvement), then you have the case for a time accommodation - a differential boost. You have evidence that the subgroup did not have sufficient time to show their true ability. But giving extra time to everybody is (a) potentially expensive, and (b) changes somewhat the meaning of score scales that were originally administered under time limits. So, the accommodation is restricted to those individuals who show cause. Furthermore, if the user of the test scores assumed that the individual could handle comprehension of materials in a time frame consistent with the time limits, then they would be wrong about those students who always need more time with text to perform better.

If, during test development, everybody is given as much time as they want, then the interpretation of scores will be the same for everybody and it wouldn't matter - though it would perhaps be a burden on the administrator. So, as a rule, one has to honor the administration guidelines provided by the test maker OR one cannot interpret scales and normative scores in the same way. That would be potentially unfair to the individuals who took the test under standardized conditions. One doesn't always need the scales/norms, in which case one can interpret as one wants - it just is not sanctioned by the validity evidence of the test producers.
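To make the differential boost logic concrete, here is a minimal Python sketch with made-up scores; the groups, numbers, and the simple between-groups comparison are illustrative assumptions, not data or methods from any actual accommodation study.

from statistics import mean

def boost(standard_time_scores, extended_time_scores):
    """Mean score gain attributable to extended time."""
    return mean(extended_time_scores) - mean(standard_time_scores)

# Hypothetical group scores (illustrative numbers only).
typical_standard  = [78, 82, 75, 80]
typical_extended  = [79, 81, 76, 80]   # essentially no change with extra time
subgroup_standard = [55, 60, 58, 52]
subgroup_extended = [68, 71, 70, 66]   # clear improvement with extra time

differential_boost = (boost(subgroup_standard, subgroup_extended)
                      - boost(typical_standard, typical_extended))
print(differential_boost)  # large positive value -> evidence for the accommodation

A large positive difference (here 12.25 points) is the pattern described above: extra time helps the subgroup but does essentially nothing for typical test takers.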

Now, I'd ask you to check what I'm about to say (because I don't have time to look right now), but none of the tests mentioned below (CASAS, TABE, GED), to the best of my knowledge, uses speed as a proxy for the construct of reading comprehension (or Language Arts-Reading in the case of GED). Look carefully, but it is unlikely you will see a statement about the reading construct saying that student proficiency in reading is directly tied to the ability to complete the test in a certain time. The time limits are set for the practical reasons cited above, and therefore extra-time accommodations are appropriate with cause. As mentioned, however, norms and scale scores were set under the assumption of the test time limit, so to use them fairly for all, standardized procedures are followed.

Of the tests you mention, GED is relatively high stakes and individual based. That is, it is a credential recognized by employers, education institutions, etc. Users of the scores expect a passing score to signal a particular level and type of proficiency. Furthermore, there is a significant expense of time, resources, and funding to provide extra time to everyone who might ask. Consequently, extra time is regulated, but commonly awarded as appropriate.

For the TABE and CASAS, for the purposes of group NRS reporting, the stakes are relatively low for the individual, so the information is better for policy and program decisions (based on aggregated group scores) if everyone takes the tests under standardized conditions. When the test is used for other individual decisions, such as graduating an individual to the next level of the program, the issue is whether the test results are useful for that purpose. If the next level, for example, requires much more reading with much less time to complete between classes, then individuals who can only read well with a lot of extra time will find themselves at a disadvantage. So, in this case, one would need not only to accommodate the test session, but also to provide a time accommodation in the instructional program (or you are unfairly putting a burden on an individual who cannot keep up with the demands). The problem arises when you provide extra time during the test, but not in the subsequent context that the test was supposed to inform.

For listening accommodations in a reading test, the issues are even more complicated, but almost all of the complication surrounds the claims, purposes, and uses of the test scores. One has to presume that if a 'reader' is necessary during the test administration, then a 'reader' will be necessary whenever the individual faces print in the environment, workplace, training, or academic course work. If the individual has a 'reader' at all times for all those circumstances, the test is a valid indicator of their performance in those settings. If the provider of the literacy environment is expected to provide accommodations to assist that individual (e.g., all-electronic texts so an automatic reader is available), and that is not communicated as part of the test scores, then the results are potentially misleading. That is why the accommodation is usually reserved for certain populations with documented needs, not everybody.

I appreciate having had the opportunity to share some exchanges within the forum.
Thanks to John Strucker and Marie Cora for inviting me and for their support.

Best,

John Sabatini

See:

Cahalan-Laitusis, C., Cook, L., Cline, F., King, T., & Sabatini, J. P. (2009). Examining the impact of audio presentation on tests of reading comprehension.  ETS Research Report.




Working Memory and Age



 

Both Johns and all: In a paper on the component skills of reading and IALSS [International Adult Literacy and Skills Survey] scores, the authors (including John Strucker and others) reported a correlation coefficient of +.83, the highest correlation with prose scores among the various components measured (vocabulary, real- or pseudo-word reading, digit span). But interestingly, the last of these, digit span, which some may think of as a measure of working memory, was the next most highly correlated (+.69) with the IALSS prose scores.

I'm wondering if this might explain, in part, why prose scores on the IALSS (and NAAL [National Assessment of Adult Literacy]) tend to drop with age. Much research seems to indicate that working memory declines with age (after controlling for differences in education across the age span). Are there implications in these findings for the differential assessment of reading/literacy for different age groups?

Tom Sticht


Hi Tom,

Very important point. There are going to be implications for different age groups, both cognitive and affective. Interestingly, when you examine the technical and norms manuals for standardized tests that are administered to K12 learners, they have increments for every year and sometimes every month. However, when one looks at the adult norms, they often block all adulthood (16-90) into one sample, or use somewhat arbitrary blocks (18-22), or ten-year blocks (20-30, 30-40). I doubt norms for every year will ever be necessary for adults; however, a single block is also not sufficient.
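As a small illustration of what coarse adult norm bands mean in practice, here is a minimal Python sketch; the band boundaries and labels are invented for illustration, not taken from any published norms manual.

# Hypothetical adult norm-table granularity: K12 norms often increment by
# year (or month), while adult norms lump broad age bands together.
ADULT_NORM_BANDS = [           # (low_age, high_age, band_label) - invented values
    (16, 25, "16-25"),
    (26, 45, "26-45"),
    (46, 90, "46-90"),
]

def adult_norm_band(age: int) -> str:
    """Return the (coarse) adult norm band an examinee falls into."""
    for low, high, label in ADULT_NORM_BANDS:
        if low <= age <= high:
            return label
    raise ValueError(f"age {age} is outside the normed range")

print(adult_norm_band(30))   # '26-45': a 30-year-old and a 44-year-old share one band

The design problem is visible in the lookup itself: with bands this wide, two adults decades apart in age are compared against the same reference group.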

As you note, national surveys show a decline in literacy skills with age across national and international populations. Unfortunately, the age effect there is confounded with cohort, so we cannot separate the history effect (i.e., a smaller proportion of adults were provided quality education in the 1920s) from a decline in skills. However, the literature that looks more closely at the cognitive implications of aging, including longitudinal study, seems to tell a relatively consistent story. As one ages, raw speed and memory-type cognitive capacities (often referred to as fluid intelligence) begin a gradual decline. Overall ability, however, is maintained and even enhanced by the compensatory effects of knowledge and strategic management of those resources. But that story works for individuals who acquired literacy and knowledge in their youth. About those who reach an advanced age low literate, low skilled, and with less accumulated knowledge to draw on to compensate, we know less. The hopeful news I've seen in the aging literature is that there seems to be capability to learn at any age (old dogs, new tricks) - just not with the same learning strategies that may work with the young (who have working memory capacity and other resources to draw on).

The whens and wheres of aging are going to be an important area of research, and hopefully assessment tools will be adjusted accordingly. I'll throw in a couple more references from the Smith and Pourchot volume on adult learning and development, but Tom, I expect you could share a few good references and a bit more discussion of the topic with the list, as I expect you have further insights.

Best,

John Sabatini

Smith, M. C., & Pourchot, T. (Eds.). (1998). Adult learning and development: Perspectives from educational psychology. Mahwah, NJ: Lawrence Erlbaum.

Meyer, B. J. F., & Talbot, A. P. (1998). Adult age differences in reading and remembering text and using this information to make decisions in everyday life. In M. C. Smith & T. Pourchot (Eds.), Adult learning and development: Perspectives from educational psychology (pp. 179-200). Mahwah, NJ: Lawrence Erlbaum.

Ackerman, P. L. (1998). Adult intelligence: Sketch of a theory and applications. In M. C. Smith & T. Pourchot (Eds.), Adult learning and development: Perspectives from educational psychology (pp. 145-158). Mahwah, NJ: Lawrence Erlbaum.

Salthouse, T. A. (1990). Working memory as a processing resource in cognitive aging. Developmental Review, 10(1), 101-124.

Salthouse, T. A. (1991). Theoretical perspectives on cognitive aging. Hillsdale, NJ: Lawrence Erlbaum Associates.


John Sabatini and all: Thanks, John, for your comments on working memory and literacy. I have thought about this situation with adults and am glad to see that you think some new developments in assessment and instruction might be useful for adult literacy needs assessment and the planning and conduct of instruction. Following below is a brief note I wrote in 2007 about this issue, centering on working memory (but not exclusively) and adult literacy at various ages of the lifespan. As you note, the issue of an aging society is important, especially its effects on economics (especially health-related costs) and global competitiveness (a shrinking skilled workforce)! Are any activities going on at ETS around these issues?

Tom Sticht

March 13, 2007

Fluid and Crystallized Literacy: Implications for Adult Literacy Assessment and Instruction

 

Psychometric research on intelligence over the last half century has resulted in a trend to draw a distinction between the knowledge aspect and the processing-skills aspect of intelligence. Beginning in the 1940s and continuing up to the 1990s, the British psychologist Raymond B. Cattell, various collaborators, and later many independent investigators made the distinction between "fluid intelligence" and "crystallized intelligence." Cattell (1983) states, "Fluid intelligence is involved in tests that have very little cultural content, whereas crystallized intelligence loads abilities that have obviously been acquired, such as verbal and numerical ability, mechanical aptitude, social skills, and so on. The age curve of these two abilities is quite different. They both increase up to the age of about 15 or 16, and slightly thereafter, to the early 20s perhaps. But thereafter fluid intelligence steadily declines whereas crystallized intelligence stays high" (p. 23).

Cognitive psychologists have re-framed the "fluid" and "crystallized" aspects of cognition into a model of a human cognitive system made up of a long-term memory, which constitutes a knowledge base ("crystallized intelligence") for the person; a working memory, which engages the various processes ("fluid intelligence") going on at a given time, using information picked up from the long-term memory's knowledge base; and a sensory system, which picks up information from the external world that the person is in. Today, over thirty years of research has validated the usefulness of this simple three-part model (long-term memory, working memory, sensory system) as a heuristic tool for thinking about human cognition (Healy & McNamara, 1996).

The model is important because it helps to develop a theory of literacy as information processing skills (reading as decoding printed to spoken language) and comprehension (using the knowledge base to create meaning) that can inform the development of new knowledge-based assessment tools and new approaches to adult education.

The International Adult Literacy Survey (IALS), the new Adult Literacy and Lifeskills (ALL) survey, the National Adult Literacy Survey (NALS) of 1993, and the new 2003 National Assessment of Adult Literacy (NAAL) all used "real world" tasks to assess literacy ability across the life span from 16 to 65 and beyond. Such test items are complex information processing tasks that engage unknown mixtures of knowledge and processes. For this reason it is not clear what they assess or what their instructional implications are (Venezky, 1992, p. 4).

Sticht, Hofstetter, & Hofstetter (1996) used the simple model of the human cognitive system given above to analyze performance on the NALS. It was concluded that the NALS places large demands on working memory processes ("fluid intelligence"). The decline in fluid intelligence may account for some of the large declines in performance by older adults on the NALS and similar tests. To test this hypothesis, an assessment of knowledge ("crystallized intelligence") was developed and used to assess adults' cultural knowledge of vocabulary, authors, magazines, and famous people. The knowledge test was administered by telephone; each item was separate and required only a "yes" or "no" answer, keeping the load on working memory ("fluid intelligence") very low.

Both the telephone-based knowledge test scores and NALS door-to-door survey test scores were transformed to standard scores with a mean of 100 and a standard deviation of 15. The results showed clearly that younger adults did better on the NALS with its heavy emphasis on working memory processes ("fluid literacy") and older adults did better than younger adults on the knowledge base ("crystallized literacy") assessment that was given by telephone.
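For readers who want to see the rescaling Sticht describes, here is a minimal Python sketch; the raw scores are made-up placeholders, not data from the NALS or the telephone survey.

from statistics import mean, pstdev

def to_standard_scores(raw_scores, new_mean=100.0, new_sd=15.0):
    """Rescale raw scores to z-scores, then onto the target mean and SD."""
    m, sd = mean(raw_scores), pstdev(raw_scores)
    return [new_mean + new_sd * (x - m) / sd for x in raw_scores]

raw = [12, 18, 25, 31, 40]   # made-up raw test scores
print([round(s, 1) for s in to_standard_scores(raw)])  # scores centered near 100

Putting both instruments on the same mean-100, SD-15 scale is what makes the age-group comparison across the two assessments interpretable.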

Consistent with the foregoing theorizing and empirical demonstration, Tamassia, Lennon, Yamamoto, & Kirsch (2007) report data from a survey of the literacy skills of adults in the Adult Education and Literacy System (AELS) of the United States. Once again they found that performance on the literacy tasks declined with increased age; that is, the higher the age of the adults, the lower their test scores became. They state that "the negative relationship between age and performance is consistent with findings from previous studies of adults (i.e., IALS, ALL, and NAAL; NCES 2005; OECD and Statistics Canada 2000, 2005)." They go on to say, "Explanations of these previous findings have included (a) the effects of aging on the cognitive performance of older adults, (b) younger adults having received more recent and extended schooling, and (c) the finding that fluid intelligence may decrease with age causing older adults to have more difficulties in dealing with complex tasks (Douchemane and Fontaine 2003; OECD and Statistics Canada 2000, 2005)" (p. 107).

Strucker, Yamamoto, & Kirsch (2005) assessed short-term working memory for a sample of adults who also completed Prose and Document literacy tasks from the IALS. They found a positive relationship between performance on the working memory task and the literacy tasks, showing that adults with better short-term memories performed better on the IALS. Again, this is consistent with the idea that the literacy tasks involve a complex set of skills and knowledge, including the capacity to manage information well in working memory, or "fluid literacy."

Given the differences between younger and older adults in "fluid literacy" and "crystallized literacy," there is reason to question the validity of using "real world" tasks like those on the Prose, Document, and Quantitative scales of the IALS, ALL, NALS, and NAAL to represent the literacy abilities of adults across the life span. In general, when assessing the literacy of adults, it seems wise to keep in mind the differences between the short-term working memory or "fluid" aspects of literacy, such as fluency in reading with its emphasis upon efficiency of processing, and the "crystallized" or long-term memory, knowledge aspects of reading.

It is also important to keep in mind these differences between fluid and crystallized literacy in teaching and learning. While it is possible to teach knowledge, such as vocabulary, facts, principles, concepts, and rules (e.g., Marzano, 2004), it is not possible to directly teach fluid processing. Fluidity of information processing, such as fluency in reading, must instead be developed through extensive, guided practice. Though I know of no research on this theoretical framework regarding the differences between fluid and crystallized literacy and instructional practices in adult literacy programs, it can be hypothesized that all learners are likely to make much faster improvements in crystallized literacy than in fluid literacy, and this should be especially true for older learners, say those over 45 to 50 years of age.

References

Cattell, R. (1983) Intelligence and National Achievement. Washington, DC: The Cliveden Press.

Healy, A. & McNamara, D. (1996) Verbal Learning and Memory: Does the Modal Model Still Work? In J. Spence, J. Darley, & D. Foss (Eds.), Annual Review of Psychology, 47, 143-172.

Marzano, R. J. (2004, August). Building Background Knowledge for Academic Achievement: Research on What Works in Schools. Washington, DC: Association for Supervision & Curriculum Development.

Sticht, T., Hofstetter, & Hofstetter (1996) Assessing Adult Literacy by Telephone. Journal of Literacy Research, 28, 525-559.

Strucker, J., Yamamoto, K. & Kirsch, I. (2005, May). The Relationship of the Component Skills of Reading to Performance on the International Adult Literacy Survey (IALS). Cambridge, MA: National Center for the Study of Adult Learning and Literacy.

Tamassia, C., Lennon, M., Yamamoto, K. & Kirsch, I. (2007). Adult Education in America: A First Look at Results From the Adult Education Program and Learner Surveys. Princeton, NJ: Educational Testing Service.

Venezky, R. (1992, May) Matching Literacy Testing with Social Policy: What Are the Alternatives? Philadelphia, PA: National Center on Adult Literacy.

Thomas G. Sticht

International Consultant in Adult Education

El Cajon, CA