Understanding Adult Literacy Growth with Various Measures and Time Scales

Discussion Summary

Discussion Announcement | Guest Facilitator

From May 2nd through May 6th, 2011, Dr. Stephen Reder hosted a discussion on the Reading and Writing Skills Discussion List about the different ways we can assess literacy growth. His discussion broadened the traditional definitions and assessments used in most adult basic education programs. At Portland State University, Dr. Reder worked on the Longitudinal Study of Adult Learning (LSAL). This study followed about 1,000 high school dropouts over roughly 10 years, providing repeated measures of issues such as changes in individuals’ educational and occupational goals and experiences, literacy skills, uses of literacy, and social and economic status.

Thanks to Chris Miller, a graduate student at Georgia State University, the following represents a compilation of the various topics discussed by discussion list members and Dr. Stephen Reder. Each topic contains one or more discussion threads arranged by questions and answers. All of Steve’s questions and comments are labeled with his name, while questions and comments from listserv members are labeled with first and last initials. Most of the postings were copied and pasted verbatim, with a few words edited here and there to facilitate reading. For complete postings, along with author information, go to the Reading and Writing Skills 2011 Archives and look at postings between May 2nd and 6th, 2011.

Introduction

Steve: Thanks for inviting me to facilitate a discussion about different ways of measuring adult literacy growth. I’m looking forward to our dialogue this week. My perspective is rooted in the Longitudinal Study of Adult Learning (LSAL), a NCSALL (National Center for the Study of Adult Learning and Literacy) project I spent 10 years working on with Clare Strawn and other colleagues. Although LSAL has many interesting findings about a variety of adult literacy topics, this week’s discussion will focus on some key findings related to measuring adult literacy growth and program impact.

As you read these opening comments, think about how you observe literacy growth in your students or other adults you work with. What kinds of measures would capture or reflect the literacy development you see? What kinds of time scales are involved in that growth? How well do the standardized test scores that your program collects for accountability purposes capture the literacy growth you see in your students?

LSAL followed a random sample of about 1,000 high school dropouts over nine years, periodically conducting in-home interviews and literacy assessments of various types – standardized tests, component skill measures, measures of everyday literacy practices, self-reported changes in literacy, and so forth. Because LSAL tracks these multiple measures of literacy over long time periods, it provides a richer set of lenses to examine literacy growth
and program impact than we usually have to work with as practitioners, administrators or researchers. Most often our focus is limited to relatively short-term changes that can be seen over the span of months that students usually spend in our programs rather than over the span of many
years that can be seen in LSAL.

LSAL’s view of adult literacy growth and program impact is also broader because it looks at changes in both program participants and non-participants, in contrast to the more limited perspective we usually have looking only at program participants. By carefully comparing the
changing experiences and skills of those who go through programs at various points in time with those who do not, we can develop a sharper view of the long-term impact that programs have on the adults they serve.

Changes over time in LSAL adults’ functional literacy test scores (specifically, the Educational Testing Service’s (ETS) Tests of Applied Literacy Skills) look similar to what we typically see in programs’ test score gains: modest gains over relatively short time periods such as a year. These gains are usually interpreted as program impact, but LSAL findings challenge this interpretation: comparable non-participants show similar gains over comparable time periods. Similar comparisons of literacy practices measures show something quite different: substantially larger gains in program participants than in non-participants over relatively short time periods. LSAL thus provides clear evidence of short-term program impact on measures of literacy practices but not on literacy proficiency measures.

These findings are important because many programs use short-term test score gains on proficiency measures for accountability and program improvement purposes. LSAL findings indicate these proficiency measures may not be good indicators of ongoing program impact or effectiveness. For the relatively short intervals in which individuals usually participate in
programs, measures of literacy practices may be much better indicators of ongoing program impact. This fits with instructional research that shows direct connections between some classroom activities and increased engagement in literacy practices.

Does this mean standardized proficiency tests are not important in adult literacy education? They remain important. In LSAL data as elsewhere, these scores (and improvement in them over time) are closely related to adults’ employment, earnings and other social and economic outcomes. Do
LSAL findings mean that programs do not have impact on standardized proficiency measures? Not at all. Other analyses in LSAL confirm what I call “practice engagement theory”: higher levels of engagement in literacy practices lead, over time, to higher levels of literacy proficiency. In the LSAL data, it takes five to six years for increased levels of engagement in literacy practices to develop into higher levels of literacy proficiency. Over shorter intervals, students, practitioners and programs may be better served by other measures of progress for feedback, accountability and program improvement purposes.

So how do these findings relate to your experiences with students? Please kick off our discussion by telling us your role in adult literacy education and how you gauge your students’ ongoing literacy development. Do you rely on standardized test scores alone or have you developed other ways of measuring their progress?

Literacy Practices

CM: I am a grad student interested in literacy issues but I don't actually work with adults. Could you give me a definition and some examples of literacy practices? Do you mean that a person may start reading the newspaper, etc.?

Steve: Thanks for your questions, CM. Examples of literacy practices include activities like reading the newspaper, checking email, or using an ATM to withdraw money. Increased engagement in these practices might involve a person starting a new practice or increasing the frequency of an existing one. We developed several measures from a battery of interview
questions about a large number of literacy practice items. We were careful to construct scales that had sound measurement properties for studying changes over time.

SM: I would appreciate ideas about how we might refine our instructional practices to enrich long-term literacy gains. Students in both our ABE and GED-level reading classes start each class with 10-15 minutes of silent reading to foster both practice and, we hope, an appreciation for the act of reading. Students choose what to read, with the exception of pornographic material. They often choose a book from our extensive shelves or the daily local paper, which offers excellent national commentators. I also elicit recommendations from my younger readers so that I can keep up with the literature that students enjoy (not so much Twilight as The Hunger Games trilogy). I also develop relationships so that when I make a recommendation, students are more apt to take me up on reading a more classical/traditional HS text, such as Antigone, which we are reading this week.

Steve: Thanks, SM, for your comments and your good question about how instruction can promote increased engagement in reading and other literacy practices. There are a range of instructional models for doing this. Victoria Purcell-Gates, Erik Jacobson & Sophie Degener have conducted research showing that use of authentic literacy practices (including authentic
materials) in the classroom promotes increased engagement in literacy practices outside of the classroom. They have also produced some guides for how to go about doing this in the classroom.

DL: Over the last two years of teaching Adult Basic Education, I have attempted to create an atmosphere of meaningful application of non-fiction, juxtaposed with appreciation of the scope and depth of ideas in fiction. Since the current GED test is weighted at a ratio of 25 percent non-fiction to 75 percent fiction, I stress to my students the need to understand the application of recurring concepts in both, e.g., summarizing major ideas (non-fiction) vs. identifying plot elements (fiction), or making inferences in both realms.

I bring the newspaper to class every day and use it some, but I never make my students read aloud--I read to them, and tell them that, as a former newspaper reporter and radio announcer, I am modeling what they need to hear so they can know the meaning of words as they read with me. I use a lot of timed readings and daily oral language to give immediate meaning to the language in highly animated classroom presentations.

My first goal is to make students recognize the value of reading so they will have the desire to read better.

HW: Regarding “focusing on plot points in fiction”, we tried a strategy at a youth literacy program we were running outside of Vancouver, Canada that worked quite well and may work for adults as well.

We invited students to create a life map that charted the low and high points of their lives (with pictures and drawings to get the point across). We also asked them to develop a critical incident in their lives and create a sketch, a diorama or a stop-motion video using clay figures. There was major excitement and lots of engagement as they strove to tell their stories.

We then used their stories to teach character, plot points, climax and denouement in fiction and helped the kids see that they were the main characters in their life stories, had to deal with antagonists quite often (sometimes the police, frequently step-parents), and that in spite of their young age, there were a number of events that unfolded similar to plot points in a short story or other fiction (things come to a head and then get resolved, not always for the better).

Starting with the personal stories of these adolescents made it easier for them to see how the themes in their lives (alienation, conflict, redemption) are big themes that appear in novels and short stories as well, and how their lives presented the opportunity to be heroes (by speaking up for others, for example).

Steve: Thanks for sharing some of your most interesting techniques for engaging students in literacy practices. Have you had any experience trying to measure individual students' engagement in these practices? Do you think that would be feasible?

HW: In the Canadian youth program, we tried to document the various levels of engagement - paying slightly less attention to literacy practices and more to the other kind of engagement, since we were hoping to connect the kids back to school and prepare them for the challenges of dealing with extended texts (textbooks, novels, etc.).

How engaged a person is in learning in general, or reading in particular, is of course difficult to measure, since these are cognitive and emotional processes that are difficult to observe. We asked kids to talk about what they read (at home and during silent reading) and talk with others about what they liked and didn't like about a book and what stuck with them. For most of the youth we worked with, the number of pages read became a strong indicator of starting to think of yourself as a reader (as opposed to someone who hates to read) and becoming part of what Frank Smith calls "the literacy club".

One of our little successes was to hear parents talk about the fact that they also started to read at home (a newspaper or a magazine) during the time that their kids did their required 20 minutes of free reading. So, in the end, we also extended the time that parents spent reading.

So yes, I think it is possible to document changes in engagement with literacy, but such engagement is complex and difficult and time-consuming to capture. In the end, we used what we saw and what the kids reported as a basis for making instructional decisions, but were not able to measure it systematically.

HSP: As an Adult literacy and education technical researcher, I'm very interested in the literacy practices people engage in on their own time, in-between attending classes, in lieu of classes, for their hobbies and families, and on the job. You've called this "self-study" out of the LSAL data. Can you say more about that and how educators can reach adults who are not in programs as well as those who are in and out?

Steve: Thanks, HSP, for bringing this up. We did find in LSAL that a surprisingly large number of adults are involved in "self-study" efforts to improve their reading, writing or math skills and/or prepare for the GED. Many adults do this who never attend formal adult education programs; others engage in such self-study between periods of participating in programs; some do both at once. We saw flows over time in both directions between self-study and formal programs. By expanding the definition of participation in basic skills development, programs could include periods of self-study and increase their outreach and their students' persistence. Although we expected low-skill-level students to be less likely to engage in such self-study to improve their basic skills, we found just the opposite: low-skilled students were as likely (if not more likely) to engage in such self-study.

DR: Steve, I am interested in literacy practices related to online reading and writing tasks such as online job applications; driver license applications; paying bills online; and searching for, judging the quality of, and interpreting health information. Do we know what kinds of online reading and writing LSAL participants may have participated in? Do we know what percentage of participants' reading time was spent reading online? What else can the LSAL tell us about online literacy practices?

Colleagues, if you are teaching, do you know what kinds of online documents your students are reading or writing? Are they asking you for help in finding, judging the worth of, or interpreting these documents, or in completing online forms? Do you think teaching how to read and write online is now part of a literacy or basic skills teacher's job?

Steve: Great questions, DR. LSAL has a fair amount of information about online literacy practices. LSAL followed individuals from 1998 to 2007. At the beginning of this period, computer and internet users were uncommon in our adult education target population, but those rates grew steadily over time. By the end of the study, the majority of the population we were following had become computer and internet users. There are some publications on the LSAL
website that show these trends and how closely intertwined offline and online literacy practices are. A couple of the publications look at the interrelationship among employment, print and online literacy practices, and as you might expect, there are clear relationships there. LSAL has quite a bit of information about the extent of specific online literacy practices such as reading and writing email and other materials. We don't really have good information about the percentage of time spent on these activities, however, which is one of the things you're asking about. That would be nice to know.

When we analyze literacy growth over time, programs have a very strong impact on the emergence of “new” (to the individual) reading, writing and numeracy practices. Individuals learn how to do new things with text as they go through programs. I think this is very important in understanding the long-term impact that our programs have. In addition to the strong quantitative evidence about this impact, there is qualitative information available as well from examples individuals gave of new things they were doing with reading, writing and math. These examples include, of course, online activities. Unfortunately, we don't have detailed information about the specifics of the curriculum they experienced in programs.

KM: I teach Adult Basic Education in the South Bronx and many of my students are involved in "self-study" efforts. They range from students teaching themselves math out of high school textbooks to students working with college writing books. Many of my students bring these books in and I love it because I can incorporate exercises from them into the curriculum. My favorite "self-study" activity was one where a student would watch television with closed captioning so that he could see how unfamiliar and familiar words were used and spelled. I have another student who told me yesterday that he downloaded Kindle onto his iPhone and had read "War and Peace" over the weekend. Seriously. He pointed out that the text shown on his iPhone looked really manageable and before he knew it, he had read an entire book.

Learning Plans

CT: I work as a teacher of both Adult Literacy and ESOL, some ESOL students also having more general literacy needs, for a large urban adult education service in the United Kingdom (UK). For about 10 years now, we have had the Skills for Life programme for literacy, ESOL, numeracy, and IT, as you may know.

The parts of the literacy programme relevant to this discussion are, first, the assessment part of the national curriculum for adults and, second, the possibility of using learning plans and informal assessments to record literacy practices. I should say that there are plans to replace these curricula with “functional literacy” and “functional numeracy,” which have been introduced as embedded components in some secondary education.

First, I should say that mass paper-based testing has been part of the UK education system at secondary level and later even at primary level. Following from this, when the national curricula for adults were created, existing exam boards were invited to produce exams for the 5 different levels for literacy, numeracy and ESOL. Some boards relied on teacher assessment with external verification and some were externally administered. I am less familiar with the lower levels of literacy than I am with higher levels of literacy and all levels of ESOL. However, I can say that the exams were always meant to be primarily practical and while spelling and grammar might be tested, students would not be asked questions about grammar, only questions that ask them to use grammar correctly. The first three levels are assessed on reading, writing, listening and speaking. A typical reading test involves reading a number of texts chosen to be appropriate for the level and answering multiple choice or fill-in-the-gap questions. These may be comprehension, sequencing, matching words for meaning, choosing the correct spelling or the correct grammar, and so on.

At the two higher levels, which are meant to be something like high school but not college prep (going back to my own school days in the United States), the tests are 40 multiple-choice questions which are also intended to cover many different aspects of practical literacy such as comprehension, grammar, spelling and vocabulary. This exam can also be taken as a stand-alone online exam without joining a class. Writing is not tested at this level of adult literacy and my colleagues and I do not consider these good assessments. For those with some familiarity with the UK system, Level 1 literacy is meant to be equivalent to grades D-F GCSE English and Level 2 is meant to be equivalent to grades A-C, although in fact the curricula and exams are not at all the same.

One problem with adult education is that it is often up to the student to keep their exam results and certificates and take them from one place to another and some adults wind up with very haphazard collections of certificates. At the time I went on sick leave, adult students were supposed to be receiving unique numbers which would facilitate transferable records, but I believe this system is not really in effect yet.

Since the Skills for Life programme and the national curricula for adults were brought in, the emphasis has moved from informal, individual assessment by the teacher to paper or computer based external assessment, but at the same time, teachers have been asked to carry out diagnostic assessments and to compile detailed, sometimes very detailed Individual Learning Plans. We have recently been allowed to indicate class learning plans, again based on diagnostic assessments and interviews, with individual targets added to this.

This is the point at which, potentially, literacy activities could be indicated as targets, and they sometimes are, particularly in literacy classes and particularly for adults with special learning needs. So a target might be to choose something in a local newspaper, read it and bring it to the next class to talk about or write about. It might be visiting the library regularly to choose books to read and report on, or picking out a recipe on the internet and explaining to the teacher or to the class how you would follow that recipe. The teacher and the student would negotiate these. But this is not required and no one, so far as I know, is collecting data.

The problem, as with the unsystematic approach to exam records for students who may move around a lot, is how these records could be built up over time, either for the individual student, or for institutions or researchers. I usually wind up with a large number of completed ILPs at the end of the year, which in theory are both retained for checking by managers and at the same time returned to the students. We do not keep them in electronic form, although many institutions do. Unless I have an opportunity to return them to students later (and I may not see them again), or unless a student asks (in which case I give them the original and keep a photocopy), all this paper is simply shredded at the beginning of the next year. I am not sure how long a community (Further Education) college would retain electronic records or whether they would be transmitted to another provider.

Two bodies which are very much concerned with this and related issues in the UK are NIACE and NRDC. Both have carried out research and published reports, many of which are available online. I know a study was carried out into Workplace Literacy programmes, and I have a feeling that it produced similar results: that participants might not have taken and passed exams, but that they were using their skills more in their work and personal lives. I can look up the reference in the next day or so if someone would like it.

I would be very interested to know how something like an ILP approach could be systematised and collated to provide information and monitoring on a wide level, and perhaps to provide an individual with a useful record of progress, however off and on any education or training might be.

Steve: Thanks, CT, for your very interesting comments about adult education and the Skills for Life program in the UK. I (and perhaps others following this discussion) would be interested in hearing more about the individual learning plans (ILPs) that you describe -- is there a way to share a sample? I like the idea of being able to add teacher-made diagnostic assessments and student work samples (that is, demonstrations of what students are able to do) to the more formal standardized assessments being used. Doing this systematically and in a transferable way is definitely a challenge for many programs. Have others on the list seen or tried things like this?

One of the things we saw in LSAL was that adults with basic skill needs often have educational and occupational goals but what they lack are realistic plans to follow to move from where they are to where they want to be. So we have been experimenting with a learning support system called Learner Web that tries to provide visible learning plans for adults to follow, plans that their teachers, tutors, mentors and others who may work with them can also view (with permission, of course). The idea is that these learning plans move with the learner as she or he participates in programs, utilizes various support services and even works independently. Learning plans are organized into steps, and assessment results and work samples can be stored and accessed on the system for the learner (and those who help the learner). This sounds similar in conception to the paper-based system you described in the UK. Do others have experience with systems like this, whether paper-based or online?

CT: More from the UK, in response to your questions, Steve. Below is the content (1 page) for an Individual Learning Plan for approximately 3 months of learning. This would be repeated for every 3-month period. Many of us combine the review of one period with the setting of targets for the next. As you can see, there are whole-class targets, which are often a mixture of exam preparation and more general work as negotiated. Both the group and individual targets could include literacy practices so long as they relate to the National Curriculum, assessed needs or requests by students. This could include almost anything that involves reading or writing at any level. At the moment, this is all paper-based, and the volume of students means that this information is not collected systematically for individuals or for groups. The teachers do get pdf files and Word files, so in theory these could be stored electronically. On the whole we have to do this work at home on our personal computers, which of course is not ideal in terms of security and confidentiality. These are supposed to be written as 'SMART' targets, which is a separate issue I won't rant about at this time.

We are told to keep some indication of initial and diagnostic assessments, but how is not specified and is not part of this form. There is an electronic version of this which generates a report of level and needs, but we have never had the facilities for using this, so ours are paper-based. I keep all this with the ILPs so that students have access to their own assessments. (By the end of the year most students have 2 or more over-filled poly pockets in the class folder.) I also carry out diagnostic assessments not included in the National Curriculum, especially for the NNES who make up most of my classes, which I take from EFL textbooks and similar sources to give me a better picture of their existing knowledge, but these don't really come under the literacy practices heading. I use small group discussions to find out about students' existing literacy practices (and some are highly literate in L1) along with other skill areas and what they want to be able to do; this goes on an IWB or flipchart paper and then the whole class can discuss these and perhaps find common areas for work. We have no separate interview or tutorial time, but while paper-based activities or discussions are underway, I interview students about their diagnostics and individual needs. The majority want to transfer L1 skills to English in terms of reading, but some find writing much more daunting, often not so much because of grammar and spelling, but because writing is a prestige activity in the societies from which they come and so they find it extremely stressful.

  • Individual Learning Plan & student record
  • Course/Group Targets for Term 1 | Term 1 Review (evidence of progress)
  • Do you have any additional targets connected to this course? If so, please list below. | Term 1 Review (evidence of progress)
  • Is there anything which you think will make it difficult for you to achieve the group targets? Please list below. | Term 1 Review (evidence of progress)
  • Was additional support received, if required? YES / NO | End of Term 1 review
  • How confident do you feel about the subject now, on a scale of 1 - 10?
  • How do you feel about your progress?

Learner signature / Teacher signature / Date


There is a summative review for the end of the academic year.

The curricula and diagnostic materials I have referred to can be found at this site. In addition, 'Stick with It' is a set of materials intended to help adults in less formal learning contexts learn how to set targets for themselves and break them down into manageable steps. This was developed a few years ago; I am not sure how widely it is used, but I have adapted some of the study skills questions and some “what if?” questions for my students. For example: What should I do if...I become pregnant, I am ill or someone at home is ill, I am going on an extended visit for family reasons, my shift changes, I can't get to class one day. Some information, such as who to notify and the telephone number, is given and we discuss other solutions, such as being sent work by mail or electronically or via a friend/family member, rejoining the same or another class at a later time, possibly taking an exam early, and so forth.

'Half of those learners who are going to withdraw from any literacy, numeracy or ESOL course will have done so by the time the course is 30% completed.' Skills for Life, the ILR dataset. The materials are a mixture of information and guidance for teachers and programme managers, and some for students or potential students. An example they give, at least in my original materials, is of a trade union official who will need to give oral presentations as part of his new union position, but who doesn't know how to prepare one. (An interesting example of a literacy practice?) He might decide that he needs to extract important information from the large amount of information he receives, to record this information in some way, and then to use it to make notes to speak from or (now) to create a PowerPoint presentation. He would probably want an opportunity to practice delivering a presentation in class. Many of our students want to gain the skills and paper qualifications that will get them into work-related courses leading to other qualifications, sometimes from a fairly low level. The local area has lost the majority of its traditional light and heavy industry and every big factory closure brings a surge of students who know they need additional skills or proof that they have those skills to get another job.

DL: Upon completion of the TABE assessment, I print each student's "Individual Profile" report, give a copy to the student, and go over it with them. Each performance objective is articulated in that report and the student can see specifically what needs to be worked on to reach their own goals.

I am very hesitant to create any teacher-driven "Learning Plan" because it is the individual student's responsibility to take the initiative, not up to me to modify behavior. I also believe in natural consequences and the reality that students who have made the choice to get a GED must also be responsible for necessary follow-through.

By stating my expectations from the first day of class, and building collaborative learning teams, students ensure their own success in a supportive environment. Sure, they have other requirements in their lives (home, family, work), but by focusing only on what I am trained to help them do, I ultimately help them prioritize what they do daily to reach their own goals.

Steve: Those are great techniques, DL. Thanks for sharing them with us. The learning plans I was referring to are not teacher-driven. I agree with you that, in most cases, teacher-driven plans are not appropriate or effective with adult learners. The plans instead need to be learner-driven, and sometimes teacher-supported, sometimes tutor-supported, sometimes counselor-supported (depending on their design & implementation). We saw many adults in need of such plans but wanting to be self-directed. By allowing teachers, tutors, sometimes case workers and others to interact with adults as they follow their plans (with permission), the plans can span multiple environments in which adults learn and break down the fragmentation and discontinuity learners experience among the various programs and services over time. This responds to another issue we often saw in LSAL: fragmented patterns of participation in adult education programs, workplace training, social services, and so forth. Too often it was up to the individual adult to piece things together and coordinate what was going on, that is, figure out how to navigate the system and piece services together into coherent plans to follow.

DL: Using a Constructivist model, I urge my students to take control of their own circumstance from the ground up and from day 1. Every day of class they miss represents some number of questions they will not be able to answer in the GED, so no skin off my nose if they have to keep coming to class for years to come.

I cannot discuss their educational needs or progress without a release, and I leave the job of getting a release to them in cases where they have probation officers or others who must know their progress. In past experience as a teacher in public education, it seemed to me that the whole structure was much more concerned with promoting the employment interests of all the administrators, teachers, specialists, aides and general do-gooders than with empowering students to face the reality that, as they fall behind, others will realize the opportunities they miss.

Daily, I present myself as a role model of someone who is confident, has been successful in a number of endeavors, is happy with my life, and is seriously interested in helping them help themselves. I have students who have been with me for years, because they have recurring "life events," but they seem to keep coming back. I had a load of nearly 70 students last semester. Some even come back after they have graduated and make a point of announcing to my classes how appreciative they are for what we did together to prepare for their GED.

So the bottom line for me, Steve, is I don't really want to be a part of the network of folks who all build a "Learning Plan" to do something. They can do that, of course, but I really just want my students to do what THEY must do. As their teacher I attempt to establish a meaningful working relationship as a resource person, but it is ultimately up to them individually to do what they must do, and I freely admit that not all people are going to be able to get a GED. That's life.

Steve: Thanks, DL, for your forthright comments about your preference to act as an independent constructivist teacher. I really appreciate your perspective. As I hope was clear in my comments, many of the adults we talked with in our study wished -- often in retrospect -- that they had been given a plan to follow as adults as they worked with teachers, case workers or others, who had helped them in different ways over the years. I was frankly very surprised to hear so many adults say they wanted what some professionals call "case management" (a term I naturally shy away from as a teacher). Without it, they experienced problems related to fragmentation rather than coordination of services as they moved from one program or service to another. Although many could and did figure out how to put things together "on their own", many others did not and wanted help with that. If adults were able to stay in a single classroom or program long enough to reach their goals, there would be much less need for this I imagine. But many have longer term learning goals and may find a plan to follow very useful. There's an impressive amount of diversity out there, and different approaches no doubt are more effective with some adults than with others.

Assessment in the Classroom

DG: Thanks to everyone for a lively discussion so far. Steve has described that in the LSAL study, literacy practices play an important role both as outcomes from literacy programs and as possible contributors to future literacy proficiency gains. I am wondering how this translates into actual practice in the adult literacy classroom. I would love to hear from the teachers and administrators who are on this list. Do you measure literacy practices (or other things) in addition to your standardized test scores? If yes, what do you measure? If not, what would you like to measure (assuming the ideal world and you were permitted to)?

RH: This is one area my program team and I are working on right now. Over the past couple of years, we have used a survey to see how many of our students need to/actually do different things, but we haven't been able to develop a way that we can consistently measure these types of practices with each of our students on a regular basis...and then get the info into a format that we can then use for reporting purposes. I think that showing changes over time on different activities like these can really show the effectiveness and necessity of adult literacy programs. It also helps us know which functional literacy activities we should include in our classes.

Some of the things we try to capture now, but would like to figure out how to measure more consistently with our students, are whether they:

  • have a library card; know how to use a library to find materials for themselves or their children; go to a library to check out books for themselves or for their children
  • go to a bookstore to look at books or purchase books for the home
  • communicate with their children's teacher by written letter, email, or in person
  • read to their children; help their children with homework; schedule consistent homework times
  • are registered to vote, get registered to vote, and actually vote
  • read a book for the first time; read books as a part of their daily routine
  • read a newspaper or check online news sites to keep up with current events
  • use email to stay in touch with others or for business purposes
  • learn to type
  • use texting as a way to communicate with others
  • write letters by email or by hand (thank-you letters, letters to community reps, etc.)
  • use a calendar for appointments
  • make and use to-do lists
  • get a driver's license
  • volunteer in the community

...and there are some others.

I am very interested in knowing if others measure things like I have listed above-- how you capture the info and how you use it etc.

LH: English for New Bostonians is a funder of adult ESOL programs. In the area of ESOL literacy, we are very interested in measuring not only learner gains in reading and writing skills, but increased independence in students' lives outside the class. Examples would be taking the bus by oneself, shopping independently, going to the laundromat, etc. We may do a pre/post learner survey, but more likely we will do post-survey only, asking learners whether they do a particular thing more than they did before they started the class. The reason we're considering post-survey only is that it is challenging to keep up with the pre-surveys in an open enrollment, part time program, especially when literacy-level learners need individual help completing the survey. RH, what this would look like in your case would be asking at the end, did you get a library card during your time in the program?

The question we are grappling with is what to put on the list? We would like to track just a few things which are indicative of greater independence in general--these are not learners who could wade through a multi-page form with lots of different indicators, even with pictures and help. If there are only a few things on the list, though, you have the potential problem of teaching to the test (teacher/program director says, well, whatever else they learn in class, they're darn well going to learn how to do laundry, because that's what our funder cares about--obviously that's not what we're looking for!)

RH: I agree with the logistical issues LH mentions (the time it takes to do the surveys because they need to be done individually, different starting/ending points for students). I like the idea of doing a post-survey only, and I am wondering if anyone has any thoughts about doing a pre/post versus just a post. One thing we have done over the past few semesters is to administer a short survey to our students around the midpoint of the semester in order to capture a snapshot of who our students are in terms of the number looking for jobs, using the library, living with a chronic illness, etc. These were anonymous surveys, though, because of the questions we were asking. Now we want to capture the information for each student as part of a student portfolio system we are implementing. Any thoughts or suggestions are very welcome!

DR: Many years ago Dr. Ron Solórzano, then at Educational Testing Service, worked with the California State Library System to develop a simple reading and writing habits assessment instrument that could be used by learners and their library literacy tutors. It was known as the California Adult Learner Progress Evaluation Process (CALPEP). I like the simplicity and ease of use of the CALPEP. I don't think it would be hard to collect and aggregate this kind of data. If the data were collected monthly, for example, over a period of several months one could see changes for individual students, and perhaps for a class or program. The instrument includes two simple three-point scales, one for frequency: Not at All, Sometimes and Regularly; and one for reading or writing difficulty: Easy to Read/Write, A Little Hard, and Very Hard.

RH, your list of “indicators” of functional literacy is great. I especially like that it includes increasingly essential digital and online reading and writing tasks.
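To make the two CALPEP-style scales concrete, here is a minimal sketch in Python of how coded frequency and difficulty ratings might be recorded and averaged month by month. The numeric codings, item names, and sample data are illustrative assumptions, not taken from the actual CALPEP instrument.

```python
# Hypothetical sketch of tracking CALPEP-style ratings over time.
# Codings (0-2), item names, and data are illustrative, not the actual instrument.

FREQUENCY = {"Not at All": 0, "Sometimes": 1, "Regularly": 2}
DIFFICULTY = {"Very Hard": 0, "A Little Hard": 1, "Easy to Read/Write": 2}

# One record per learner, per month, per practice item.
responses = [
    {"learner": "A", "month": "2011-01", "item": "reads a newspaper",
     "frequency": "Sometimes", "difficulty": "A Little Hard"},
    {"learner": "A", "month": "2011-04", "item": "reads a newspaper",
     "frequency": "Regularly", "difficulty": "Easy to Read/Write"},
]

def monthly_scores(records):
    """Average the coded frequency and difficulty ratings per learner per month."""
    totals = {}
    for r in records:
        key = (r["learner"], r["month"])
        freq, diff, n = totals.get(key, (0, 0, 0))
        totals[key] = (freq + FREQUENCY[r["frequency"]],
                       diff + DIFFICULTY[r["difficulty"]], n + 1)
    return {key: (freq / n, diff / n) for key, (freq, diff, n) in totals.items()}

for (learner, month), (freq, diff) in sorted(monthly_scores(responses).items()):
    print(f"{learner} {month}: frequency={freq:.1f}, difficulty={diff:.1f}")
```

Collected monthly, scores like these could be compared across months for an individual learner or averaged for a class, which is the kind of change-over-time picture DR describes.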

RH: DR, thanks so much for sharing the CALPEP survey! I like that it has the two different scales (frequency and difficulty). It seems simple to use, and I think this is a really good model to which I could add other items, so I am going to share it with my program team for their input. I agree with you that it wouldn't be all that difficult to collect this sort of data for our students (maybe pre/post), but I am leery that we would end up with a stack of data that we do not have the staff capacity or skills to put into a usable format for our program. Do you (or others) have any suggestions on how we could create a database that we could use to inform our program -- one that is sustainable in the sense that we, like many ABE programs, have limited staff... and whose staff may have very little knowledge of data collection and appropriate analysis methods?
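One lightweight possibility for the kind of sustainable, low-staff data collection RH asks about is a single-file SQLite database updated from a short script. The sketch below is only an illustration under that assumption; the table, column names, and ratings are invented, and nothing here comes from the discussion itself.

```python
# Hypothetical minimal storage for pre/post practice-survey ratings using
# Python's built-in sqlite3 module; names and values are illustrative only.
import sqlite3

conn = sqlite3.connect("practices.db")  # a single file, no server needed
conn.execute("""CREATE TABLE IF NOT EXISTS survey (
    student_id TEXT, item TEXT, phase TEXT CHECK (phase IN ('pre', 'post')),
    rating INTEGER)""")  # rating: 0 = Not at All, 1 = Sometimes, 2 = Regularly

conn.executemany("INSERT INTO survey VALUES (?, ?, ?, ?)", [
    ("S1", "uses the library", "pre", 0),
    ("S1", "uses the library", "post", 2),
    ("S2", "uses the library", "pre", 1),
    ("S2", "uses the library", "post", 1),
])
conn.commit()

# Average pre vs. post rating for each item across all students.
for item, pre_avg, post_avg in conn.execute("""
        SELECT item,
               AVG(CASE WHEN phase = 'pre'  THEN rating END),
               AVG(CASE WHEN phase = 'post' THEN rating END)
        FROM survey GROUP BY item"""):
    print(f"{item}: pre={pre_avg:.2f}, post={post_avg:.2f}")
```

Because everything lives in one file and the summary is a single query, staff without database experience could maintain it with a short written procedure.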

TS: In a project I worked on with Wider Opportunities for Women, we measured pre- and post-changes in various behavioral activities that the women in basic skills programs performed with their children. For instance, one activity was "How often do you talk to your child about school?" Students rated how often they did this: Never, 1 or 2 a year, 1 or 2 a month, 1 per week, 2-3 per week, every day. Associated with each of these activities was a scale score: Never=0, 1 or 2 a year=1, 1 or 2 a month=3, etc. We then averaged the scale scores across activities, which also included how often they help with homework, read to or with their child, take their child to the library, go to school activities, help with school activities, and talk with their child's teacher. Analyses showed statistically reliable gains in each activity from pre- to post-assessment.

In the research with Wider Opportunities for Women, Sandra Van Fossen and I found that mothers enrolled in basic skills programs reported that they spoke more with their children about school, they read to them more, they took them to the library more, and so forth. We validated these responses by visiting some of the women's homes and interviewing their children. In one visit to a single mother's home, the mother's second grader said, "I do my homework just like Mommy" and thrust his homework into the researcher's hand. This and other validating data confirmed reported activities like helping more with homework, reading more with children, etc. Importantly, this type of emotional, noncognitive development as expressed by the child was obtained for free as a spin-off of an adult basic education program.
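As a concrete illustration of the averaging TS describes, the sketch below codes each activity rating to a number, averages across the seven activities, and compares pre- and post-assessment scale scores. A simple 0-5 coding is used purely for illustration; it does not match the exact values TS quotes (for example, TS gives "1 or 2 a month"=3), and the sample responses are invented, not WOW data.

```python
# Illustrative sketch of the pre/post activity-scale averaging described above.
# The 0-5 coding and the example ratings are assumptions, not the WOW values.

SCORES = {"Never": 0, "1 or 2 a year": 1, "1 or 2 a month": 2,
          "1 per week": 3, "2-3 per week": 4, "Every day": 5}

ACTIVITIES = ["talk to child about school", "help with homework",
              "read to or with child", "take child to library",
              "go to school activities", "help with school activities",
              "talk with child's teacher"]

def scale_score(ratings):
    """Average the coded ratings across all activities for one respondent."""
    return sum(SCORES[ratings[a]] for a in ACTIVITIES) / len(ACTIVITIES)

# Hypothetical respondent: every activity rated "1 or 2 a month" before the
# program and "2-3 per week" afterward.
pre = dict.fromkeys(ACTIVITIES, "1 or 2 a month")
post = dict.fromkeys(ACTIVITIES, "2-3 per week")

print(f"pre = {scale_score(pre):.2f}, post = {scale_score(post):.2f}, "
      f"gain = {scale_score(post) - scale_score(pre):.2f}")
```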

Steve: Great discussion and examples of literacy practices measures! TS, DR, & LH provided some very nice examples of how programs use literacy practices to assess student progress. RS and AP asked for more specifics about how this kind of information can be developed into effective measurement scales. The traditional approach is to ask respondents about many specific tasks in a parallel format -- the CALPEP, English for New Bostonians and WOW approaches that were described all seem to follow this approach. The items can ask about whether individuals do particular tasks, or how often they do them, or how easily they do them, and so forth. Responses to the various items are then scaled using statistical models that look at how responses to particular items are inter-related and identify one or more underlying dimensions, factors or subscales (depending on the statistical model). Individuals are then assigned scale scores or factor scores, etc., on each underlying measure. We used this approach in LSAL with items that measure how often individuals do particular tasks like reading a book, using an ATM, or reading and writing email (never/occasionally/once a week/several times a week/every day). From these kinds of items in LSAL we identified two underlying dimensions of practices that we called literacy and numeracy. If we were interested in just taking a snapshot of practices at any one point in time, most of our items fit onto one of these scales (only a few seemed to be outliers). Usually the more items one has underlying a scale constructed this way, the more reliable the measurement is.

When we wanted to use repeated measurement across multiple time points (needed for measuring growth or progress), it became trickier. Statistical techniques are needed to make sure the measures are stable over time, that is, measure the same construct each time. This is a much more demanding enterprise and we had to discard quite a few of our individual practices items in order to wind up with stable literacy and numeracy practices measures. It appeared that the longitudinally stable measures are composed of items representing fairly stable tasks (reading books, for example), with historically changing tasks falling off the scale. Why do we need to be careful about this? An example may help illustrate the challenge: for practices items affected by technological change, the use of new tools and techniques may not reflect the same construct at each point in time, and thus these practices might not be suitable for scales designed to measure changing levels of individuals' skills. As I noted earlier, if we're going to start using literacy practices measures, we need to develop ones that have measurement properties well suited to the ways we want to use them. I think this is eminently doable starting with the kinds of items LSAL used or starting with the ones that others have described here. I'd be glad to help anyone who wants to undertake this.
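As a rough illustration of the general scaling approach Steve outlines, the sketch below codes frequency responses for a handful of hypothetical practice items, forms a simple mean scale score per respondent, and computes Cronbach's alpha as one basic internal-consistency check. LSAL's actual scaling relied on more sophisticated latent-variable models and longitudinal stability analyses; the item set, coding, and data here are invented for illustration only.

```python
# Rough sketch: build a simple literacy-practices scale score from frequency
# items and check internal consistency with Cronbach's alpha. Items, coding,
# and responses are hypothetical; LSAL used more sophisticated scaling models.
from statistics import pvariance

CODES = {"never": 0, "occasionally": 1, "once a week": 2,
         "several times a week": 3, "every day": 4}
ITEMS = ["read a book", "read a newspaper", "write email", "use an ATM"]

# Each row is one respondent's coded answers to the four items.
data = [
    [CODES["never"], CODES["occasionally"], CODES["never"], CODES["once a week"]],
    [CODES["once a week"], CODES["every day"], CODES["several times a week"], CODES["every day"]],
    [CODES["occasionally"], CODES["once a week"], CODES["occasionally"], CODES["several times a week"]],
    [CODES["every day"], CODES["every day"], CODES["once a week"], CODES["every day"]],
]

scale_scores = [sum(row) / len(row) for row in data]  # one score per respondent

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(rows[0])
    item_vars = [pvariance([row[i] for row in rows]) for i in range(k)]
    total_var = pvariance([sum(row) for row in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

print("scale scores:", [round(s, 2) for s in scale_scores])
print("Cronbach's alpha:", round(cronbach_alpha(data), 2))
```

Using the same scale at several time points, and checking that the items behave consistently at each point, is the additional step Steve notes is needed before such scores can be trusted as growth measures.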

LSAL also asked individuals in each interview to report directly about changes in their reading, writing and math practices since the previous interview. For each skill, if individuals reported changes, they were asked to report on whether they read (for example) less often, about as often, or more often than they did at the previous interview. Another item then asked if they read with less skill, the same skill or more skill than they did previously. Another follow-up item asked if they read different kinds of materials than at the previous interview. If so, they were then asked to give (in open-ended format) examples. The same questions were asked for writing and math.

AP asked about validating such measures. We validated them in LSAL against the standardized test scores we obtained. Although they are closely related, the two types of measures have quite different dynamics of change across the lifespan, which was of course the starting point for our discussion this week.

JB asked for more details about the self-study measured in LSAL. When individuals indicated they had self-studied, we followed up and asked them about which basic skills they had worked on and about the materials they had used. We also probed, for those individuals who were also in adult literacy classes during the same time period, to differentiate self-study from teacher-assigned homework. More than half of those who self-studied used workbooks of various types, focused on such skills as spelling, writing, math, GED preparation, and so forth. Details are available in the publications on the LSAL website (www.lsal.pdx.edu). About one-third used computers for self-study. DR asked about the extent of internet use in self-study. Unfortunately, we did not distinguish internet use from computer use in our self-study questions. This was a byproduct of our starting to ask about self-study in 1998, when internet use was still quite uncommon in our population. Even though internet use became much more common over time, we wanted to keep our key questions the same so that we could make longitudinal comparisons. We do have quite a bit of information about internet use, just not specifically for self-study to improve basic skills or prepare for the GED.

HSP: I wanted to throw in the NAAL data - this was nationally representative data of adult literacy proficiency (not classroom performance) that showed that as literacy proficiency rises, so do voting, volunteering in the community, employment, reading to their children/helping with schoolwork, and being involved in their children's school.

Additionally, secondary analysis of this national NAAL data done for NIFL showed that Internet use (by what we called "technology users") rose with literacy proficiency. That's in the report, Investigating the Language and Literacy Skills Required for Independent Online Learning, which I wrote with Stephen Reder.

A Technology User was defined as someone who lived in a house with an Internet-connected computer, or used the Internet at least once per week to find information on public events or news, or sent or received an e-mail at least once per week. This is a pretty low bar for tech use, but it seemed to be the distinguishing factor for adults in 2003 when the data was collected.

I add all this so that those of you collecting all sorts of interesting data can think about which pieces are really critical and can be backed up by larger data sets, and which questions are interesting locally but may not need to be collected from each student.

Steve: Thanks for bringing in the national data, HSP. These relationships between literacy proficiency and other social and economic indicators are enduring - they were present in both the 2003 NAAL data you mentioned as well as in its earlier counterpart, the 1992 NALS data. I mention the NALS data because Jan Sheehan-Holt and Cecil Smith did an interesting analysis of the apparent impact of adult basic skills programs on both proficiency and practices measures in the NALS data set. They compared the proficiencies and practices of individuals who had participated in a basic skills program within the previous year and individuals who had not, controlling statistically for other differences between the groups (years of schooling, age, language background, etc.). They found no difference in proficiency between comparable groups that had recently gone through programs and ones that had not. But they did find significant differences in reading practices between the two groups. This brings us back to where we started the discussion this week, the idea that proficiency gains are not the best way to assess short-term program impacts for accountability or program improvement purposes.

I hope RH and others who may start experimenting with the use of tools like CALPEP will keep the group informed of how their efforts go and the problems they may run into, practically, administratively and perhaps politically, in trying to introduce literacy practices measures into programs. There is an impressive amount of expertise in this list and I'm sure many of us will be eager to help these efforts blossom.

TS: HSP has called attention to the NAAL data on literacy proficiency and literacy practices. Another source is a report that William Armstrong and I prepared entitled Adult Literacy in the United States: A Compendium of Quantitative Data with Interpretive Comments. This report presents a developmental theory of literacy along with a history of, and sample items from, standardized tests in the U.S., including military tests from World War I to the 1990s and all mass literacy tests for adults from the 1930s to the National Adult Literacy Survey (NALS) of 1993, which is similar to the NAAL of 2003. It presents data on relationships of parents' education to the literacy of their children; relationships of adult literacy to occupations; and samples of pre- and post-test gains for over 30 programs.

There are data on longitudinal growth curves for some programs in which adults stayed for up to three years. There are data from the 1930s to the 1993 NALS, forerunner to the NAAL, showing relationships among literacy proficiency, literacy practices, and years of education, what I call the “triple helix” of literacy development.

Generally speaking, we have known about the relationships of literacy proficiency to occupation, income, literacy practices, years of education, parents' education level, and persistence in programs for some 60 to 100 years. Newer data from England, not in this report, show that parents' measured literacy skills, holding education level and economic data constant, directly correlate with children's literacy achievement. I think this is useful for arguing that investments in adult literacy education can have an intergenerational payoff. This is a program outcome that we do not presently use in the National Reporting System.

JRS: For our research study, the Relative Effectiveness of Adult Literacy Intervention Programs (Sabatini, Shore, Holtzman & Scarborough, 2011), we worked to measure changes in literacy practices and perceptions over time in individual reading tutorial settings. We had students, who were randomly assigned to various reading interventions, respond to periodic surveys regarding how often and how well they thought they read various types of texts, from online reading to magazines and newspapers. We also tracked their views through daily teacher journals in which teachers could note changes in practices or perceptions they saw or heard from their students when they met two to three times every week for 4-6 months.

We found that while there were some differences that may be attributed to the reading intervention students were assigned, generally, with one-on-one tutorials, students began to read more and perceive themselves as better readers (even if the immediate standardized test results did not demonstrate this). We hope to have results published soon!

JI: Forgive me if this question has been posed already and I've missed it, but how is it possible to isolate one variable (such as a reading strategy/instructional approach) from the myriad other impacts of life on adult learners? How are these perceptions factored in, if at all?

JRS: I appreciate your question! Perhaps this answer is more relevant to research. However, for our work, the study was designed so that tutorials supplemented regular instruction in adult education. All those receiving tutorials were ALSO in adult education courses. With this arrangement, we were able to compare those who were receiving one-on-one tutorial with those who were simply attending regular adult education courses of the same literacy levels and backgrounds.

Of course, this is not perfect and there is still the 'myriad of life circumstances' that may affect learners in a variety of ways, so we can just say that the outcomes suggest that certain factors may perhaps be attributed to specific reading approaches, or simply to one-on-one attention in tutorials, or to additional focused reading practice. There is certainly more work to be done in this area, with even larger numbers of learners.

I am not sure if this answers your question, but hope it's helpful in understanding the sort of results we're working to make meaningful now.

Steve: Thanks, JRS, for sharing this. The pattern of differences you describe in your note between practices and test scores sounds somewhat analogous to the differences in LSAL between program impact on proficiency and literacy practices measures.

JAF: Hi JRS, how many students were beginning readers, and what interventions were used with them?

JRS: These were all individuals reading at the 2nd to 6th grade equivalent (determined using various measures). We used three different interventions: one focused more on fluency, one on decoding, and one that was a mix of the two.

Measurement Issues

Steve: Have others had experience trying to systematically measure changes in their students' literacy practices over time? How have you done this?

SM: Measuring systematic changes at our adult education center is difficult given the ebb and flow of students' attendance, ability levels, interests, and job and life pressures, so unfortunately, assessment still rests with the TABE and our anecdotal/observational skills.

DL: In our program we use the TABE exclusively, and the Steck-Vaughn "TABE fundamentals" resource books when students are close to the level they need to be in order to take the official practice tests. They are required to take the TABE upon enrollment and at periodic points upon completion of specific numbers of hours of class time (40-60).

I explain the concept of "scaffolding" to my students, beginning with simple concepts and expanding upon those in easy steps. I take the material slowly; we share a concept at the beginning of a class and then build on that skill over the rest of the period, using text or other reading materials. I urge them to "make lots of mistakes, so you don't have to make them on the test" and model strategies to arrive at correct answers on sample GED questions. I encourage them to look at the answers at any time and we discuss how the designers of the GED test determine what questions to put in the test and why those certain questions are asked. The ultimate assessment for my students is obtaining their GED and when any of them obtains that important goal we all celebrate it!

Steve: We developed several measures from a battery of interview questions about a large number of literacy practice items. We were careful to construct scales that had sound measurement properties for studying changes over time. More information and technical details are available in some of the suggested readings as well as on our project website.

The TALS proficiency instruments we used were developed by the Educational Testing Service as functional literacy measures -- intended to assess adults' abilities to perform simulated real world literacy tasks, rather than as tests of specific subskills like spelling. They involve reading and using information from functional displays like maps, charts, written instructions, and answering questions about those tasks through constructed responses (rather than multiple choice formats). There is relatively little writing involved.

SM: It would be terrific if the LSAL results would be, or could be, communicated to the assessment gurus so that not so very much would ride on our short-term assessments in adult education (CASAS, TABE, GAIN), and I would appreciate ideas about how we might refine our instructional practices to enrich these long-term literacy gains.

Steve: Absolutely, this is difficult to do in real programmatic time, just as standardized testing is. LSAL had the luxury of collecting data and assessments directly from individuals without having to rely on or trouble the programs they may have attended. The measures we developed were appropriate for our research purposes, but other tools need to be developed for systematic use in programs. Although what we did could be a starting point for developing such measures, there is a long way to go. The essential point here is that the intrusive and time-intensive standardized tests used to measure student progress do not appear to be helping programs put their best foot forward for accountability and program improvement purposes. Other measures are needed to fully reflect the impact that programs are having on the adult learning process.

JAF: We have been struggling to find acceptable means of measuring the progress of beginning readers. A colleague here in Philadelphia has assembled an assessment battery comprising parts of the DIBELS and the Dolch word lists. We administer it individually, but it doesn't take long. We are piloting it with our instructional model for beginning adult readers, and it looks quite promising. You can see our model demonstrated in four parts on YouTube.

KM: I just took a professional development workshop at the Literacy Assistance Center in Manhattan that was all about individual assessment for adults that would provide more specific information about where to start teaching rather than where a student falls according to grade placement. I believe it would be a good way of assessing growth as well.

DB: For those of you interested in quick assessments of literacy skills, this was a recent topic of discussion on the Workforce list. You might be interested in looking at the contributions preserved in our list's archives; the posts on this topic are numbers 180-207. Hope this is helpful!

RH: I am a graduate student interested in adult literacy issues, and I am also the Lead Teacher at an adult literacy agency. Can you tell us a little more about the participants in terms of their initial scores for reading and so on? Were the participants higher-level readers, or were there participants at all different reading levels? How were the reading levels measured and defined? Did you do any group comparisons in terms of literacy growth or increase in literacy practices based on reading levels? If so, what were the results?

Steve: Great questions, RH, thanks. Participants' initial literacy levels were broadly distributed. Many were fairly proficient but others were not. Their proficiency levels were defined by their scores (0-500) on the TALS Document Literacy scale. There are specific numbers and graphs available in some of the publications on the website if you want details. Let me give you an example here: about 1 in 4 had proficiency levels that were at the "Below Basic" or "Basic" levels defined in the National Assessment of Adult Literacy (2003).

We used a technique for analyzing growth called, surprisingly enough, growth curve modeling. Initial proficiency level per se did not affect the rate of growth we observed. Two things that did affect the overall growth rate for proficiency were age (the proficiency of younger adults tends to grow more quickly than that of older adults, regardless of their starting levels) and immigration status (adults not born in the U.S. tend to acquire proficiency more quickly than otherwise comparable adults born in the U.S.). I believe the reason immigrants' proficiency growth rates are higher has to do with their ongoing acquisition of cultural and linguistic knowledge that the functional assessment tasks draw on in addition to literacy skills.
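
For readers who want a concrete picture of what growth curve modeling can look like, here is a minimal sketch in Python, assuming a long-format panel with one row per person per assessment wave. The file name and column names (person_id, years_since_wave1, tals_score, age_at_wave1, foreign_born) are hypothetical illustrations, not LSAL's actual data or code.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format panel: one row per person per assessment wave.
    df = pd.read_csv("literacy_panel.csv")

    # Random intercept and random slope for time within each person. Interacting
    # age and foreign-born status with time lets those variables shift the rate
    # of proficiency growth, not just the starting level.
    model = smf.mixedlm(
        "tals_score ~ years_since_wave1 * (age_at_wave1 + foreign_born)",
        data=df,
        groups=df["person_id"],
        re_formula="~years_since_wave1",
    )
    result = model.fit()
    print(result.summary())

In a specification like this, the main effect of time is the baseline rate of proficiency growth, while the interaction terms correspond to differences in growth rate by age and immigration status.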

Literacy practices measures, although strongly correlated with proficiency levels, show different dynamics of change. Not only are they sensitive to participation in programs, they are sensitive to life history events, things like the birth of a child, a change in jobs, in marital status, and other events that likely change individuals' engagements in various kinds of literacy practices. These events can be seen as "bumps" on the growth curves, superimposed over ongoing development processes.

AP: Do the data tell us if those changes in practices are better considered as bumps or as growth mechanisms/engines of change?

Earlier, Steve, you wrote that "Other analyses in LSAL confirm what I call 'practice engagement theory': higher levels of engagement in literacy practices lead, over time, to higher levels of literacy proficiency," which seems to indicate that practice is a driver or accelerator of proficiency, not a "superimposed bump" in the way.

Steve: Thanks for your questions, AP. Life history events and participation in programs show up as "bumps" superimposed (shortly after occurring) on growth curves for literacy practices measures. The literacy practices themselves are related in an extended way (more as a driver than as a bump, as you suggest) to long-term growth in proficiency. Diagrams and statistics illustrating these relationships are available in the publications.
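
To make the bump-versus-driver distinction concrete in model terms, here is a brief sketch continuing the hypothetical data frame above; the columns practices_scale, recent_job_change, and practices_scale_lagged are illustrative assumptions, not LSAL's actual specification. A life event enters the practices model as a time-varying indicator, a short-term shift in level, while earlier practice engagement enters the proficiency model as a predictor of the growth rate.

    # "Bump": a 0/1 indicator for a recent life event (e.g., a job change) shifts
    # the practices score up or down around the time the event occurs.
    bump_model = smf.mixedlm(
        "practices_scale ~ years_since_wave1 + recent_job_change",
        data=df, groups=df["person_id"], re_formula="~years_since_wave1",
    )

    # "Driver": practice engagement measured at an earlier wave predicts the
    # subsequent rate of proficiency growth (the interaction with time).
    driver_model = smf.mixedlm(
        "tals_score ~ years_since_wave1 * practices_scale_lagged",
        data=df, groups=df["person_id"], re_formula="~years_since_wave1",
    )
    print(bump_model.fit().summary())
    print(driver_model.fit().summary())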

AP: Thanks, Steve, for your informative replies. I think these are all very important distinctions and themes that folks are asking about in the discussion regarding measurement and isolating variables, not only for literacy education and measurement per se, but also for the many of us, including myself, who are working in distinct contextual modes, health literacy for example (in my case).

In health literacy, there are really very few formal proficiency programs, though they are slowly growing in number. There are more practice-type efforts to help people engage with the health system and to help the health system better engage with people. But at this point in the United States, I think it would be fair to say that most people's health literacy "proficiency" comes predominantly from practice (or, more accurately, from lack of practice). It might also be fair to say, though some might disagree, which is fine, that most of the formal health literacy screening devices focus more on proficiencies, sometimes in a practice context.

So, to bring it back to the point of discussion: the LSAL may have highlighted something that is very transferable to other literacy contexts and measurement processes, namely distinguishing between practice and proficiency and between different types and sources of acquisition. Those communities would want to know: was the LSAL able to make any connection between these different measures and functional or quality-of-life outcomes in the way of income, health status, etc.? If so, was there any indication of whether practice (quantity or type) or proficiency level was more strongly related to those sorts of outcomes theorized, or theorizable, from literacy acquisition?

Further, when you move to measuring practice, it seems the metric was generally quantity; is that correct? Did you develop any approach to measuring practice other than how much, i.e., what context, what media, what skills, etc.? (This is along the lines of DL's question, I suspect.) What I'd like to learn and apply from your experience to developing an acceptable measure of health literacy is how best to balance these possible attributes into a robust and comprehensive measure. How would you, if starting over, balance the measurement of proficiency and the measurement of practice to best gain a complete sense of where people are at (either for a one-shot study design or for a longitudinal design with multiple time points)?

Steve: Thanks for your comments, AP. You asked whether LSAL could connect proficiency or practices measures to important outcomes like health status or income. We looked quite closely at connections with employment and earnings outcomes. Proficiency measures are strongly connected to these economic outcomes, which we measured with "hard" data: quarterly hours and wages linked (with our participants' generous permission) to SSN-based unemployment insurance (UI) data. Proficiency is positively correlated with earnings (as it is in most cross-sectional studies of adult literacy), and proficiency growth over time is related to earnings growth. So not only does proficiency appear to be important from an earnings perspective, it also appears important that adults increase their proficiency over time to support their earnings growth over time (please note that no claim of causality is being made here). This is where I'd like to suggest practices measures fit into the economic outcomes picture: increased practice engagement leads over time to increased proficiency, which in turn leads to earnings growth.

You also asked about how we measured engagement (and changing engagement) in literacy practices. We used a variety of such measures. Although some were based on the frequency of particular practices, other measures were based on reported degree of skill and others on the emergence of new (to the individual) reading, writing or math practices. I'd be glad to talk more with you offline (or here online if others are interested in this level of detail) about how we constructed these measures.
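
As one illustration of how frequency items, self-rated skill, and the emergence of new practices might be combined into a single scale (a sketch under assumed item names, not the LSAL instrument), the snippet below standardizes the items, averages them into a composite score, and computes Cronbach's alpha as a quick check of internal consistency. It continues the hypothetical data frame from the earlier sketches.

    # Hypothetical survey items: two frequency items, one self-rated skill item,
    # and a count of reading/writing practices the respondent reports as new.
    items = df[["reads_news_freq", "writes_letters_freq",
                "self_rated_reading", "new_practice_count"]]

    # Standardize so items on different response scales are comparable, then
    # average them into one composite practices score per respondent.
    z = (items - items.mean()) / items.std(ddof=0)
    df["practices_scale"] = z.mean(axis=1)

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total).
    k = z.shape[1]
    alpha = (k / (k - 1)) * (1 - z.var(ddof=1).sum() / z.sum(axis=1).var(ddof=1))
    print(f"Cronbach's alpha across {k} standardized items: {alpha:.2f}")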

Resources

Erik Jacobson, Sophie Degener & Victoria Purcell-Gates have conducted research showing that use of authentic literacy practices in the classroom promotes increased engagement in outside literacy practices and have also produced some guides for how to go about doing this in the classroom.

A report on a youth literacy program in Canada that used life maps as a tool for illustrating plot points in fiction provides examples of how to do life maps and storyboards in the classroom.

A demonstration on YouTube shows a blended battery of parts of the DIBELS and the Dolch word lists as a means of measuring the progress of beginning readers.

A professional development workshop conducted at the Literacy Assistance Center in Manhattan focused on individual assessment for adults that provides more specific information about where to start teaching.

Quick assessments of literacy skills were recently discussed on the LINCS Workforce electronic discussion list and can be found in the archives (posts 180-207).

Resources from adult literacy programs in the United Kingdom, including 'Stick with It' and diagnostic tools, can be found at this site.

The California State Library System developed a simple reading and writing habits assessment instrument that could be used by learners and their library literacy tutors. It was known as the California Adult Learner Progress Evaluation Process (CALPEP).

Creating Life Maps: How to Make a Map of Your Life | eHow.com

Story Boards: How to Use Storyboards in the Classroom | eHow.com

The NAAL report, Literacy in Everyday Life: Results from the 2003 National Assessment of Adult Literacy.

A secondary analysis of the national NAAL data done for NIFL, Investigating the Language and Literacy Skills Required for Independent Online Learning, showed that Internet use rose with literacy proficiency.

The Tom Sticht and William Armstrong report entitled Adult Literacy in the United States: A Compendium of Quantitative Data with Interpretive Comments presents a developmental theory of literacy and a history of, and sample items from, standardized tests in the U.S., including military tests from World War I to the 1990s and all mass adult literacy tests from the 1930s to the National Adult Literacy Survey (NALS) of 1993.

Sandra Van Fossen and Thomas Sticht, Teach the Mother and Reach the Child: Results of the Intergenerational Literacy Action Research Project of Wider Opportunities for Women. (Washington, DC: Wider Opportunities for Women, July 1991).

Sheehan-Holt, J., & Smith, M. C. (2000). Does basic skills education affect adults' literacy proficiencies and reading practices? Reading Research Quarterly, 35(2), 226-243.