
Using Data for Program Improvement
Full Discussion


Descriptions of Data Use by Guests

Good morning, afternoon and evening to you all.

Today begins our discussion on Using Data for Program Improvement. I have pasted the announcement below - please note that there have been some edits to Guest bios. Also, I am trying to send 4 attachments (they are PowerPoint files) but I'm having a tough time getting them through the server. For now, you have the announcement below, and as soon as I arrange access to the PowerPoint files, I'll let you know. If you received the original announcement that I sent, you have one of the attachments already ("Using Data Effectively DCornellier"). Thank you for your patience with this.

Also, I would like to acknowledge that today is Patriots' Day, which is celebrated in many corners of the United States. Some of our guests, as well as subscribers, may not be present online with us today and this is fine - they will catch up with us tomorrow. For anyone ready to begin, please feel free to post your messages.

I'll start us off by asking our guests to briefly describe how they use data in their work to improve literacy services. Subscribers, please post your questions and share your own experiences using data. What type of data would you like to track and why?

Thanks!

Marie Cora

Assessment Discussion List Moderator

 

Good morning Marie. Thanks for giving me the opportunity to share my ideas with the members of this list. You asked, "I'll start us off by asking our guests to briefly describe how they use data in their work to improve literacy services."

I will begin my comments from my perspective as a state agency staff member. In my opinion, if we (at the state office) want local programs to utilize data for program improvement, we have to continuously model those behaviors at the state level.

We have used data to:

  • inform the development of our policies and recommendations;
  • highlight program practices that need attention;
  • target providers for on-site monitoring;
  • identify high-performing providers so that we can learn about strategies that work;
  • provide technical assistance and feedback; and
  • enhance our professional development model.

Ajit

Ajit Gopalakrishnan

Connecticut Department of Education

Middletown, CT

 

Hi Everyone,

It is a pleasure to be a guest on the list this week and my thanks to Marie for asking me and organizing this.

There is a strong federal initiative to promote the use of data for program improvement at the state and local levels. Through the National Reporting System (NRS) project, which I direct, we have conducted several training and technical assistance activities on this topic over the past 4 years, including two general training seminars on using data and more specific ones on promoting adult education programs, monitoring, and developing state and local report cards. All of the training materials and other information on the topic, including sample work from states, are available on the NRSWeb website, which Marie has referenced.

All of the other guests have done a great deal of interesting work and many of them attended our training (and Sandy Strunk served as a trainer for us a few years back).

I will be interested to get your questions and learn of your experiences, as well as the responses from the other guests.

Larry Condelli

 

Good afternoon, everyone, and many thanks to Marie for putting this panel together. Using data for decision-making has been a passion of mine for some time. I think by nature, I'm just one of those people who always has questions about things - and I love testing my assumptions about how the world works. It started some years ago when Pennsylvania first began its work with Project EQUAL (Educational Quality for Adult Literacy). Judy Alamprese was our professional developer and she challenged each of our sites to pose a question related to program improvement, collect some data related to the question, analyze that data, and then develop a program improvement plan based on our data analysis. I took to the process like a fish to water.

I still remember the first teacher I walked through this process with. She was a beginning ESL teacher - one of our best. She wanted to know if she was teaching the sorts of things her students were most interested in learning. This sounds fairly basic, but when none of your students speak English, it's a challenge to know if what you're teaching is what they most want to learn. To collect our data, we partnered with the Advanced ESL class and had them translate some basic questions into nine languages. At that time, our beginning curriculum was based on work skills and basic communication for the workplace. What we learned was that this particular cohort of students wanted to learn about shopping and healthcare - work skills were at the bottom of their list! More importantly, we learned that our assumptions about what students want and need to learn are not always accurate. We were teaching job prep when they wanted to know how to order a quarter pounder with fries (we've since started a health literacy unit, as well ;-).

That was the start of a program improvement process that we have used in our local program ever since. We have a program improvement team - which most years, is representative of all facets of our adult education program. I say most years because this year, for the first time, the program improvement team is limited to site managers and supervisors who have been working together to develop an ongoing progress monitoring system. Our progress monitoring system provides our learners with individual written feedback on a quarterly basis related to their actual attendance compared to the number of class hours available, achievement on standardized assessments they have completed, and goal attainment related to what they told us they wanted to achieve when they enrolled. This is the first year that we have shared written progress data with our learners in this manner. If using data for decision-making is a powerful tool for changing what we do, we wondered what impact the data might have on our learners. Our assumption is that it will have a positive impact on student retention, but implementation isn't stable enough yet to do a comparison study. I'd love to know if anyone else is doing this sort of written student feedback and what they've learned as a result.

Other ways we use data? Well, we always have one or two action research or inquiry projects running. Last year, two of our staff did a great action research project on the question, "Why do some of our students complete our orientation process but never reach enrollment status?" When this question came up in our program improvement meetings, we all had opinions (our team is never at a loss for opinions). Some speculated that "childcare and transportation" are the issues. Some of us were quick to suggest that "childcare and transportation" are the universal retention scapegoats for our field. Others felt that quality teaching was the issue. Still others suggested that the problem rests with the motivation of our learners. Without data for decision-making, we would have no mechanism for moving this discussion beyond the opinion stage.

We also collect customer satisfaction data twice a year. One day in the Fall and one day in the Spring we survey everyone who is in class with a simple instrument that rates the student's satisfaction with the classroom environment, the teacher, instructional materials, and goals. In fact, we just finished our Spring cycle and last week I got the report comparing our Spring 2007 student satisfaction numbers to our Spring 2006 numbers. Here's an interesting snippet - 69% of our family literacy students (N=42) strongly agree that they can use what they learn in class at home or at work. 58% of our ESL students (N=263) strongly agree. 27% of our ABE/GED students (N=150) strongly agree. Another interesting tidbit - 76% of our family literacy students strongly agree that the teacher starts class on time. 86% of our ESL students strongly agree. 59% of our ABE/GED students strongly agree. Well, as usual, one question always leads to another. I'm not exactly sure how to make sense of these numbers, but my next step will be to look at the disaggregated data by classroom. The good news is that all of these percentages are up from last year. Either we're doing better or we have an especially agreeable cohort of learners.

I think the point I want to make is - for us, data for decision-making is tied to our constant curiosity about the work we do. Yes, we routinely look at our performance against state standards - but that's a routine part of our work. The more interesting investigations tend to stem from something we notice and wonder about. Like - do ABE/GED teachers in our program really start class later than ESL teachers or does it just seem that way because so many of them start the day with individual work rather than group lessons?

Sandy Strunk

Program Director for Community Education

Lancaster-Lebanon Intermediate Unit 13

Lancaster, PA

 

I am, like Sandy Strunk, a product of Pennsylvania's program improvement process, Project EQUAL, and I was for a number of years a trainer in using data for decision making. The heart of this training was to help programs to describe, in detail, an area for improvement within their program, to ask a question based on that particular area of concern, to look at program data that related to the problem area, to come to some conclusions based on the data, and then to take actions that would result in program improvement. This sounds very simple, but in fact, it was a hard road for all of us, and there were difficulties at every step.

In the beginning, people (most of us) tended to ask questions that were too broad (or too narrow) or too vague, and we tended to look at aggregated data only -- and even when there was enough data to draw conclusions from, our action plans often seemed to have little relation to those conclusions. In short, learning to use data for program improvement was a surprisingly slow process, and it involved the creation of a habit of mind that was not at all "second nature" to most of our program directors and their staffs.

I think that our tendency now is to think that using data for making program decisions is just common sense--but I think it's a more complex issue. And one that has implications for professional development at all levels.

I would say that over time some of the programs I worked with developed the habit of using data for decision making, and that others reverted back to decision making by intuition--as Sandy said there's rarely a dearth of opinions in our programs.

Karen Mundie

Associate Director

Greater Pittsburgh Literacy Council

Pittsburgh, PA

 

Good morning from rain-soaked Boston. One area where we use data to improve literacy services in our program is attendance. In Massachusetts, we have a DOE web-based system that allows us to view class attendance data, as well as other pertinent data, which gives us tools to improve literacy services. I review the data and look at each individual class to see how attendance is doing. If attendance is low one month, I then review what happened. For example: Was there inclement weather? Have any natural disasters occurred in students' homelands? Was the teacher absent for a period of time? If yes, I know that is an outside factor. But if I see that attendance is low for more than a month, I investigate. I ask the program advisor to review the calls made to absent students to find out the reasons given for dropping out, then share the student feedback with the teacher and ask for her/his opinion. If the overriding issue is students feeling lost about what is being taught, the teacher opens a discussion about the lessons and works with the students on the topics. Students feel empowered in their learning and attendance improves.

Toni F. Borge

Adult Education & Transitions Program Director

Bunker Hill Community College

Chelsea Campus

Chelsea, MA

 

Good Morning Everyone!

I am sorry to be so late joining the group. I am one of the panelists, Rosemary Matt. Just recently, I accepted the position of NRS Liaison for New York State, so monitoring data and providing technical assistance to programs in need is now my entire focus. We also have a large contingent of programs that provide service through a volunteer network in a one-to-one tutoring arrangement. Your concerns, Mary (see posts by Mary G. Beheler in "Searching for Usable Data"), regarding the inability of programs such as these to meet performance benchmarks are shared by New York programs as well. Our state department has thoroughly considered the population these folks serve and considers that to be a mitigating circumstance when assessing their performance. The value of these organizations serving some of our lowest-skilled readers is well known and appreciated. In a state the size of New York it is possible to absorb the lack of educational gain increments from these agencies, as they are balanced by other programs serving students for whom gain is imminent.

At the same time, however, we have worked closely with these programs and their statewide leadership team to provide technical assistance in the area of assessment. As they learn more about the strategies and nuances that revolve around the NRS accountability system, they are better able to show whatever gain is possible from their students.

As some of you are aware, New York also utilizes the program-level Report Card. We attended the training two years ago that was provided by Larry and his staff at AIR. I would strongly recommend this training to any state considering this accountability tool for programs. We have made incredible advances in terms of identifying high-performing programs and targeting those in need of technical assistance through our Report Card Rubric. Marie has posted three PowerPoint presentations that I offer in training built around this rubric. To further support our volunteer programs, our state department has chosen to rank these programs among themselves, providing a homogeneous category specific to their needs. They are not measured against the cohort of traditional adult education programs.

Another strategy we have recently embarked upon: through our statewide data system, we have introduced Collaboration Metrics. Many students working first with these volunteer programs while they are at minimal skill levels will eventually move into traditional programming and continue to succeed through the educational levels. To ensure the volunteer programs remain tied to the student's success, they are informed of the students' progress through the data system and can subsequently report on that gain as well.

These few methods of support have been well received by our volunteer affiliates. Hope they may give you and your state some thoughts for the future.

Rosemary

Rosemary I. Matt

NRS Liaison for NYS

Literacy Assistance Center

New Hartford, NY

 

Hi,

In Massachusetts we have developed and implemented a plan for ongoing staff development on using data to promote continuous improvement. Our statewide professional development system (SABES) has developed a program planning process that incorporates NRS and other data to promote continuous improvement. SABES uses the following approaches to support program planning:

  1. a comprehensive 12-hour course, offered in all five regions of the state, on planning for program improvement, including a module on types and sources of data, data quality, and data analysis (the course culminates with presentations by participants on their program planning activities);
  2. on-site coaching to selected programs in need of a tailored approach;
  3. a separate data module from the planning course, presented twice as a stand-alone workshop; and
  4. program and staff development sharing groups that provide forums for directors and practitioners to share experiences.

Follow-up is provided for all participants who attend this training. Programs are now required to submit program improvement plans that are tied to their performance in attendance, average attended hours, pre- and post-testing, learning gains and, eventually, the achievement of student goals. SABES provides ongoing support to programs in developing their continuous improvement plans. We have found that offering more intensive courses for several staff members at local programs has been very helpful.

Donna Cornellier

SMARTT ABE Project Manager

Adult and Community Learning Services

Massachusetts Department of Education

Malden, MA

 

Hi all:

I wanted to chime in about our program's use of data since this is the focus of our discussion. Coincidentally, I am in the process of writing our proposal for next year, so I am knee-deep in data even as we speak!

The use of data takes many forms in our program. We look at what most people consider the "hard data" -- the raw numbers with regard to attendance, learner gains, retention, goal attainment, etc. We believe, however, that the numbers alone provide an incomplete picture of what is happening, so we use the numbers as a basis for discussion, not decision making. After analyzing the numbers, we begin to look at additional sources of data that we find essential in informing our planning -- meetings with staff, classes, our student advisory board, and focus groups.

Here's an example we're currently working on -- we did a two-year analysis of learner retention and began to document why students did not persist. We found that the retention for students who enrolled after January 1 (our program runs on a school calendar year from September to June) was significantly lower than the retention for students who began in September. Even more compelling, we learned that the retention for students who began after March 1 was 0%.
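For readers who want to try a similar breakdown on their own rosters, here is a minimal sketch in Python; the enrollment records and cut-off dates are invented for illustration, not Massasoit's actual data:

```python
# Group students by enrollment date and compare retention rates.
# Records and cut-off dates are hypothetical.
from datetime import date

students = [
    {"enrolled": date(2006, 9, 11), "retained": True},
    {"enrolled": date(2006, 9, 18), "retained": True},
    {"enrolled": date(2007, 1, 8),  "retained": False},
    {"enrolled": date(2007, 1, 22), "retained": True},
    {"enrolled": date(2007, 3, 5),  "retained": False},
    {"enrolled": date(2007, 3, 19), "retained": False},
]

cohorts = {
    "September start": lambda d: d < date(2007, 1, 1),
    "After January 1": lambda d: date(2007, 1, 1) <= d < date(2007, 3, 1),
    "After March 1":   lambda d: d >= date(2007, 3, 1),
}

for label, in_cohort in cohorts.items():
    group = [s for s in students if in_cohort(s["enrolled"])]
    kept = sum(s["retained"] for s in group)
    rate = kept / len(group) if group else 0.0
    print(f"{label}: {kept}/{len(group)} retained ({rate:.0%})")
```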

We met with staff and students, and did some research around student retention issues. After a year-long process, we decided to pilot a "managed enrollment" approach. In Massachusetts, our grantor (MA DOE) allows us to "over-enroll" our classes by 20%, so we enroll 20% more students in the fall. When students leave, we "drop" the overenrolled students into funded slots. This allows us to keep the seats filled even with the typical attrition that occurs.

In January, when we do our mid-point assessments, we move students who are ready to progress to the higher level. That typically leaves several openings in the beginner levels, and we begin students in February as a cohort. This year, we implemented new orientation programs, including a requirement that new students observe a class before enrolling.

While it is still too early to tell if these new procedures will have a positive impact, we are hopeful and we know anecdotally that the transition seems to be easier for some of these students. We are eager to look at the data at the end of the year to analyze the effectiveness of this plan.

As we begin to look at our data, we are finding that there seems to be a unique set of issues for our beginner ESOL students. We suspect that the lack of effective English communication skills to advocate for themselves with employers is influencing their attendance and persistence. This is an issue that we are beginning to tackle in terms of policy. Do we need to have a more flexible, lenient policy for beginner students? Is there a way to support students in addressing these employment issues? How can we empower students more quickly? Are there other issues for these beginner-level students that affect their participation? As we enter these discussions, the numbers will provide a basis for developing strategies, but the students themselves will be our greatest source of valuable data.

Luanne Teller

Director of ABE (ESOL) and Transitions to College Programs

Massasoit Community College

Stoughton, MA

 

Hi Luanne,

I find it interesting that what you are finding in your data seems to be consistent with what we see in our GED classes here in Arizona. Often the last group, who enter in March, are the least likely to stay with the program until post-testing, and the August group seems to have the highest post-testing and retention rates.

Tina Luffman

Coordinator, Developmental Education

Verde Valley Campus

 

A few posts ago, Luanne spoke to her concerns with retention, particularly as students entered in the later portion of the fiscal year. Luanne, you mention a student retention rate of 0% for those entering after March 1st. I am curious: what is your benchmark beyond which you expect students to remain in programming? Is there an hour allocation, or are you basing your calculations on completion of the session only?

Also, I wondered if Massachusetts employed any distance learning for students leaving a program for employment. Larry, I am sure you have heard New York voice our concern previously regarding our data indicating what appears to be a disincentive for programs to encourage students to enter employment, as that often results in the student leaving the literacy program prematurely and not showing educational gain. Have other states' data shown this trend?

Rosemary I. Matt

NRS Liaison for NYS

Literacy Assistance Center

New Hartford, NY

 

Good questions....

For us, it's sometimes a question of poor attendance, and then we have to get to the heart of the problem. More often, however, when it came to late-year enrollment, students simply stopped attending altogether.

It was extremely difficult to get feedback of any kind from these students, so we had to piece together bits of information we could gather from students, teachers, and other students in the program who were friendly with the departing students.

Our classes end in June, and begin again in September. I reviewed all the data for the year for students who left and never returned; I didn't specifically set March 1 as a cut point---it revealed itself to be significant upon review of the dates when students enrolled and then left.

When compiling the data into charts, it became apparent that any enrollment after the first half of the year was compromised, but no enrollments after March 1 were successfully retained.

We are just beginning to work with one of my programs on distance learning as a supplement to classroom instruction...so new that I have no idea where it's going!

Hope this clarifies...Luanne Teller

 

Using Data to Identify Professional Development Needs

Hi there,

I like to use state database information to show me which teachers need assistance and which teachers are modeling good practices. The database is certainly not the final word, as we all have had groups of students that performed well or poorly regardless of instruction. The data is a good place to spot red flags, however. Student assessments and staff self-assessments are also great for predicting professional development needs. Data can also help us see which groups of people we are reaching with our advertising and which we are not. Then we can create new means of recruitment for our program.

Tina Luffman

Coordinator, Developmental Education

Verde Valley Campus

 

Hi Tina,

Using the data to identify which teachers need help and what constitutes good practice is a very intriguing idea. Can you tell us more about that -- what indicators you use, for example?

Larry Condelli

 

Hi Larry,

Yes, I look at which classes have students with strong attendance records as well as which classes are making their educational gains for NRS reporting purposes. The classes that are keeping their students (retention), as well as those showing that students are learning, tend to have teachers using better instructional plans. For example, one semester I had a new teacher who was making great educational gain progress in Reading and Language, but none of the students were coming up in their Math scores. I had to observe the class myself and talk with the teacher to identify where the problem existed, if indeed there was a problem. We did an intervention and garnered a few math gains as well by the end of the semester.

We also have student assessments through our college, and these produce indicators as to which teachers are doing great and which are frustrating learning. The better indicator is the teacher self-assessment form where we ask teachers to let us know which items they feel confident in and which they need help in. The purpose of this self-assessment is to let the administrators and coordinators know what professional development activities to prepare for our upcoming staff development day.

Tina Luffman

 

In addition to the other great ideas discussed yesterday, I'd like to offer an additional way to analyze NRS data on the local level. If you divide the contact hours by the number of enrolled students in each educational functioning level (EFL) on Table 4, you will get a rough idea of how long students in each EFL are staying in class. If you find that students in one of the levels - say beginning literacy - are leaving before they have enough hours to post-test or make a level gain, you may wish to examine further the instructional strategies, curriculum, and professional development needs of staff serving those learners. If the average contact hours are high, but level gains in that EFL are low, once again, a review of assessment, instruction and curriculum may reveal specific professional development needs for your program.
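Barbara's first calculation is easy to automate. Here is a minimal sketch using made-up Table 4-style totals and a placeholder post-test threshold; it simply flags levels where average hours per student fall short:

```python
# Rough average hours per student in each educational functioning level (EFL),
# computed from Table 4-style totals. Enrollment figures, contact hours, and
# the post-test threshold are hypothetical.
table4 = {
    "Beginning Literacy":        {"enrolled": 18, "contact_hours": 540},
    "Beginning Basic Education": {"enrolled": 42, "contact_hours": 2310},
    "Low Intermediate Basic":    {"enrolled": 65, "contact_hours": 4550},
    "High Intermediate Basic":   {"enrolled": 58, "contact_hours": 3480},
}
POST_TEST_THRESHOLD = 40  # hours required before post-testing (varies by state)

for level, row in table4.items():
    avg_hours = row["contact_hours"] / row["enrolled"]
    flag = "  <-- below post-test threshold" if avg_hours < POST_TEST_THRESHOLD else ""
    print(f"{level:26s} avg hours/student: {avg_hours:6.1f}{flag}")
```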

Barbara Hofmeyer

Coaching Consultant for
Indiana DOE, Division of Adult Education

 

Barbara,

That is a great idea and if you can break it down even further -- by site or class, for example -- you can get even better insights.

Larry Condelli

 

Tina and Larry,

I do this, too, but I'm also very interested in positive deviance. How is it that some teachers, who work in very challenging settings, are able to produce such significant results? For example, there's a teacher in our program, let's call her Miranda, who consistently has high enrollment, wonderful retention and excellent student achievement. I can assign her to ABE/GED, ESL, family literacy, day, evening - it just doesn't seem to matter. It's a much harder data collection question, because she thinks she's doing what everyone else is doing and, ostensibly, she is. What jumps out at me when I visit her class is the sense of community she's able to build that seems to be based on her belief that her students will accomplish great things.

Sandy Strunk

Program Director for Community Education

Lancaster-Lebanon Intermediate Unit 13

Lancaster, PA

 

Hi Sandy,

I think you answered your own question. Teachers who create a sense of community are almost always more successful at retention. Sometimes it is difficult to put a finger on exactly what creates that environment, but a caring instructor is predisposed to generate great results regardless of time of day, demographics of the classroom, or whatever variables are offered.

Tina Luffman

 

Hi Sandy,

I agree with Tina that creating a sense of community helps students feel comfortable and open to risk-taking in their learning. I have a transitions-to-college program that is based on a cohort model. The students take the same classes and build bonds with each other that carry over when they enroll in their college programs. The retention in that program is high: 86%. Kudos to your teachers.

Toni Borge

 

Retention and Persistence

(For more discussion on the issue of retention, see Tracking Learner Outcomes: One-Year versus Multi-Year Reporting Periods)

Hello Tina,

I wonder if you could say more about what exact data you use to see what your teachers need. Is it attendance or level gain, or something else?

We use attendance data to track the total hours students are attending and to determine if they are eligible to post-test (a minimum of 40 hours in NM). If students attend at least 75% of the potential hours, then they are eligible to get a certificate at the end of the session (usually a 12-week session that meets 5 hours each week). We also look at the overall retention rate by teacher, as well as the post-test rate and the level gain rate. I agree that these could indicate a need for training.
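As a quick illustration of how those two checks might be computed from attendance records, here is a minimal sketch; the 40-hour minimum and 75% rule follow the description above, while the student records are invented:

```python
# Post-test and certificate eligibility from attendance data.
# Student records are hypothetical.
students = [
    {"name": "A", "hours_attended": 52, "hours_offered": 60},
    {"name": "B", "hours_attended": 38, "hours_offered": 60},
    {"name": "C", "hours_attended": 46, "hours_offered": 60},
]
MIN_POST_TEST_HOURS = 40   # minimum hours before post-testing (per the post above)
CERTIFICATE_RATE = 0.75    # must attend at least 75% of potential hours

for s in students:
    rate = s["hours_attended"] / s["hours_offered"]
    eligible_post_test = s["hours_attended"] >= MIN_POST_TEST_HOURS
    eligible_certificate = rate >= CERTIFICATE_RATE
    print(f"Student {s['name']}: attended {rate:.0%} of potential hours, "
          f"post-test eligible: {eligible_post_test}, certificate: {eligible_certificate}")
```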

We are interested in other ways that programs use data to help with retention. Thank you.

Barbara Arguedas

Santa Fe Community College

Santa Fe, NM

 

Dear Colleagues:

Here at Franklinton Learning Center, we use data every day in our program to help us track and improve the end results coming out of our program. We use enrollment data to check the reach of our program, average-hours-attended data to check the depth of engagement of students, and the number of students through the door versus the number completing enrollment to help us improve retention in the crucial orientation period of classes.

We have a program called ABLELink here in Ohio that has made it very easy to track some areas. It has also allowed us to compare statistics from one year to another so we know how we are doing in comparison to previous years. By tracking information collected on attendance, educational gain, hours of engagement and accomplishments, we have been able to improve all of these efforts.

Tracking and constantly checking this data is what has made it possible to improve. We can easily pull up reports on testing, who has tested, progress made, who hasn't tested, attendance, etc. We can organize that information by class, by teacher, by program, or by site, which allows us to compare effectiveness of programs and staff and assign responsibility for improvement where needed.

I would like to be able to track consistency of attendance over time, not just total hours attended. I think this might give a better picture of the progress to be expected than the total time attended does. I would also like to understand more about how I can use all of the ABLELink data collected to improve my program's overall effectiveness.

Respectfully submitted by,

Ella Bogard, Executive Director

Franklinton Learning Center

Columbus, Ohio

 

Hi Ella,

Disaggregating by class can be very effective for understanding what is going on.

I wanted to comment on your last remark about tracking consistency of attendance.

Attendance and persistence are very popular topics these days and most data systems allow for tracking of student attendance and persistence patterns. One thing you might consider is looking at learners who "stop out" -- those with sporadic attendance patterns, attending for a while and coming back later. Another measure is the percent of possible time that learners attend. You compute this by dividing the attended hours by the total possible (e.g., a learner attends 8 hours a week for a class scheduled 10 hours a week = 80%). Some research I did on ESL students showed that those who attended a higher proportion of possible time learned more, independent of total hours. I think this is so because this measure reflects student motivation to attend.
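Both measures are straightforward to compute from weekly attendance records. Here is a minimal sketch with invented data; the three-week gap used to flag a "stop-out" is an arbitrary placeholder, not an NRS definition:

```python
# Percent of possible time attended, plus a simple "stop-out" check based on
# consecutive zero-hour weeks. All data and thresholds are illustrative.
weekly_hours_scheduled = 10
attendance_by_week = {
    "Learner A": [10, 8, 9, 0, 0, 0, 7, 8],   # returns after a three-week gap
    "Learner B": [6, 5, 7, 6, 5, 6, 7, 6],    # steady but partial attendance
}
GAP_WEEKS_FOR_STOP_OUT = 3

for learner, weeks in attendance_by_week.items():
    attended = sum(weeks)
    possible = weekly_hours_scheduled * len(weeks)
    pct_of_possible = attended / possible
    longest_gap = run = 0
    for hours in weeks:                       # longest run of zero-hour weeks
        run = run + 1 if hours == 0 else 0
        longest_gap = max(longest_gap, run)
    stopped_out = longest_gap >= GAP_WEEKS_FOR_STOP_OUT
    print(f"{learner}: {pct_of_possible:.0%} of possible hours, "
          f"longest gap {longest_gap} week(s), stop-out: {stopped_out}")
```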

Identifying and studying "stop-out" learners might tell you a lot about why these types of students don't attend more regularly and can inform you of their needs, which could help in designing classes and programs for them.

Larry Condelli

 

Identifying Program Areas in Need of Support

Larry and all,

Table 4 and Table 4b (of the NRS Reports) contain a great deal of information which, if analyzed by program, site, and class, can help begin to identify strengths and areas needing additional support. For example, these two tables can help program administrators/teachers identify some of the following reasons behind low percentages of level gains:

  1. Students aren't staying long enough to post-test. If the total contact hours divided by the total number of students is less than the number of hours required for post-testing, we know we have a retention problem. Now we can look at: the intake process and transition into class; whether instruction is meeting student needs/expectations; whether curriculum and materials meet the varied levels and learning styles/modalities of students; outside barriers that keep students from attending; etc.
  2. Students are staying long enough to post-test, but are not being post-tested. (Compare the number of students on Table 4 to the number on Table 4b.) If this is the case, we can look into whether this is a staff problem - i.e. they have no system to know when a learner has enough hours to post-test; they need assessment training/support; etc. - or a student problem. In some programs we have found that when students learn they will be taking a post-test, they stop coming for a period of time. In this case, we need to help students understand that we are assessing our effectiveness, not their intelligence.
  3. Students are being post-tested, but not making level gains. (Compare the number of students on Table 4b -i.e. those who have been post-tested - to the number who show a gain on that table.) If this is the case, we can begin to look at instructional strategies and curriculum/materials.

There is one other possibility that I can think of. Students may be post-tested and making gains, but in some cases there is a glitch in the paper flow and for some reason information is not being submitted for data entry or data entry is flawed. If you have a system for submitting class data back to each teacher for review, they can help you identify if this is the case and where the glitch may be taking place.
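These three checks lend themselves to a simple, automated first pass. The sketch below uses made-up figures for a single level, and the post-test-rate and gain-rate cut-offs are placeholders rather than NRS standards:

```python
# A rough decision tree over the three checks above, for one educational
# functioning level. All figures and cut-offs are placeholders.
level = {
    "enrolled": 60,          # students in the level (Table 4)
    "contact_hours": 2100,   # total contact hours for the level
    "post_tested": 31,       # students with a post-test (Table 4b)
    "level_gains": 12,       # post-tested students who advanced a level
}
HOURS_FOR_POST_TEST = 40     # state post-test policy (placeholder)
MIN_POST_TEST_RATE = 0.60    # placeholder cut-off, not an NRS standard
MIN_GAIN_RATE = 0.40         # placeholder cut-off, not an NRS standard

avg_hours = level["contact_hours"] / level["enrolled"]
post_test_rate = level["post_tested"] / level["enrolled"]
gain_rate = level["level_gains"] / level["post_tested"]

if avg_hours < HOURS_FOR_POST_TEST:
    print(f"Avg {avg_hours:.0f} hrs/student: retention problem -- look at intake, "
          "instruction, materials, and outside barriers.")
elif post_test_rate < MIN_POST_TEST_RATE:
    print(f"Only {post_test_rate:.0%} post-tested: check whether staff track hours "
          "and whether students avoid the post-test.")
elif gain_rate < MIN_GAIN_RATE:
    print(f"Only {gain_rate:.0%} of post-tested students gained a level: review "
          "instructional strategies and curriculum/materials.")
else:
    print("Rates look reasonable -- verify data entry and paper flow.")
```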

In short, I love the NRS reports because they tell you so much about your program if you take time to analyze them. I hope I haven't taken up your time telling all of you things you already knew. I'm looking forward to hearing some great new ideas this week.

Barbara Hofmeyer

Coaching Consultant for

Indiana DOE, Division of Adult Education

 

Barbara,

I like your thinking!

What we advise is to monitor program pre-/post-test rates and retention factors periodically (monthly or quarterly). In this way you can spot a problem in time to intervene, explore it, and correct it.

In our Data Detective training we illustrated how to use data as the starting point for exploring what might be happening in your programs. As you note in your example below, low educational gains may be due to low retention, program staff not administering the posttest or students just not making gains. Data can help you pinpoint which of these three problems may exist. Once you know this, you can dig deeper to look for the underlying issues on which you can direct program improvement (such as changes in class schedules, better support and training for assessment or changes in instruction that may be needed).

Larry Condelli

 

Searching for Usable Data/Dissatisfaction with Available Tools

We tutor adults. No children.

Almost all our students are at Beginning Literacy to High Intermediate ABE level. Almost no high or even low adult secondary. At the secondary level we only get the students that can't (or won't) tolerate study in a regular ABE classroom. ESL instruction is done by a different organization, with paid teachers.

We net 20 to 25 students with more than 12 hours of study each year. We are so small that sometimes an entire FFL will have only one student in it. When that happens the only question is, "How many advanced a level: 0 or 100%?"

We deal with students on a highly individualized basis. One may need to learn to read again after having a stroke or a fever. Another may have taught himself to sight read at a very high level, but neglected to teach himself any spelling or writing skills. A high school graduate may not have learned even his ABCs, for whatever reason. One or two students a year might have an employment or higher education goal. (Then WV can't verify it, if the student works or studies out of state.) I can safely say that no two students have been alike in the nearly ten years that I have been here.

I genuinely *like* statistics and know they can be very useful, and don't mind gathering data to be put in a bigger pool if what comes back is helpful. However, if a level has only 3 students, is the data even "statistically significant" if just 2 of them are available for both pre and post assessment? 2 of 4?

Some things are better seen by microscopes and others by telescopes. Right now neither NRS nor CASAS seems especially useful at a local level. Maybe all I need is to find out how to focus them. Maybe they should be just trashed. They may cost more to use than they return in terms of time and money and *stress*, on us and especially on our students.

I'm from West Virginia, not Missouri, but, "Show me!" (Please.)

Mary G. Beheler

Tri-State Literacy

Huntington, WV

 

Mary,

I think you raise some valid concerns. When you're working with a small pool of students like this, aggregate statistics can be rather meaningless. I would think the most helpful data for you would be individual diagnostic reading assessments and progress monitoring data. Are you familiar with the Adult Reading Components Study and the work related to using reading profiles? You might want to check out http://lincs.ed.gov/readingprofiles/. I'm wondering if the component reading assessments wouldn't go a long way toward "focusing" the reading instruction you offer based on each learner's profile. That doesn't get you off the hook for NRS reporting, but it does provide a mechanism for meeting the highly individual needs of your learners. Just a thought.

Sandy Strunk

Program Director for Community Education

Lancaster-Lebanon Intermediate Unit 13

Lancaster, PA

 

Yes, WV would "consider our mitigating circumstances" if we had missed a few of our goals. But we would also get a warning about, "Money is tight, and if you don't make your goals...."

We have made our goals, but sometimes completing the nagging and pleading it takes to get the one last student in to post test (and succeed) has been frighteningly close to the fiscal year deadline. We had a group of 5. Three post-tested: 2 "improved a level" and 1 did not (even though he had actually gained more CASAS points than the others). Therefore, our NRS score at that level was only 40%. We needed something above 50%. We had to get another student assessed!

One missing student had moved hundreds of miles away. The other had a new job providing him many overtime hours. He finally came in on the last possible day to post-test and fell across the FFL line with a 1- or 2-point gain. Our improvement rate for that level suddenly jumped from 40% to 60%! That's silly (IMHO).
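To make the arithmetic behind that swing concrete, here is a minimal sketch; the figures mirror the example above and the 50% benchmark is illustrative:

```python
# How much one student can swing a small level's NRS gain rate.
level_size = 5    # students in the level
gains = 2         # students who post-tested and advanced a level
benchmark = 0.50  # "we needed something above 50%"

before = gains / level_size
after = (gains + 1) / level_size   # one more student post-tests and gains a level

print(f"Before: {before:.0%} (above benchmark: {before > benchmark})")
print(f"After one more gain: {after:.0%} (above benchmark: {after > benchmark})")
```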

To complicate things a bit, other funders, such as United Way, look at these statistics as well. Explaining levels, and how some students can improve quite a bit and not be a "success," is hard to do without sounding like we are just whining. Fortunately, they have allowed us to use any 5-point CASAS gain as a measure of success instead of using the NRS brackets. And if one student comes up 20 points, we get 4 United Way credits! (On the NRS report it counts as only one gain in his entry level, though he may have crossed more FFL lines.)

Doesn't using the different level brackets, instead of total points gained by each individual, distort the results, even in larger groups? Why are they used?

Since the statistics for small groups can be changed so drastically by even one individual, why not pool the results from groups like ours? (Call the pool "The Long Tail" if you must: http://en.wikipedia.org/wiki/The_Long_Tail) The pool might give researchers an idea of whether classrooms or one-on-one tutoring is most effective at the lowest literacy levels.

So, if you have any influence, please try to persuade NRS to describe the goals for small groups in terms that make more sense for our situation. Right now we are like little kids clomping around in Mom's high heels. It is hard to work that way, and very far from useful.

All of which is not to say that individual assessments are not quite helpful for spotting individual problems and successes. They do help.

Mary G. Beheler

Tri-State Literacy

Huntington, WV

 

Hi Mary,

Using the CASAS 5-point gains and United Way credits is a good way to help your program get the recognition and credit for helping learners. We encourage states and local programs to use such strategies to demonstrate gain for small programs such as yours and others that serve students at the lower literacy levels. This is not a distortion at all but a more accurate way to look at your program gains in this case.

The NRS is a national system, and for that reason there has to be some generalization and standardization so that cross-program and cross-state aggregation is meaningful. But there is flexibility at the state and local level to use other measures to show gains and progress and to meet the accountability requirements we all have to face. We hope states and local programs take advantage of this flexibility at the local level, where there is no need for national aggregation. Your state or local program in your area could pool your results for your own use.

Larry Condelli

 

Does GIGO (garbage in, garbage out) apply to the data all of us, big and small, are gathering?

The information gleaned from the CASAS Life Skills (or any other) assessment tool can be useful for spotting problems and successes, but because of the *multiple layers of skills involved in answering any one question* the specific question(s) missed must be looked at very carefully. But, getting the student feedback that makes looking carefully possible gets into problems of assessment confidentiality.

Example: An excellent student may understand *everything* else about a certain CASAS map question, but if he or she has heard the expression "X marks the spot" and assumes X means the goal, not the beginning, this question will be nonsense. I don't think the check-off sheet of demonstrated skills specifies, "Knows that on *this* map X means 'Start here.'" If the student's tutor or I can't discuss a missed question with the student, how will I discover that?

How do we know that the question a student answers or misses actually assesses the skill the assessment manual tells us it does?

I was scoring an answer sheet and was dismayed at how poorly a student was doing, when I noticed I was using the math answers, not reading. I switched to the correct set, and the student made an even worse score! Both scores were in the "valid" range, too! How much confidence should I place in that assessment?

Having the questions in a booklet and marking the answers to multiple choice questions on a separate sheet may be a skill even somewhat advanced adult students do not have. Because our student workbooks don't use multiple choice questions, we have actually created lists of number-letter pairs to see if a student can mark the letter in the appropriate column of each numbered row on a separate answer sheet. That's all. (Did that after a strong level 2 student marked the answer sheet by page number, not question number, with answers to 2 and 3 questions marked in the same row.)

Has anyone made an effort to see if the lower-level literacy students want to learn what the CASAS or other accepted NRS assessments want us to teach? Lots of our students are on SSI. They don't see the point in learning about employment applications. That often means any question about employment forms isn't important enough to take seriously, even if the ones about other forms are.

Colleges have sense enough to make all freshman year classes pretty generic, and leave the "major" study to later years. Is it useful for assessment of beginning literacy to get so specific so soon? Whatever happened to "Learn to read; then read to learn"?

We used to use the quick and unintimidating SORT-R for student assessment. Even on that simple test, almost all men missed the word "dainty," no matter what their reading level. Does anyone know if any specific question(s) on the assessments used for gathering NRS data is answered incorrectly by most students at any one level? Or if a significant number of students in the Laubach series miss different questions than students in Challenger or Voyager, or another series?

Mary G. Beheler

Tri-State Literacy

Huntington, WV

 

Mary,

Where did the valid score place the excellent student who was doing poorly? Since we work from the scale score and not the number correct, I do not know how to interpret "doing so poorly." When working with teachers and students, we try to place an emphasis on where the student falls on the scale or NRS level. In training with teachers, we always stress that students are not at a single level but have a range of strengths and needs, and we discuss how to present this information to the student.

I agree that a missed question does not tell us why the student was not able to correctly answer it; however, taking "multiple measures" of student performance over time with a variety of formats tells me whether the student has the concept and whether the student can transfer what is known to different contexts.

In the example cited below, in most exercises that use maps the X is also marked with words such as "you are here." It would seem that a student relying on "X marks the spot" rather than "X = you are here" may have a reading/literacy problem. As a teacher I would watch that student read directions and try to perform tasks. Does the student ask others to help explain the task? Does the student ask me, the teacher, for help? Observing the student's behavior and patterns of work is part of the assessment process, and from your other posts I think that is what you seem to do in your program with such a variety of learners at vastly different levels.

The last point you make, about application forms when the student does not intend to seek employment, presents a challenge that we always face. Even if not an application for work, all ESL students are faced with filling out forms, and as a teacher I want students to learn to "transfer" knowledge, so I think I might use your quote "learn to read and then read to learn." That is, I would use the application form for work as a transition to other forms, so filling out forms is the concept that I am teaching, as well as helping students understand that there are many forms they will have to fill out in English and that forms have certain questions and vocabulary in common. That way there is a relationship between the assessment and what is taught. The context of the question is not as important as the ability to read and fill out forms. I explain to students that the employment form itself may not be important for them, but I would try to brainstorm with them about when they might need to help someone with an employment form.

Since you mention using the CASAS Life Skills test, you might want to look at the new tests CASAS has developed, such as Life and Work. If your state has other approved assessments for ESL, then you might look at those assessments to see if one is a better match for your curriculum. I have found that the CASAS Literacy tests and Level A reading tests are very sensitive instruments for tracking lower-level students' learning gains.

Dan Wann

Professional Development Consultant

Indiana Adult Education Professional Development Project

 

We have no ESL students. They are all basic literacy students, primarily at FFL levels 1-4. We use the Life Skills assessment because the one for Employment Skills is just too irrelevant to our many disabled and retired students. I have not seen the new series, but the term "work" makes me suspect it might bring up some of the same issues. And right now both the time and money budgets are too tight to experiment.

The map question, which does *not* include a note that X means "you are here," has baffled many students, not just the excellent ones. I hate to "teach to the test," but now I try to remind our tutors to mark a "You are here" X on at least one of the maps they use in practice. It is just one of those things you either know or you don't. ("Everything is intuitive, once you know how," is one of my favorite quotes about learning yet another computer application.)

I wanted to see which questions our students missed most, and over the years the map question with the unmarked X has been prominent, though other map questions were not. (The one with the left pointing north arrow is the second most missed map question.)

The person who did worse after I began comparing his answers to the correct master is a very artistic student who once just filled in the bubbles on the answer sheet in a pretty pattern. (Had to have a serious talk with him about that.)

Scaled scores are derived from the raw score. If one is low, so is the other. I used the term "doing so poorly" because I knew he had previously answered more than half the questions correctly on a parallel assessment. This time I was seeing very few correct answers, even at the very beginning of the assessment, where the questions tend to be easier for most students.

Sometimes he is fully engaged in what he is doing and we get a good assessment. When he is in a "let's get this over with" mood, anything can happen. The day he did better with the wrong answer sheet was a day he was making random guesses. Even so, he marked enough of them correctly for his scaled score to be in the valid range. However, the ones he hit correctly were scattered all over the place. As a human being, I could see the randomness. I just junked what he'd done that day and gave him a different assessment a couple of months later, when his attitude was more suitable. Glad the random marking didn't happen when we were up against the end-of-fiscal-year deadline!

Mary G. Beheler

Tri-State Literacy

Huntington, WV

 

Folks,

I hope that this is not a duplicative posting, but one implicit assumption being made here is that NRS is itself valid. Many of the postings assume that it is, and work from there to elaborate usage for various accountability and program improvement purposes.

I work in Ohio and have noted the requirements promulgated by DAEL that other systems must align to NRS (most recently last fall with the Federal Register outline of the periodic review process) and in various letters from the AIR psychometricians to states or vendors.

When a national system is above all of the state systems, I would think that we should strive to keep it on the quality control line as well as all of the underlings.

In my reading of the history of NRS, I have not seen the sorts of construct validity studies that seem to be requested/required for state tests and vendor tests. Consider the width of the levels and the statements that comprise them.

There was a thread of work coming out of the CRESST organization a few years ago (Eva Baker's name comes to mind) called Standards for Educational Accountability (SEA), which might be useful if folks wanted to think about validation of NRS.

Another useful framework is the Data Quality Campaign (DQC) that Achieve, Inc. and other organizations are using to improve the seamlessness of transitions in P-16 systems. In Ohio, for example, we have ABLELink for adult basic education, EMIS for K-12, and other systems for One-Stops and post-secondary. Many of these systems end up in front of the "tower of babel" when they try to communicate, as noted in some of our "Data Match" to unemployment wage records.

JTA


 

Maybe aggregate figures for large groups can give some indication of whether programs are doing better or worse than they used to. Maybe.

But NRS surely gives meaningless results for small groups like ours (a total of 20-25 students with 12 or more hours per year). One individual can vary the group results far too drastically.

My first posting on the 16th asked a question about the statistical validity of the figures for a five-member level containing only 3 or 4 post-tested individuals. If these figures have real meaning for our group, would someone please explain? (I'm not a statistician, but I have had some college math classes, including calculus.)

The standard for a level with only one member could be 2% or 98%; it still amounts to pass/fail. At level 4 (high intermediate ABE) the student could improve by more than 10 CASAS points and the whole level, and therefore our group, would not be an NRS success!

Yes, I know, the state will take our smallness into consideration when looking at our figures, but the rules are not written that way.

It seems like it would be useful for small groups to have their results bundled for comparison to the big guys. Maybe invent a standard for a whole organization, if it is big enough. If level-by-level analysis of small groups is as nonsensical as I think it is, why burden us with it? Let us just report.

And you are right, JTA, the whole NRS system might be useless in the long run. Maybe even harmful.

Mary G. Beheler

Tri-State Literacy

Huntington, WV

 

Data and ESOL Students

I would like to add a few comments on retention and ESOL students.

We have recently heard a lot about "stopping out" and I think that can pertain to ESOL learners for many of the same reasons as ASE/GED learners - with the addition of issues such as stages of acculturation and/or home country responsibilities which may cause ESOL learners to withdraw for weeks or months and then possibly return.

I would also like to raise the issue of the mobility of the ESOL population. We see migration reports on immigrants and settlement trends, and I often wonder how much of a difference these trends make in retention when comparing ASE/GED retention rates with ESOL.

I think of the "stopover" trend we sometimes see in ESOL here in Baltimore, MD, where non-native speakers will enter and only temporarily reside here before moving to an intended, more permanent location. This obviously has great impact on retention. When comparing ESOL programs statewide or nationwide, the "stopover" trend may negatively impact the retention rates of certain programs.

Another thing we see is "shift," or movement around the beltway (as we call it). We have major ESOL class sites at locations along the Baltimore beltway that roughly encircles the city, and we see contraction and expansion at these sites based on the movement of the ESOL population. We will see that a site may suddenly have low retention across ALL six or seven ESOL classes offered - even the classes with veteran/experienced teachers with a great track record of retention. In some cases, the same teacher is also teaching at another site and his/her class there is doing well. Both of these things show that the attrition is not likely a result of instructional issues.

When we see this contraction of a site with mid-semester attrition, we can sometimes predict that at another site we will experience a boom in registration the next semester. It depends on whether it is more "stopover" (with learners leaving the area entirely) or just "shift" (learners relocating within the area). If it is the latter, learners who leave one site mid-semester will turn up to register the next semester at another site.

Suzi Monti

ESOL Curriculum Developer and Instructional Specialist

The Community College of Baltimore County

Center for Adult and Family Literacy

Baltimore, MD

 

Hi,

Shifts in the ESOL population can also be attributed to changes in employment opportunities and contraction in the availability of affordable housing. Massachusetts, like many places in the country, is suffering from a dearth of affordable housing. Rents have increased exponentially, and the city where my program is located has increased enforcement of the number of people who can legally live in an apartment building. People then shift to another low-rent area. Then they try to return to school. I only see this problem increasing, as little is being done outside of community organizations to address it.

Toni Borge

Toni F. Borge

Adult Education & Transitions Program Director
Bunker Hill Community College
Chelsea Campus
Chelsea, MA

 

Suzi, given that much mobility among your students, I'm curious about your curriculum. Do you have a reasonably set curriculum that is consistent across sites? If that were the case, the movement would have fewer implications for student learning gains.

And do you "move" the student records and hours in class forward from one site to another internally?

The beauty of our online databases in Pennsylvania is that students can't be duplicated in the system. Even if a student moves to a different program rather than just a different site, his record is in eData, and the new program and the old one share the student equally. Both programs can put in hours, but only one can put in assessment information. The program in which the student is currently active is usually the "primary" program.

There are, frankly, some students who, because they are so motivated to learn English, are active in more than one program at the same time. It doesn't matter which agency does the testing--we usually use the best results for the official record. If our database manager sees that we have better results than the primary agency, it benefits both of us to use that data. We might have caught the student on a better day, or our test might have been more appropriate for that particular student.

Karen Mundie

Associate Director

Greater Pittsburgh Literacy Council

Pittsburgh, PA

 

Karen,

In response to your question about our curriculum, we do use the same materials (core books) across classes at the same level, and while we have/encourage great flexibility to teach in response to student needs, there is commonality provided by a framework of target skills or strategies to cover per level, per semester. This does facilitate the learning process if a student transfers within a semester or even shows up the next semester.

As far as tracking, within our program and within a fiscal year we do pick up the students where they turn up assuming the identifiers are consistent. We also plan "sister sites" to encourage multiple enrollment to increase intensity/contact hours for students who desire it. We have the same issues with multiple test results in those cases but the information system we use seems to successfully handle that.

You mentioned students accessing more than one program and being able to track that. I am not sure if our statewide system in Maryland is able to do that (perhaps someone on the list can respond to that?). The issue of tracking can even become problematic within our program. Because of the lack of the usual identifiers such as SSNs, we have issues due to the use of multiple names, varied arrangements of names and surnames, reversed dates of birth, etc., as can be common with ESOL students. We assign a number to each student, but it is sometimes challenging to determine whether we are dealing with the same student or a different one. It can be detective work to try to sort it out. It would be interesting to know how often this impacts tracking.
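
To make the kind of record-matching "detective work" described above concrete, here is a minimal sketch of one way a program might screen intake records for likely duplicates when SSNs are unavailable. The field names, the similarity threshold, and the day/month-reversal check are illustrative assumptions only, not a description of any program's actual matching rules, and flagged pairs would still need staff review.

    from datetime import date
    from difflib import SequenceMatcher

    def name_similarity(a, b):
        """Compare two full names, ignoring word order (surname-first entries, etc.)."""
        def norm(n):
            return " ".join(sorted(n.lower().split()))
        return SequenceMatcher(None, norm(a), norm(b)).ratio()

    def dob_matches(d1, d2):
        """Treat dates of birth as matching if equal, or if day and month look reversed."""
        if d1 == d2:
            return True
        try:
            return date(d1.year, d1.day, d1.month) == d2
        except ValueError:
            return False

    def possible_duplicate(rec1, rec2, threshold=0.85):
        """Flag a pair of intake records for human review; this is a screen, not a verdict."""
        return (name_similarity(rec1["name"], rec2["name"]) >= threshold
                and dob_matches(rec1["dob"], rec2["dob"]))

    # Hypothetical example: the same student entered twice, surname first the second
    # time and with the day and month of her date of birth reversed.
    a = {"name": "Maria Lopez Garcia", "dob": date(1975, 3, 7)}
    b = {"name": "Lopez Garcia Maria", "dob": date(1975, 7, 3)}
    print(possible_duplicate(a, b))  # True -> route to staff for manual confirmation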

Suzi

 

Larry,

Could you tell us more about the ESL research on percentage of possible time attended? This is a new idea to me. Does it reflect greater intensity as opposed to lesser intensity for a longer duration - or do you think something else is going on? If your research is correct, there are certainly implications for how we structure instructional segments.

Sandy Strunk

Program Director for Community Education

Lancaster-Lebanon Intermediate Unit 13

Lancaster, PA

 

In light of Larry's comments, I would like to share a program quality standard that we have been using in Connecticut. We call it the "utilization rate" or "% of available instruction used". It is the percent of available class hours utilized by each student in the class. We aggregate this measure at the class level and the program level.

We have experienced some challenges with this measure, though. We are able to account for late starters by pro-rating the remaining available hours based on the late start date, but it gets unwieldy to also account for students who exit early. This measure works well for classes offered on a set schedule but can be problematic for learning labs, where the lab might be open for, say, 25 hours a week but a student is not expected to be there for the entire 25 hours; this could result in a low utilization rate even though the students might be attending, say, 10 hours a week. At the other extreme, some classes/programs may show high utilization rates but may be offering classes that run for only 40 hours in a semester. I find that combining this utilization rate with an absolute average of hours attended gives a better picture of the participation and persistence of learners within a program.
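
For readers who want to see the arithmetic behind a measure like this, below is a minimal sketch of how a utilization rate might be computed, with pro-rating for late starters as described above. The function names, the weekly-hours simplification, and the handling of early exits are assumptions for illustration; Connecticut's actual calculation may differ.

    from datetime import date

    def available_hours(class_start, class_end, weekly_hours, student_start):
        """Hours of instruction available to a student, pro-rated for a late start."""
        start = max(class_start, student_start)
        total_weeks = max((class_end - class_start).days / 7, 0)
        remaining_weeks = max((class_end - start).days / 7, 0)
        return weekly_hours * min(remaining_weeks, total_weeks)

    def utilization_rate(hours_attended, hours_available):
        """Percent of available class hours actually used by the student."""
        return 0.0 if hours_available == 0 else 100.0 * hours_attended / hours_available

    # Hypothetical example: a 12-week class meeting 6 hours per week; the student
    # enrolls 3 weeks late and attends 40 of the 54 hours still available.
    avail = available_hours(date(2007, 1, 8), date(2007, 4, 2), 6, date(2007, 1, 29))
    print(round(avail, 1), round(utilization_rate(40, avail), 1))  # 54.0 hours, 74.1%

Aggregating the per-student rate to the class or program level is then a simple average, and pairing it with the absolute average of hours attended, as suggested above, helps guard against the learning-lab and short-class distortions noted.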

I too would like to hear Larry's thoughts on Sandy's question. In my personal experience after looking at tons of data over the past 2-3 years from a variety of programs, I would expect that "intensity" (more instructional hours in a week) more than "duration" (more calendar days between class start and end dates) might result in greater learner attendance. For example, it is probably more likely that 20 ESL students will attend 100 hours each on average during a fiscal year if they are offered a class that runs 12 hours a week for 12 weeks than if they are offered a class that runs 4 hours a week for 36 weeks.

Another element that we are beginning to track more closely is retention across fiscal years. We know that many students don't achieve their goals within one fiscal year. Therefore, we are using our data system to track and report on students who are new in the fiscal year as well as those who might be returning to that program from a prior fiscal year.

What about recruitment? Do any programs/states look at the students served over the past six/seven years and compare that to say Census 2000?

Ajit

Ajit Gopalakrishnan

Education Consultant

Connecticut Department of Education

Middletown, CT

 

Sandy,

A few years ago I did a study on adult ESL literacy students that focused primarily on instruction. But we also looked at retention. We found that the proportion of time an ESL literacy student attended (measured as hours attended divided by the total hours the class was scheduled) had a positive effect on oral English skills and reading comprehension, all else being equal (using a complex statistical model).

The possible reasons for this effect are intriguing and need more research. Because this measure showed an effect regardless of how many hours the student actually attended (or how many hours per week a student attended), my interpretation is that this measure is a measure of motivation (although I have no data or other information to check this). In other words, the student who continues to attend over time, despite all of the other competing demands on time, is one that is more motivated. This motivation helps learning.

I think that, if true, this does have implications for structuring instructional segments.

Larry Condelli

 

Good morning:

How interesting to hear from a range of institutions---I get so focused on my programs that it is interesting to hear from other types of organizations and structures.

For my students, motivation isn't an issue at all. We have a lengthy wait list - depending on the level, students could wait as little as 6 months or as long as 2 years to get into our program. Consequently, they are pretty thrilled to finally be there, and eager to participate.

For my population, (adult ESOL learners---a large majority in the 25-44 age range) the issue is juggling demands on their time. The majority work at least one job (many work 2 or more) and have children in school. Our classes are in the evening, since over 90% of the population we serve work during the day. Many rush directly from work to class, and might be late due to mandatory overtime, or a family need that requires attention prior to attending class. Given the lack of access to adequate preventative health care that many of our students face, there are ongoing health problems for many. Add this to the occasional trip back to their native country for a death in the family, or some other type of family emergency, and frankly I'm amazed that they are able to maintain such a strong commitment to their studies.

Some of our research and data analysis have uncovered these issues-- still it remains quite a challenge to respond to these problems. We initially adjusted our program plan and schedule to allow for longer breaks during the holidays, when many students wish to return to their native countries. We also incorporate school vacations in our planning. When an individual student starts to have a problem, we meet with him or her to see how we can help. It doesn't always work, but sometimes we are able to communicate with employers, and that's been helpful for many students. Sometimes, we offer students a "leave of absence" to deal with pressing personal matters, and invite them to return when things are more settled. All of these strategies have evolved over years of looking at attendance/retention data, and discussions with focus groups. These strategies have had a positive impact, and students appreciate our responsiveness to their needs.

The first year that we implemented our "managed enrollment" (vs. open entry/open exit) model, our retention increased from 74% to 90%. Our attendance has increased from 68% to now over 82%. We all know how critical it is to keep students long enough for them to reach their goals…

Which brings me to my final point. We all serve so many masters: NRS, our funders (in my case DOE), and our parent organizations (for me, a community college). We are constantly looking at data to justify our existence and demonstrate our effectiveness. Let's be realistic: if we want to retain our funding, we have to show results, which is as it should be.

For us, however, when we look at our data, it is always with an eye to how we can better serve our students and respond to their needs. The difference is subtle but powerful. It's a lot easier to get staff and students on board with planning and change when they can see a direct result for our students than when they are responding to a bunch of charts and mandates from "higher ups."

Luanne Teller

 

Luanne,

This is consistent with some research I've done with Forrest Chisman and several people at adult ESOL programs in community colleges. Managed enrollment not only increases attendance and learning gains, it also increases retention and enrollment in the next ESOL level.

Jodi Crandall

 

Hi Jodi:

Hmmm! I honestly never thought to look at how students are retained and progress during the following year/s...great thought! I am going to have to go back and look at this data to see if I can find any trends...thanks for the idea!

Luanne Teller

 

Tracking Learner Outcomes: One-Year versus Multi-Year Reporting Periods

Larry, and others,

Tina Luffman and many other program administrators have observed patterns like this, which suggest that a one-year time frame, a funding year, may not be the best unit of time in which to measure learner gains, except for those who are doing basic skills brush-up or who have very short-term goals like preparing for a driver's license test. I wonder if there is a possibility that the NRS might be adjusted, perhaps in a pilot at first, so that a longer period of learning, say three years, might be used to demonstrate learner gains. Of course, there would need to be intermediate measures, but accountability -- for programs and states -- might be based on a longer period of time.

It seems to me that the one-year time frame within which to measure learning gains or goals accomplished comes not from K-12 or higher education, but rather from Congressional expectations for job skills training. Would you agree?

Also I wonder if you or others have some examples of programs that track and report learner outcomes over several years, and use the data for program improvement.

David J. Rosen

 

I wonder if there is even enough data to show that adult basic and ESL students stay with a program in large enough numbers to track over a longer period. The conventional wisdom of those outside the adult basic skills network is that basic skills programs have little impact because students do not stay long enough to make a difference. Do we have any evidence that shows we work with the same students for more than one year, and that we work with a high enough number of students for more than one year to make a significant difference?

Dan Wann

Professional Development Consultant

IN Adult Education Professional Development Project

 

Dan, I know that's the perception, but I also know that we roll over about half of our students from one year to the next... and some of those students had rolled over the previous year as well. We've actually had to put a three-year limit on some students (especially ESL).

I'm having our data person look this up as well as we can. Unfortunately, our data tends to be divided, as David indicated, into discrete yearly "lumps." We can get the information, but it's time-consuming because the databases are designed for accountability over a contract year.

We certainly do have a lot of students who come in with short-term goals and leave when these are accomplished. We also have a lot of stop-out students, who have to put goals on the back burner while they work out other issues. I think, however, we do keep a significant number of students over time. I think for my own little research project, I'm going to investigate gains over multiple years.

Karen Mundie

Associate Director

Greater Pittsburgh Literacy Council

Pittsburgh, PA

 

The Longitudinal Study of Adult Learning has been following a target population of ABE learners over a long period of time. It's finding exactly the pattern that others have been describing - many adults participate in programs over a series of "episodes" which often span multiple years (NRS accounting periods). When we've presented these data, we've suggested that NRS will not capture all of the impact that programs have on learning in part because of its short-term focus for measuring both participation and outcomes. I wonder if states could get waivers on a pilot basis to experiment with longer reporting periods as David Rosen suggested.

Steve Reder

 

Last year sometime, I remember hearing Ajit talk about how CT was tracking students over a period of years. They had some very interesting information about the percent of students who did and did not come back the second year. I hope he will share it here.

Kathy Olson

Training Specialist

 

Our oldest few:

Intake 1997: GLE (grade level equivalent) 2, now 11

Intake 1998: GLE K, now 7

Intake 1998: GLE 2, now 7

Intake 1998: GLE 3, now 11

Intake 2000: GLE K, now 4

And all of them have been "stop-out" students, with very uneven progress, at times even going down on the CASAS scale. They tend not to improve as quickly as NRS likes, but they do improve.

There is another set of students that need to continue coming just to keep whatever skills they have. They can be really hard on NRS statistics!

Mary G. Beheler

Tri-State Literacy

Huntington, WV

 

We too see this happening. Just now we posted the official GED test scores (passing!) for a student who started in June 2003 (65 hours) and stopped out in July 2003. She came back March 2004 and was in and out through March 2005. This March and April (2007!) she took and passed all of the official GED tests. So this is a success story! BUT, we get no credit as far as NRS is concerned because the student is not enrolled this program year. YES, we support efforts to report results over multi-year periods.

Thanks.

Barbara Arguedas

Santa Fe Community College Adult Basic Education

 

Hi Dan, David, Kathy, and others,

I think that many states do have longitudinal data systems where a single student identifier is used across providers and across fiscal years. In those cases, states should be able to look at persistence and success rates across multiple years. We have been studying this issue for some months and find that the number (percent) of learners who return to adult education in a future fiscal year is lower than we had expected. We are also beginning to notice that this return rate of non-graduates varies significantly (between 35% and 65%) among the three secondary completion options available in CT: GED Preparation, the Adult High School Credit Diploma Program, and the National External Diploma Program. Typically, GED Preparation reflects the lowest return rates. We are now beginning to look at the success rates of those students who persist for more than one fiscal year.

With regard to David's initial question about tracking learning gains across fiscal years, in the adult education system many learners begin in January/February or even later in a fiscal year but are held to the same expectation of having to demonstrate a learning gain by June 30. These learners have significantly less time within which to achieve that learning gain as compared to those who started in the fall. As an example, 42% of students who started ESL in Connecticut by October 2005 completed an NRS level by June 30, 2006, as compared to 33% of those who started after October.

By contrast, the U.S. Department of Labor's implementation of NRS as part of its Common Measures policy for out-of-school youth allows the persistent learner 12 months from the start date before being included in calculations of learning gains, even if that 12-month period spans two fiscal years.

Ajit Gopalakrishnan

Connecticut Department of Education

Middletown, CT

 

We have been fortunate with the continuity of our students. Our adult ESL program covers all NRS levels, and we have had many students who began in the literacy or low beginner level and progressed over the years to the low advanced level. We are allowed to keep them if they show one EFL gain in 450 hours. Although we try to get follow-up information, we don't always learn why every student leaves, but frequently we receive information on students who leave our program and go on to higher education programs. Often these students work, but are able to schedule classes around their job times or vice versa.

Jo Pamment

Director Adult Ed. ESL

Haslett Public Schools

East Lansing, Michigan

 

I'd like to respond to David Rosen's question regarding programs that track student gains beyond the bounds of the fiscal year. At my program, students stay for an average of 15 months, and they are permitted to stay for five years if they wish (and many of them do), so we track them from start to finish. We are a small program, serving about 250 students a year, so we take a fairly low-tech approach. After pretesting with BEST Plus and BEST Literacy, we plot their scores on line graphs (two separate graphs - one for each of the two tests) and continue to plot their scores for each subsequent test. As time goes by, the lines representing a student's progress slope gradually upward across the pages. This is a graphic that the students can easily understand, and it enables us to see which skill areas require emphasis for individual students. Having a complete picture of a student's testing history also allows us to show students that an occasional disappointing test score may be an aberration from a generally upward trend.
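
For programs that keep scores in a spreadsheet or database rather than on paper, a graph like the one described above is straightforward to generate. The sketch below uses invented dates and scores and assumes the matplotlib library is available; it plots a single student's BEST Plus history so that an occasional dip reads against the overall upward trend. It is an illustration only, not a description of Guadalupe Schools' actual process.

    from datetime import date
    import matplotlib.pyplot as plt

    # Invented test history for one student: BEST Plus scale scores by test date.
    best_plus = {
        date(2005, 9, 15): 418,
        date(2006, 1, 20): 441,
        date(2006, 6, 5): 435,    # an occasional dip...
        date(2006, 11, 10): 472,  # ...against a generally upward trend
    }

    dates = sorted(best_plus)
    fig, ax = plt.subplots()
    ax.plot(dates, [best_plus[d] for d in dates], marker="o")
    ax.set_title("BEST Plus scores over time (illustrative data)")
    ax.set_xlabel("Test date")
    ax.set_ylabel("Scale score")
    fig.autofmt_xdate()  # tilt the date labels so they stay readable
    plt.savefig("best_plus_history.png")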

Kate Diggins

Director of Adult Education

Guadalupe Schools

 

Kate,

This is a good example of the type of analyses I suggested in my previous post on this topic. Thanks, Kate!

Larry Condelli

 

Kate--

How do you account for the long retention rate? Other programs report this as a difficulty.

Thanks.

Andrea Wilder

 

Well, I think I would attribute it to quite a bit of individual attention. Our students are placed in small groups and work with volunteer mentor-tutors. The groups are maintained and supported by the professional staff, so there's a web of supportive relationships to help students with problems that may lead to "stop-out". If a student is absent twice without calling in, we call to find out what's happening. We try, too, to assist in overcoming the most typical barriers to education, so we have childcare and a small shuttle bus that can provide door-to-door service for up to 15 people if they live within a couple of miles of the school. To be honest, I've never been sure that our retention was good, because I've never had data to compare it with. Does anyone know what a typical retention rate is for a non-intensive adult ESL program?

Kate Diggins

Director of Adult Education

Guadalupe Schools

 

Dan, Karen, Steve and David,

You all have raised the issue of changing the NRS reporting period from one year to multiple years. While this is off the topic of using data, I will give a quick response.

First, the mandate is to have an annual reporting system, so some information is required each year to report to Congress. Beyond this, the topic has come up and been considered multiple times, and there is some flexibility within ED to make some changes to the reporting period if there is a compelling reason that can be demonstrated. Our analyses of several states' data (not NRS-reported data but individual student data from over several years), however, including some very large states, indicate that there are proportionally very few students who continue year to year (on the order of 5 percent or less in some states), and it does not appear at this time that it would make a difference in performance data at the national level, as Dan Wann suggested.

NRS is a national system, so in some local programs (such as Karen's) or other states, there may be large numbers of students who continue year to year, and in those instances it might be advisable to look at and report multi-year data. To bring us back to our topic of using data, this would be a good analysis for a state or local program to pursue -- looking at returning and continuing students and seeing how they differ in outcomes and other factors from students who stay a short time. We can also rely on research, such as Steve Reder's study, to look at long-term relationships, which, if compelling, could result in a change to the reporting period in the future.

Larry Condelli

 

Thanks for those points, Larry. Besides giving us a broader view of more complex patterns of participation, multi-year data frames will probably do a better job at revealing program impacts on longer term outcomes such as postsecondary education and employment. It's good to hear that there may be flexibility within ED for experimentation such as this.

Steve Reder

 

Larry and Steve,

I agree. There are important gains that are missed when one only looks at data within a year. Longitudinal data would give us a much better view of participant progress, both for those programs in which a significant number of adults continue more than one term, as well as those in which participants stop out and return.

Jodi Crandall

 

Dan, you have raised some critically important questions. NRS data indicates that only about 36% of ESL students "complete a level" each year. This is cause for concern, because the same data show that the vast majority of ESL students are at the lowest levels of proficiency and have low levels of education in their native countries. However, the NRS data is not definitive for a number of reasons -- such as low rates of re-test in many programs, the use of tests that do not measure the full range of English language skills, and the fact that data is reported only for a single year (students may persist in programs long enough to achieve much larger learning gains).

As a first step toward finding out more about the learning gains and persistence of ESL students, Jodi Crandall and I worked with the faculty and staff at 5 highly regarded community college programs to use student record data as a means of determining both learning gains and persistence rates. At several of the colleges we were able to track the learning gains and persistence of students for as long as seven years. At most of the colleges, the measure of learning gains used was completion of one or more additional levels AS THE COLLEGE DEFINED THE LEVELS. Both the definition of levels and the standards of completion took account of test scores (of the sort reported to the NRS), but they also took account of other measures of student achievement (including proficiency in all core ESL skills).

Needless to say, our findings were fairly complex and cannot be adequately set forth here. In summary, however, we found that at most 30% of students persist for 2-3 college terms and complete more than 2-3 levels over a seven-year period. More than 40-50% of students do not complete a level, or complete only a single level, at any time over a seven-year period. Although we could not be sure, it appears that low-level students were more likely to persist than higher-level students. About 10-15% of adult education ESL students enrolled in credit ESL at these colleges, and the number who eventually enrolled in academic credit courses was in the single digits.

We also found that all the colleges we examined employ strategies that significantly improve the rate of learning gains and retention. Among these were high-intensity/managed-enrollment classes (more than 3-6 hours per week), strategies to encourage learning outside the classroom, appropriate uses of technology for instruction, co-enrollment of adult education ESL students in vocational programs taught in English, curricular designs that ensure instruction is relevant to the interests of students (such as Freirian approaches), enriched guidance/counseling/support services, setting high expectations, and VESL programs. Unfortunately, only small numbers of students have access to most of these strategies at most colleges, because they are far more expensive on a per-student basis than standard ESL instruction. Conversely, it appears that large numbers of students would like to make the commitment to enhanced programs if they were available.

The results of our research were published by the Council for the Advancement of Adult Literacy (under the auspices of which the research was conducted) in February as the report "Passing the Torch: Strategies for Innovation in Community College ESL." This is available at the CAAL website: www.caalusa.org. CAAL will be publishing more of the data we gathered later this spring.

Among the "take away" messages we gathered from our work were: 1) The use of longitudinal (multi-year) data and holistic assessments of learning gains are essential for understanding and improving the effectiveness of ESL programs. In many programs it is feasible to gather and use longitudinal data in this way, but few programs do so due to a variety of perceived constraints and/or a lack of support for data analysis by their host institutions. 2) Research can be very helpful in program improvement, but it requires a substantial commitment on the part of programs to gather relevant data and tease out its lessons on an on-going basis. Programs should receive far more support for this. 3) It is possible to greatly improve ESL program outcomes using a variety of strategies, but these require a larger investment in instruction per student -- an investment that we believe is well worth the cost. 4) Numbers do not speak for themselves. For example, low rates of learning gains must be read in the context of the goals that both students and programs set for ESL instruction. It may be that some portion of students legitimately wish to use ESL programs as an initial platform to learn SOME English, and that their learning gains after separating from programs are substantial. Too little is known about this. Conversely, we found that the more students learn, the more ambitious their learning goals become. Because numbers do not speak for themselves, it is all the more important for individual programs and state agencies to invest in the use of research for program improvement and to ACTUALLY USE IT for these purposes. Too often over-burdened ESL faculty and staff consider research an after-thought. They need the time, encouragement, resources, and training to development "continuous program improvement" models to their work.

Forrest Chisman

Council for the Advancement of Adult Literacy

 

Parallel Experiences at K-12 Level

Hopefully everyone participating in this discussion regarding adult literacy is aware that almost everything you are saying applies to the results for students in school as well. Coming from a public school background, I could always see the effect that high mobility rates had on overall student results. Schools with the highest rates almost always struggled to meet standard on state measures connected to NCLB. This was the case with overall populations as well as the various subgroups that were tested. The same applies to student retention, or for that matter attendance. As a rule, students who attended regularly achieved much higher grades than students whose attendance was far less consistent. This then followed suit with results on standardized testing and ultimately with graduation rates.

The entire education community, whether it is involved with adult literacy or the traditional K-12 curriculum, is faced with the same thing. The key to increasing literacy and closing achievement gaps starts with getting and retaining students.

Fred Lowenbach

 

Fred,

I certainly agree that K-12 education has retention issues related to mobility; however, the difference as I see it is twofold. First of all, most schools run on a 180-day cycle, and children are expected to attend every day that they're healthy and reside in the district. Secondly, while an individual teacher may structure his/her instructional segments, most students don't have the ability to choose whether or not to attend a given session. I suspect that the attendance issue in K-12 - at least up until 9th or 10th grade - is related to the family's mobility rather than to student motivation.

Most adult education programs in Pennsylvania have an average attendance of 60 to 100 hours per year. Mobility is certainly a factor, but in my experience most adults "stop out" for many reasons other than mobility. As a program director, I have tried various combinations of intensity and duration. One of the ways we've worked on retention is to have each teacher create a scattergram of his/her retention patterns. One axis of the graph is the number of hours available, and the other axis is the duration of the class. What we found is that different patterns emerge on the scattergram for different teachers. We then work with teachers individually to develop improvement strategies based on their individual patterns. For example, a teacher with students who cluster in the low intensity/low duration quadrant would use very different retention strategies than a teacher who has students clustering in the low intensity/high duration quadrant or a teacher whose scattergram is evenly distributed across the four quadrants. Ultimately, the teacher's goal is to see his/her students clustering in the high intensity/high duration quadrant. Our experience suggests that working with teachers on their scattergrams and retention strategies has a positive impact on student retention.
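
For anyone who wants to try a similar teacher-level scattergram from their own attendance data, here is a minimal sketch. It plots each student by hours attended and weeks retained as stand-ins for the intensity and duration axes; the quadrant cutoffs, the data layout, and the sample values are illustrative assumptions, not Lancaster-Lebanon's actual definitions or thresholds.

    import matplotlib.pyplot as plt

    # Invented data for one teacher's class: (hours attended, weeks retained) per student.
    students = [(12, 3), (20, 5), (35, 14), (60, 13), (80, 12), (25, 10), (45, 4)]
    HOURS_CUTOFF, WEEKS_CUTOFF = 40, 8  # hypothetical quadrant boundaries

    hours, weeks = zip(*students)
    fig, ax = plt.subplots()
    ax.scatter(hours, weeks)
    ax.axvline(HOURS_CUTOFF, linestyle="--")  # low vs. high intensity
    ax.axhline(WEEKS_CUTOFF, linestyle="--")  # low vs. high duration
    ax.set_xlabel("Hours attended (intensity)")
    ax.set_ylabel("Weeks retained (duration)")
    ax.set_title("Retention scattergram for one teacher (illustrative data)")
    plt.savefig("retention_scattergram.png")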

If Larry's research can be replicated, it speaks to a couple of very important issues for our field. Open entry/open exit is one of them. The second is the length of the instructional segment, regardless of intensity. Our program has operated under the assumption that low-intensity classes need to be longer in duration. For example, our night classes tend to run in 14-week segments, whereas our daytime, high-intensity classes tend to run about 7 weeks. This research certainly challenges that assumption.

Sandy Strunk

Program Director for Community Education

Lancaster-Lebanon Intermediate Unit 13

Lancaster, PA

 

Yes, when parents move, so do their children. The public school system here has a very low graduation rate, and a big factor is the number of students who haven't dropped out but have moved out. And not surprisingly, their performance on meeting standards is not high, no matter how hard the teachers in the system work to address the needs of the students.

Toni F. Borge

Adult Education & Transitions Program Director

Bunker Hill Community College

Chelsea Campus

Chelsea, MA

 

Using Student Goals as Data

Hi everyone,

Wow, what a super discussion! So rich and full of great ideas, interesting comments, excellent questions, and thoughtful challenges. I usually contribute more myself but I'm just reading and soaking it in at this point. I am cutting and pasting the discussion into a user-friendly document, which I will make available once our Guest Panel concludes tomorrow.

We are really covering a lot of ground here! Just curious (because it is a focus of mine within the realm of accountability): a number of folks have discussed issues of retention and the types of strategies that they employ in their programming, but I don't think that anyone has mentioned whether they use student-stated goals to track retention, trends in learning or program offerings, etc. Perhaps the use of student goals is more easily applicable at the classroom/teaching level (not sure!), but I just wanted to know if anyone out there makes programmatic decisions based in part on the reasons why students come to your programs. I am not referring to learning gains (reading, writing, math, ESOL, etc.), but rather to students' ultimate purposes for attending, like getting a better job, helping kids with homework, buying a home, becoming a citizen, etc.

Thoughts on this?

Thanks!

Marie Cora

 

Hello all!

I've been trying to keep up with this discussion as best I can.

As a teacher, I use the test data given to me by my program along with my own needs assessments (what the students know and what they want to know) to plan my lessons. Typically my students want to get a better job, help kids with school work, and be able to live in the US. The test data shows their strengths and weaknesses. By combining this information, I can design lessons that target the language used in these everyday situations: reading an apartment ad, answering and asking questions in an interview, making a doctor's appointment over the phone, etc.

Bryan Woerner

 

Hi Marie:

Great point---

As I just stated in my last email, our primary focus is how the information we derive from our data can benefit our students.

We recently completed a two year study on learner goals, and completely revised our goal setting process.

In the beginning of the year, students select at least 3 goals that they want to work on. (We use pictures for beginner ESOL students and more advanced students who can help translate.) The process is somewhat driven by the DOE mandates, but we pride ourselves on having an entire page of "other" goals that students can identify. While the DOE requirements provide the basic structure for our goal-setting process, we are not limited by them. Each month, students answer three questions in their blue books around the goals they have selected:

  1. What they did in the past month towards meeting their goals
  2. What they plan to do in the upcoming month to meet their goals
  3. What we can do to assist them.

Instructors collect the goals books each month and review them. We work as a team to assist students where we can. When a student meets a goal, the instructor notifies the office staff (there is a paper trail) so we can enter it into the system (or not, depending on the goal and student authorization to release the information). This allows us to quantify our learners' progress towards their goals in a database that yields useful information.

In the beginning of the year, when instructors collect the goals books for the first time, they review the information around what goals students have set. This allows them to integrate goals into the curriculum and instruction, and it provides administrative staff with information to help with planning for the year. For example, many students this year have selected health-related goals, so we have the Director of Interpreter Services from our local hospital coming to speak with students about patient rights and access to health care resources in the community.

We also have conversation classes that allow us to focus on clusters of goals. For example, we might do a series on Citizenship if we have several students with that goal.

In many ways, helping our students articulate and achieve their goals relates directly to the retention and attendance issue. The best way to retain students is when they see a direct result from their participation that is relevant to their daily lives.

At the end of the year, we collate all the information around student goal achievement. We have an annual year-end student achievement ceremony, and on each table we place a tent card. On the outside it says simply "Did you know that…" and on the inside, we include information about student goal achievements. For example, it might say that 6 students became American citizens, or 5 students bought new homes, or 12 students got raises.

This helps communicate the impact of our program to our community partners and funders in a real and meaningful way.

This is a great example of how we are able to serve our funders (by quantifying data to meet the state accountability standard around student goal setting) in a way that truly focuses on meeting our students' needs.

Luanne Teller

 

When a student comes to an adult ed program, it's usually "to learn English" or "to learn to read." It can take a lot of probing to elicit more specific reasons from students: Where/when do you need to speak English, or to speak it better, and with whom? What do you do now? Is that working? So goal-setting can and should be an important part of an intake interview, and, as was mentioned, an ongoing component of a classroom situation to track progress and benchmarks, especially since students can plateau and take a long time to "progress." I like the question about whether specific goals are related to better retention, or the sense of community in the classroom, extrinsic vs. intrinsic motivation, "tangible" progress, etc. There would be lots of ways, perhaps, to obtain data on these elements.

Thanks for the discussion. I think it ranges far wider than just ESOL students, to the differentiated classroom in general, as well as to questions of attrition/retention, which, as has been pointed out, can be systemic and not program-related at all, but rather due to the multiple barriers adults face.

Bonnie Odiorne, Director

Writing Center, Post University

 

Hi,

I've been following along. Thank you all for the great comments.

We do ask for student goals on entry into the program, but the goal is broad in order to meet the reporting needs of the State. However, the teachers are asked to do a more specific student survey in the classroom at the beginning of the semester, and a follow-up one at the end of the semester, to learn the more specific goals of the students and whether they think their needs are met.

Questions on the survey are simple and may be:

Where do you want to use your English?

Name three places where you use English.

Name three places where you need to use English.

Where are you afraid to speak English?

The teachers can adjust the questions to meet their perceived classroom needs.

There is also a Student Self-Evaluation where students rate their improvement on speaking, understanding, writing, reading, grammar, pronunciation, and any other teaching focus areas. They can say "improved a lot," "improved a little," "did not improve," or "got worse," and explain their comments. They are also asked to rate statements on a scale of 1 to 5, such as: I come to class regularly. I come to class on time. I do my homework. Homework helps me learn English. I understand my teacher. I understand my classmates when they speak to me. I speak English in class. I speak English outside of class. I practice what I learn in class every day. The final question is: My goal is to speak English 15, 30, 45, 60, or 60+ minutes a day.

This self-evaluation helps the students to become more aware of their active role in learning English.

Unfortunately, this is all for the awareness of the teacher and the student, which helps improve classroom teaching and learning, but it can't be reported as an assessment tool.

Jo Pamment

Director Adult Ed. ESL

Haslett Public Schools

East Lansing, Michigan

 

Hi,

In Massachusetts we just developed a goals cube in Cognos, our third-party reporting tool. This cube allows teachers/directors to look at class-level data as well as site-level data, so teachers can review the goals set by their students and incorporate the goals into the curriculum. Teachers have requested this information to help them better meet the needs of their students.

Donna Cornellier

 

Issues with the TABE

I don't know if anyone has raised this question, but one of the things that the testing coordinator at my school and I are concerned with is the fact that we don't get to count gains made if a person goes from the M level TABE to the D level TABE. We use versions 9 and 10. I had a student who scored around 9.0 or so on the M level Reading, which we question in and of itself in terms of validity. He's at the point where we're supposed to retest him. He'd tested on the D level in math the first time, and he's been regularly attending and regularly working on his goals. He went from the 9.0 on the M level test, which is moderately hard in reading, to a 7.7 on the D level test, which is difficult. Because he DROPPED in terms of grade level, it's not counted as a level completion...even though he actually went from M to D...which one would think would also qualify as a level.

Has anyone else had this happen, and if so, what are your suggestions?

Regards,

Katrina Hinson

 

Katrina,

I am not sure what state you are from, but here in New York we have just this past year implemented a new state policy for administration of the TABE, and a series of validity tables as well. (I have attached both for you to take a look at.) Larry may remember that it was our state data that prompted us to change the way our programs were using the TABE. In some cases, based on the score ranges, teachers were actually preventing their students from showing enough gain to place them in the next EFL under NRS guidelines by choosing an invalid level of the TABE. Scores at either the high or low end of each range of scores on the TABE are unreliable because of the large standard error of measurement associated with the extreme ends of any bell curve. This means that, as you suspected, the high and low scores on each of the tests are less likely to be a true indication of the student's ability. Retesting with a higher or lower level of the test is recommended in these cases. It was evident to us, based on our state data, that test administrators either did not understand that concept or had differing opinions as to when a score was outside the acceptable range and consequently when to retest.

We employed the methodology developed by the University of Massachusetts for the Massachusetts Department of Education to establish acceptable ranges for the Reading, Mathematics Computation, and Applied Mathematics sections of the TABE 7 & 8 and TABE 9 & 10. The policy, along with the scoring tables, was then integrated into our data management system such that invalid scores cannot even be entered into the data system. If students score outside the valid ranges, they must be retested on an appropriate version of the TABE.

Strategy is still advised when using these scoring tables. For example, based on our validity tables, if a student scores a 7.2 GE reading level on an M TABE, he or she is within the valid range; however, if a level M is administered as the post-test to this same student, the very highest that student may achieve and still fall within the valid range is a 7.7 GE. That score will not be enough to show educational gain. This student must be given a level D test to open up the possibility of achieving a score high enough to evidence gain. As long as the administration of the TABE levels is contiguous, the scores are valid and may be used under NRS guidelines. (So moving from an M to a D is acceptable.)
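
To make that logic concrete, below is a hypothetical sketch of the kind of check a test administrator or data system might apply before post-testing. The grade-equivalent ranges and the required-gain figure are placeholders for illustration only; they are not New York's actual validity tables, which should be consulted directly.

    # Placeholder grade-equivalent ranges per TABE level -- NOT the actual validity
    # tables; substitute your state's published values.
    VALID_GE_RANGE = {"E": (0.0, 3.9), "M": (2.0, 7.7), "D": (4.0, 10.2), "A": (6.0, 12.9)}
    NEXT_LEVEL = {"E": "M", "M": "D", "D": "A"}

    def recommend_post_test_level(pre_level, pre_score, required_gain=1.0):
        """Suggest a TABE level for the post-test (illustrative logic only)."""
        lo, hi = VALID_GE_RANGE[pre_level]
        if not (lo <= pre_score <= hi):
            return "retest: pretest score is outside the valid range for this level"
        if pre_score + required_gain > hi and pre_level in NEXT_LEVEL:
            return NEXT_LEVEL[pre_level]  # this level's ceiling is too low to show gain
        return pre_level

    print(recommend_post_test_level("M", 7.2))  # -> "D" under these placeholder values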

As you can imagine, Katrina, a comprehensive staff development program was built to accommodate all this information, and we rolled it out to all programs through a train-the-trainer model. I am pleased to say that our state's performance in the area of educational gain has increased significantly as a result of this work. I hope this is useful to you and your testing coordinator.

Rosemary I. Matt

NRS Liaison for NYS

Literacy Assistance Center

New Hartford, NY

 

Katrina,

This is the very reason it's better to use scale scores rather than grade levels to place and promote students. The scale score accounts for varying difficulty levels of the TABE and is the only metric that can accurately measure a student's growth over time.

Grade levels are a gross measure that lose their meaning as you move from one level of the TABE to the next. So, for example, if a student earns a 5.5 on the M level and a different student earns a 5.5 on the D level, they ostensibly are at the same grade level, but the reality is that the student who took the D level got a 5.5 on a test that had more difficult content. As a result, you cannot realistically equate the two.

Hope this helps!

Mario Zuniga

Adult Education Consultant

NRS, Assessment, and Accountability

Division of Workforce Education

Florida Department of Education

 

Hi, Katrina, we found the same problem with the TABE, and we received help from the Center for Educational Assessment at the University of Massachusetts, Amherst (from Prof. Steve Sireci and Dr. April Zenisky Laguilles), who formulated a chart for testers to use. It has really helped with the problem! You can find it in our FY07 assessment manual for using the TABE, on pp. 10-11, at http://www.doe.mass.edu/acls/assessment/news/TABEpolicy.doc

Jane Schwerdtfeger

Curriculum and Assessment Developer

Adult and Community Services

Massachusetts Department of Education

PS: the discussion has been terrific this week! Thanks to all who participated!

 

Providing Staff Development and Training

I'm fascinated by this discussion because most of us who do a lot of standardized testing have, indeed, learned by experience that the whole testing thing is not quite as straightforward as one might think, that there is strategy involved if a program is to put its "best foot forward" in a completely legitimate way--not to mention a good deal of staff training to achieve any kind of consistency.

Karen Mundie

Associate Director

Greater Pittsburgh Literacy Council

Pittsburgh, PA

 

Karen,

The challenge of rolling out the staff development for these assessment pieces is truly daunting. In New York, every staff member who is to administer the TABE must attend the state training. Through our data management system, we track all attendance and certifications for state-mandated training. We have a similar strand running for the BEST Plus as well. Although it is a challenge to get everyone trained, those programs that pushed their staff through the first round are already seeing the difference in their performance data for educational gain. You are absolutely correct in feeling that there is strategy involved; in addition, a better understanding of the nuances of the NRS system can help a program demonstrate, through its data reporting, a more accurate picture of the service it provides and the impact it has on its students.

Rosemary

Rosemary I. Matt

NRS Liaison for NYS

Literacy Assistance Center

New Hartford, NY

 

In Pennsylvania we also have a mandated Assessment training. At least one person from every program must have attended the training.

However, the truth is, many people are still giving assessments who have not attended training.

At my own agency, everyone who gives the TABE or BEST or BEST Plus has attended training. Basically, I think we do very well with the TABE. We are lucky in having an education specialist who is an assessment trainer and who oversees our testing procedures in general.

I think that the BEST Plus is a problem for many programs because expertise in giving the test requires a good deal of practice. We have limited the number of staff who give the BEST Plus to about nine staff members and try to have the pre- and post-tests given by the same person, but I'm still concerned that there is too much room for scoring anomalies and differences in judgment.

I hear from a number of programs that this is a problem.

Karen Mundie

 

Good Ideas from High Performing States

We are getting some good ideas from this discussion. We wonder if there is a way to learn from high performing states. Who are they? What do they do in terms of using data? Have some states discovered the data that makes a difference in their final NRS reports? What are the factors that truly impact the final data?

Maybe we don't have to reinvent. Thanks.

Barbara Arguedas

Santa Fe Community College ABE

 

Hi,

In Massachusetts we migrated our desk review system over to a web-based reporting system (Cognos) in the last few years. The system allows programs ongoing access to view their progress throughout the year. At the end of the fiscal year, programs can view their scores. The six performance standards have been informed by statistical analysis of multiple years of data, a goals pilot, and input from the field of ABE practitioners in multiple task groups and workshops at statewide conferences. Each of the standards is set at or near the Massachusetts state average for performance in the following areas: attendance, average attended hours, pre- and post-testing percentage, learner gains, setting and meeting student goals, and NRS Table 4 educational functioning level completion. The performance standards, as one part of a larger system of accountability, encourage continuous improvement, effective program administration, and positive student outcomes. Our intention was to create a simple and understandable system that local programs would buy into and use. The state also wanted the system to be seen as helping with program improvement, not as punitive. We assign points to a relatively small number of measures because those are the measures we want to emphasize and because of the difficulty of quantifying some of the other measures. The fact that local programs have regular access to the Cognos system has helped with buy-in. This access allows locals to see the same data the state sees, increasing transparency and allowing programs to follow their progress throughout the year. This has helped locals to see the benefits of the desk review system.

Donna Cornellier

SMARTT ABE Project Manager

Adult and Community Learning Services

Massachusetts Department of Education

Malden, MA



