Brief comments on what some subscribers thought of the content and materials that were put together for the discussion.
I actually think that the preparation materials were quite good and very well organized. While I haven't had time to look over everything, I am going to try to "catch up" at some point. My only suggestion is to allow even more lead time than you gave us for this very important discussion. (Please don't think of this as a criticism. You gave us much more lead time than most, and your "rollout" of this discussion is top notch.) Thanks for pulling all of this together. I really, really appreciate your taking the lead to help adult literacy practitioners have ongoing conversations about our discipline.
Melinda M. Hefner
Thanks so much for your feedback - and actually, I agree with you: I realize there are a lot of materials to review and I was disappointed (in myself) for not having gotten the announcement out about a week sooner.
But I'm glad to hear that the resources there are useful and interesting. I hope that folks take advantage of them.
I also found the listing of materials related to the content of the discussion on assessment to be very extensive. While I haven't been able to thoroughly digest everything provided, I do intend to refer to the materials as the discussion proceeds and to use them as references as I continue to pursue the use of assessments and data in our organizational decision-making process.
A discussion on how to determine what to focus on when starting to use data for program improvement. Suggestions include:
- developing a key or 'burning' question
- identifying areas of the program that need improvement
- identifying areas of the program that you would like to learn more about
- starting by assessing constituent needs
- involving staff
Dear List Subscribers,
We would like to have more of a focus on data in our program, but it's so difficult to know where to start - we could work on so many different aspects of the program. Does the panel have recommendations of what to tackle first, second, etc? Or recommendations on how to decide what to do first?
In deciding where to start, it is helpful to focus on a key question. Identify what area of your program you would like to improve or gather more information about. If you have an area of targeted improvement, that would be a great place to start framing your key question. Another option would be to start by looking at your program strengths. Many times we view looking at data as finding what's wrong with our program, which is often met with resistance. Looking at our strengths is a great way to validate the work that is being done in our programs. Who doesn't like to hear what we are doing right? When forming your key question, it is important to include all stakeholders: administration, students, and staff.
As an administrator I like to approach our data from the viewpoint of what are we doing well and what can we do better? Focusing on continuing quality improvement is a win-win for everyone. When you look at data it helps you to achieve your goals by telling you that what you are doing is making a difference. Data can include both qualitative and quantitative measures. One of the things we now do in our weekly meetings is discuss successes in each classroom. This is important data and it validates the work teachers do every day in the classroom.
I recently went through the Leadership Excellence Academy with Kathi Polis and Lennox McClendon, and we started with various self-assessment options. The academy felt that beginning with assessing the felt needs of the students, teachers, administrators, and stakeholders was a great way to determine what else needed to be done in our program. Then we used these self-assessments to decide what data we needed to review for purposes of program improvement. I am not sure if this is what you were looking for, but this is what came to mind.
I agree with this, Tina. Using data for data's sake is not a way to start. You have to have a specific idea of what you want to know before you dig in. Ask your staff what their biggest burning question is. If they have different ideas, prioritize them together and then start looking for all the different kinds of places where you might find data to help you find the answer. State database? Your program's data? Attendance records?
Marie - do you have a quick access source for different kinds of surveys?
Hi Debbie and Tina,
I completely agree: it's not often that you just want to start with a pile of data. Debbie's idea of identifying a "burning question" is a perfect hook to go digging around for what data will tell you more about what you want to know.
Occasionally, programs do just start with some of their data, although almost always there is a particular focus. So for example, a program that is really trying hard to consistently and accurately keep records of attendance may wish to review the data on a monthly basis. They might be reviewing the data to check accuracy - but then they suddenly note that attendance is slipping for a particular age group - and so they investigate further to see what the real story is.
If you end up taking the Module 1 online course on Data Collection and Management, it discusses the various ways that data can inform, confirm, or dispute perceptions.
Debbie - you asked about sources of assessment surveys, and I do not have a particular source. We have developed two program self-assessments for the initiative: one on Data Collection and Management, and one on Data Analysis for Program Decision-Making (both of which can be found at the announcement URL). In addition, we are now developing a program self-assessment for the upcoming module, Communicating Success to Stakeholders. I will share that one with you all when it's ready.
Tina shared a good resource - the program assessment from the Leadership Excellence Academy - Tina, is that resource free/available to the public?
Others: do you know of program assessments that you can share with the List? I can compile the suggestions and make them available to everyone.
The Leadership Excellence Academy is a training that states request to offer to their administrators/coordinators. The state pays a fee for each person to participate. The course is free to the programs and participants, at least that is how it has been done in Arizona.
Hello Debbie, Tina and others,
I so agree with you both, and involving instructional staff in deciding what data to collect and review is the best way to get staff buy-in, which we're addressing on another thread of this discussion. This is a great bridge between the two questions.
Cathay O. Reta
The discussion focuses on how to get various stakeholders, particularly program staff, to accept or "buy in" to using data. Subscribers focused on getting staff to see data as worthwhile and integral to their successful work. A number of suggestions for involving staff were shared.
Subscribers discussed the importance of having accurate data, and learning to accept the story the data is telling, rather than trying to make the data tell a desired story. Subscribers shared some of the data items that they regularly collect. Subscribers noted that the initial challenges are far outweighed by the benefits of regularly using data - using your data is well worth the effort.
Finally, subscribers shared several resources that they consider helpful to the discussion.
How do you get staff to buy-in to a bigger focus on using data? My staff really see this as an extra layer of effort and work at this point. It's hard to get them to see that some initial work on this will eventually lead to better teaching and programming. Thanks.
At the risk of sounding snarky....which I sincerely do NOT intend, I think that most ABE teachers are underpaid and would see using data as worthwhile if they were not already doing so much for so little. If data will lead to a better program, will it also lead to a better salary?
Also, I think it is only fair to point out that data cannot necessarily inform all decisions. The target population is a moving target, groups come and go and what holds true for one group may not apply to another.
After all that, perhaps having instructors use their own data, from experiments they themselves have developed and conducted, may help get buy-in. What do they need? What do they want to know? How can they find out? What can they do with the information? If people are not used to dealing with data, then perhaps they need to see the power of the data first. Then they may feel more comfortable looking at data from an outside source.
Just some thoughts from a practitioner.
Jackie, and others,
These are important points. You don't sound snarky to me.
I wonder if the guests -- and if others here -- have examples of instructors who have done experiments in their classes, who have got good data, made decisions based on the data, and seen positive change as a result. We need power-of-data narratives, by or about teachers who feel that the data-for-decision-making process -- even if demanding and time consuming -- is worth it. Please share these stories with us.
David J. Rosen
Hi Jackie, thanks for this. You do not sound snarky at all - in fact, this is often what practitioners say - that they already have their hands way too full with not enough compensation, so why should they take on more.
I would like to note though, that data is information, and so it can come from any source, at any time, in any form. This does not necessarily mean that it is dependable or useful - that is for the person analyzing the data to determine. But for example, observing your students and taking notes on what you see is data. And so one of the goals of this initiative is to demonstrate this very concept to practitioners (and program directors) - that they have access to rich data constantly. But this data is not easily used unless you have a good way to capture and store it, effective ways to analyze it for what it means to you, and then a plan for using that data to improve some piece of your teaching (or programming).
Data from outside sources - say the NALS, or even test scores - is only one small piece of data that we can use.
Does this make sense?
You bring up a key issue which is a thread running through our three modules of the Performance Accountability training -- staff turnover. When staff are part-time and underpaid, as you point out, it is a challenge to make the changes we would like to; it seems burdensome to add more work for them. And then they are often not around long enough to really build their experience base within an organization. A few of our model programs have made a concerted effort to hire full-time staff.
Even so, programs have made great strides forward with involving staff in data collection and analysis, and it is especially successful when the data is meaningful -- when it helps instructors see how they can improve their work, as you described. They may initially be brought into the process "because they are required to," but as they see "the power of data" they truly buy in to it. It is important for programs to frame data discussions in such a way that instructors will see the data is there to support them -- not punish them.
Cathay O. Reta
My experience has been that teachers are sometimes reluctant to use data because it seems so far removed from their day-to-day routines. We started by just feeding back to our teachers the information they had given us about their learners (test scores, attendance, goal attainment, progress, etc.). The first reaction was often "this isn't accurate" -- creating an opportunity for us to talk about the importance of validating data.
Once our data became more accurate, we began giving teachers their individual class information as well as the mean information for our program. This allowed teachers to compare their attendance, goal attainment, and learner gains to the average for our program. That led to many questions about why performance varied so much from class to class and fostered many opportunities for sharing best practices.
Finally, we asked our teachers to use their data to do written progress reports for our learners three times a year (we paid teachers for this time). We generated learner progress monitoring reports from our database, printed them with labels for teachers, and teachers added comments and observations prior to mailing each individual report. This not only made the data more useful for teachers, but it also engaged our learners with their individual data and underscored the correlation between regular attendance and learning gains.
I will also say that whether you're a teacher, student or administrator, looking at your own performance data is a humbling experience. A key for me has been to describe, not judge, and to talk about what I can do to try and change the data -- rather than assign blame because the data is not what I want it to be.
I also like the idea of having teachers generate their own inquiries and we try to support those inquiries through our professional development system. Unfortunately, as Jackie noted, part-time staff are often already stretched, so there's a limit on what they're able to take on in addition to their teaching.
Sandra J. Strunk
You sound as if you are doing a wonderful job. I especially like that you said, "A key for me has been to describe, not judge, and to talk about what I can do to try and change the data -- rather than assign blame because the data is not what I want it to be." From my point of view, program data is used to make things better, i.e. to improve instruction, to improve learning opportunities for students, to identify areas needing greater attention, etc.
In addition to using the required state performance data management system whereby we collect national and state performance data, our program has developed and implemented a database to examine some additional items which include but are not limited to:
- Where do we lose students? after Orientation and Assessment? after advisement and before enrolling? after enrollment? at what point after enrollment?
- What factors keep students from being able to attend?
- What factors help or support students to be able to attend?
- What program areas require our attention for improvement?
- What program areas are working well and can be used as best practices?
- Which students require additional contacts and support?
While data collection and analysis can be a daunting task, the investment in time and other resources can provide tremendous benefits for program improvement, identification of staff development needs, allocation of resources, etc. It can also be a "reality check" for administrators, instructors, staff, etc. when comparing what they "think" or "feel" to be true to what actually "is".
Melinda M. Hefner
Getting staff to buy in to looking at and using data can be a challenge, but if programs make the effort to make this a part of what they do, it will become institutionalized over time.
One way to begin to look at data is to bring some data to a staff meeting. Do an activity where everyone gets a chance to look at the data and see what stands out to them. Then go around the room and let each person say what stands out. (Include ALL staff in this: teachers, administrators, support personnel, volunteers.) Make a list of the things that stand out and then have a discussion about them. It could be that people notice something really good, or maybe an area of needed improvement.
We do this with our year end data every year. We look at enrollment, retention, gains, goals achieved, demographics, numbers of ABE/GED/ESL students, inquiries, and more. We usually find a few things to focus on for the next program year from this activity. Some of our best ideas for program improvement have come from our receptionist and office manager, as they were looking at the data from a different perspective.
We also started adding data review to all of our meetings. We choose an area to look at and examine (this is done in our monthly program improvement meetings, teacher meetings, and other staff sub-group meetings). We always try to look at data before making program changes (if the data is available). Teachers eventually got used to looking at the data and seeing how it can inform their practice. They are more aware of their students' attendance and testing information and can be more helpful in helping the agency meet its standards.
If your staff are not used to looking at data, start small but keep it consistent. Make it a habit and eventually it will be part of what they do.
We do have great systems in PA, where using data is required at the state level. Our program improvement plans that are required for state funding must be built on data analysis. We have many teachers and administrators who do practitioner research. However, this doesn't mean that using data for program improvement is always easy. It is time consuming, but without it we are making decisions based on intuition or hunches, which can have disastrous results. Change is sometimes uncomfortable, but all staff can see the value of using data once they learn how to look at it and how it can better inform their practice.
I so appreciate how you involve your staff in using data. About how long did it take to get the level of staff buy-in that you have now?
I know many of our programs in this project are surprised at how long it takes to really make these changes to the culture of the organization, and it would be helpful for people to realize this. Progress is often slow, but steady.
Cathay O. Reta
I agree wholeheartedly with your approach to reviewing data. In Kansas, we are required to meet certain indicators of a quality adult education program in order to receive funding. Our instructional staff realizes our data reflects our progress toward these measures. We have an excellent database used by all AEFLA-funded programs across the state. The database is available for review by the staff at the Kansas Board of Regents and by any member of our staff. A representative for the adult education program does a yearly on-site visit to review files for accuracy.
I am the only one who enters data, so any incorrect data becomes my responsibility. At our monthly staff meetings, all data is compared to the student's file for accuracy. Because we are a very small, rural program (118 participants with 12 hours or more in fiscal year 2008), I am able to review data continually, on a weekly basis. Reports are printed out weekly and compared with the goals we projected, building from previous years.
Our staff meetings consist of two full-time instructors, one part-time instructor, one part-time administrative staff member, and one full-time administrative staff member. Because we work very closely as a team, we look at the data for the whole program. If there are deficits in any area, we begin to look at ways to improve. Staff members are helpful to one another in giving suggestions.
Overall, when our staff members receive a report, they are very interested in our outcomes. Sometimes there is no awareness of progress, or lack of it, until it is seen on paper. The results don't carry any negative connotations, but rather a desire for improvement.
Good morning all,
One of the keys, I think, is distinguishing the different purposes of assessment, a term that is extremely broad in application. Clearly the ideal is assessment and instruction mutually informing each other. On the other hand, some assessment instruments are used (if not designed) first and foremost as statistical measuring instruments. Other assessment tools are designed primarily as a source of information to enhance the immediate instructional context, and may or may not be (easily) incorporated into the systemic data collections that feed reports to various external reporting venues.
I think keeping that distinction in mind is as important as seeking to identify viable convergences where the several dimensions and purposes of assessment can be brought together. Otherwise we run into information overload, or pass on information that does not seem relevant, at least to those who are looking at it.
Having excellent translators at the programmatic, executive, and policy levels helps a great deal, though I wouldn't discount the enduring reality of the gaps in meaning that diverse stakeholders attach to the term.
Ultimately, assessment is a type of literacy practice (in the sense of the literacy studies school), grounded (I believe) in the politics of literacy, in which political culture, pedagogy, and evaluation are complexly entwined, played out in various discourse power-knowledge relationships within a culturally based symbolic meaning system where constituencies of unequal power and status vie for influence.
On this broad topic, I heartily recommend Jurgen Habermas' important text, Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy.
To be sure, this book is not an easy read, and not all will agree with Habermas' reflections. However, check out the Amazon reviews for some clues to his thinking.
Resources that have been helpful to us are the books Data Wise and The Power of Protocols. We were able to collaboratively develop a data cycle using one of the protocols from Data Wise.
As someone previously mentioned, I think it is helpful to see that data is more than numbers. It also includes information drawn from surveys, focus groups, interviews, etc. I am reminded of Project Learn in Akron, Ohio. Their reports showed that
"Students functioning between the 5.0 and 8.9 grade level equivalent attended for an average of 35 hours for the program year while literacy students averaged 55 hours, GED students averaged 42 hours, and ESOL students averaged 65 hours."
So they gathered more data by talking with students at the pre-GED level and found that they did not feel comfortable going from their student orientation into a classroom where the instructor and students already knew each other. Based on that, they re-arranged class schedules to give instructors a half hour to meet with new students before the start of class. That made a difference, as evidenced by the next year's report -- the average hours of attendance for those students increased to 52 hours.
I think examples like this are what make me excited about "data."
I'm wondering if anyone has other examples to share, or questions about what type of data to review to address specific concerns. Anyone??
Cathay O. Reta
Buy-in from all stakeholders is very important. Administrators are often responsible for sharing data with funders, staff, students, and board members. What teachers are doing in their classrooms is what leads to students producing the outcomes that are reflected in your data. It is a continuum that everyone is involved in.

When getting staff buy-in, it is important to identify and articulate the reason you want them to focus on using data. Initially it may be the reality we face in education: data is often tied to program funding decisions and resources. Further inquiry will eventually lead to the discovery of how the use of data can inform practice and drive continuous quality improvement.

In getting started, it is important to meet your staff where they are. Are they ready for this change? What are their fears? What do they hope to gain? A big part of making this happen is viewing it as an investment. As an administrator, you have to invest resources so that your staff will be invested in the use of data. Do they need release time for data collection and time to discuss data with colleagues? Do they need extra paid hours for data collection? Do they need training on how to collect and interpret data?
Here are some preliminary questions to assess the readiness of your staff to begin using data for program improvement.
- How do you feel about the use of data to inform program decisions?
- What benefits do you anticipate with a bigger focus on using data?
- What challenges do you anticipate?
- What resources or support do you need to facilitate an increased focus on data?
If you need to lay some groundwork to move your staff into the use of data for program improvement, start with something small and gradually narrow your focus. Our program has weekly articulation meetings in which we discuss what is going on in the program. Looking at attendance and assessment data, as well as feedback from program staff, is a regular part of these meetings. Over time, looking at data became a part of our program culture. It has led to better outcomes for our students, and our teachers feel it has helped them target the needs of students and families.
The discussion focused on the Practitioner Action Research (PAR) model, the program improvement initiative from the state of Pennsylvania. Programs are expected to select an area of improvement based on their data. Resources from the project are shared, including the PAR handbook and monographs detailing each program's selected area, process, and results.
In Pennsylvania we have a statewide program improvement initiative that uses a specific Practitioner Action Research (PAR) model. Each program chooses its own area of inquiry, based on its data. These data may be hard data (scores, hours, enrollment numbers, etc.) or other data, based on our Indicators of Program Quality (for example, the quality of the adult education classroom environment or the depth of partnerships). Last year, there were 61 projects conducted by PA Family Literacy sites. Topics ranged from increasing enrollment or retention hours to implementing scientifically based reading research in the adult classroom, improving children's oral receptive vocabulary, and increasing referrals from partners. In the spring we hosted regional poster shows where programs showcased their projects and results. Each program also submitted a monograph that detailed its question and the background to it (the data), the interventions, data sources, results, reflections, and implications for the field.
Monographs can be found at our website www.pafamilyliteracy.org. Left side, click on SEQUAL project, then Monographs.
The website also includes the PAR handbook that helped programs identify a problem based on data, design an intervention, choose the best data sources, etc.
I evaluated the process to ascertain practitioners' perceptions of the inaugural year of the intentional, systematic PAR process. While it added a layer of work, most felt that it empowered them as practitioners and gave their program "teeth." What was also important is that this allowed them to show highlights of their program and program improvement that mere data do not always capture (e.g., data reported to the state and federal government). Programs used data to inform their question and chart their success. Analyzing and reflecting on the data made it more than mere numbers. This evaluation report is also on the website. It includes a summary of the outcomes from the projects and the perceptions of the participants on the research process.
I checked out the website you referred to and am really impressed with the work. These projects are a great way to get staff to begin interacting around data.
Cathay O. Reta
Drucie and others,
Earlier in this discussion I asked for specific examples (narratives) of teachers systematically using program data to answer their questions.
I see several good examples in the SEQUAL monographs that you suggested we look at. Thank you for calling them to our attention. I would like to mention one in particular: the Seneca Highlands Intermediate Unit 9 project, "Cooperative Learning in Adult Education to Improve Attitudes and Skills in Math."
One of the biggest challenges our field faces is that very, very few (I think under 4%) of those in adult secondary education who say they want to go to college actually complete a degree. There are many reasons for this, but one of the biggest is that they cannot pass (usually required) college algebra. This is because they did not get (positive) exposure to algebra either in school or in an adult literacy education program. It is also because -- even if algebra is offered in their ASE program -- many have negative attitudes about, or fear of, learning algebra. This study, carried out by program practitioners, looks at the use of cooperative learning as a strategy to help students overcome negative attitudes and increase knowledge of algebra during an eight-week program. The monograph is short, well-written, easy to read, and has some findings worth getting excited about. It would be great if other programs, where teachers care about this problem, could replicate it. I wonder if any programs in Pennsylvania have already done that.
It would be terrific if there were a U.S. national adult literacy research institute (such as NCSALL was) that would make funds available to support programs replicating important studies such as this, to help build a body of professional wisdom on the use of cooperative learning in adult numeracy and mathematics. This might provide a sufficient base of evidence to see if it is worthwhile later to do "gold standard" experimental research.
Thanks, Drucie, and other leaders at all levels in Pennsylvania, who have for many years now supported programs using data for program decision-making. It looks like this may be paying off for Pennsylvania practitioners, as they learn what does and doesn't work for their students, and it is contributing to a literature of professional wisdom so necessary in our field.
David J. Rosen
For a dialogue about professional wisdom (including a definition) with John Comings, former Director of the U.S. National Center for the Study of Adult Learning and Literacy, see: http://wiki.literacytent.org/index.php/Professional_Wisdom
Thanks for the positive feedback. The Seneca Highlands project was one of our most successful and most "researchy," with implications for the field at large. We encouraged Pennsylvania family literacy programs to look at the 2007-2008 projects and replicate or adapt a chosen one for 2008-2009, with the intent of building a foundation of knowledge -- at the LOCAL level. A WERC program this year is doing a math project, also using cooperative learning. Unlike the algebra students at Seneca Highlands, these students have low-level math skills (just learning to tell time, count by fives, etc.). Yet, due to the interventions, the students are having high levels of success in terms of math skills and self-efficacy. As the research shows, as confidence improves, so do competencies. The students are PROUD to be part of the research study -- something that action research encourages and that experimental studies do not.
I rather like the idea of a national support for such efforts!!
Thanks so much for this. The website has really interesting resources, and the monographs are - as Cathay noted - impressive. Practitioner research projects are the perfect framework for working with data.
I really liked the steps that are outlined on the Monographs page: question, intervention(s), methodology, reflection, adjustment, analysis of results. These are great descriptors for the steps involved in using data effectively.
I also like where you note below that data "gives the program teeth" - this is so true. Having solid data and knowing what to do with it can open many doors with partners and funders, as well as make the program internally stronger and more effective.
Thanks so much for sharing this resource Drucie,
Marie and All,
Yes, solid data is so important to getting support for our programs, especially when accompanied by those great student stories which we all have and tell. It is the story that gets people listening, pulling at their heartstrings. It is the data that assures them the story isn't a one-time occurrence -- that our program changes the lives of lots of people and is worth their investment.
The discussion thread began with a subscriber's request for strategies for increasing the percentage of pre- and post-tests students in the program complete. Subscribers shared strategies for addressing pre- and post-testing, student persistence, and student retention.
The discussion shifted to focus on managed enrollment as a primary strategy for addressing the above issues. Several programs and a school district (Miami Dade County) weighed in on their positive experiences using managed enrollment.
Hello. I am a latecomer to this discussion and I hope not too late to get feedback from experienced practitioners. I am the program specialist for adult education at Guam Community College. The difficulty we encounter is in raising the level of our paired tests (pre and post); I am getting only about 60%. Is this average for this population? How are others encouraging students to take the post-test before leaving? Or how are they making sure that everyone is post-tested? What strategies have you put in place?
Hafa dai Barbara! Washington State's assessment policy requires a minimum post-test ratio of 50%, and the state average is 59%.
We had 6 (out of 50-some) programs below 50%, with 2 programs in the mid-40% range, 1 in the high-30% range, 2 in the low-30% range, and 1 in the high-20% range. These programs will be developing action plans to improve their results on this data element.
Often, their success is dragged down by poor data processes (not just post-testing processes) at outreach sites, and by not limiting enrollment to inmates with longer sentences at county jail sites.
Programs can receive performance funding for 3-5 point gains on the CASAS test, as well as on other student successes.
I hope all is well with you.
When you say you get 60%, do you mean that you get 60% of the students to take a post-test, or that you get 60% of the students to pass to a higher level when they post-test?
Either way, compared to the numbers we see in Florida, that is great! We see fewer than 50% of students take a post-test, and, depending on the level, usually about 25-30% of the total number of enrolled students pass to a higher level. However, when students do take a post-test, we see that about 50% of them actually pass to a higher level. Programs in Florida use primarily the CASAS Life and Work series for ESOL students. The other tests that our programs use are the BEST Plus and BEST Literacy.
The message we are trying to get out is that programs need to find ways to get students to stay in class long enough to post-test. If a few more percentage points of the students would "persist" until they post-test, the data would show much higher rates of students passing to higher levels. Programs will show much stronger results if only they can home in on "retaining" students!
Florida has had several local programs begin to implement managed enrollment in their ESOL programs, and the results are astoundingly high! Managed Enrollment (ME) programs consistently see 80-90% of students stay long enough to post-test. And of those who post-test, 70-80% pass to a higher level.
Miami Dade school district adult ESOL program did a pilot of 7 sites with ME, and showed these types of numbers. They called the ME classes "Intensive English Academies." Although the curriculum was the same, the length of the courses was shortened to 7-8 weeks, and after the first week the classes were closed to new students entering. The teachers and students found they were free from the chaos of students coming and going so much, and were able to build on previous lessons better. For more information about the ESOL Academies, visit www.floridaadultesol.org, or write to Dr. Beatriz Diaz, Adult ESOL Coordinator, at firstname.lastname@example.org.
Your statistics on Managed Enrollment (ME) are very encouraging. I am going to save your email to use it later on when speaking with programs that are resistant to ME.
I also find it insightful that your courses are only 7-8 weeks. About how many hours of instruction per week would there be? Do the students have enough hours to qualify for a post-test?
Nancy R. Faux
Your questions are good ones! The schools that have implemented Managed Enrollment (ME) primarily offer the ME classes in the daytime, and they meet 4-5 times a week for 3 hours each day. A minimum of 12 hours a week is considered by most to be okay, but the best amount of time is 16 to 20 hours a week. All the schools that have ME use the CASAS test, and all classes meet for longer than the 70-100 hours recommended by CASAS between the pre-test and post-test.
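As a quick sanity check on the arithmetic above, a short sketch can confirm that schedules like these clear the CASAS-recommended 70-100 hours between pre- and post-test. (The function names and the 70-hour floor used here are illustrative assumptions, not any program's actual policy.)

```python
# Rough check of whether a managed-enrollment schedule accumulates enough
# instructional hours to qualify students for a CASAS post-test.

def total_hours(weeks, days_per_week, hours_per_day):
    """Total instructional hours over the length of the course."""
    return weeks * days_per_week * hours_per_day

def meets_casas_minimum(hours, minimum=70):
    """True if the course reaches the low end of CASAS's recommended range."""
    return hours >= minimum

# A 7-week course meeting 4 days a week, 3 hours a day:
hours = total_hours(7, 4, 3)
print(hours, meets_casas_minimum(hours))  # 84 True
```

Even the shortest schedule described (7 weeks at 4 sessions of 3 hours) lands at 84 hours, comfortably inside the recommended window.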
The main reason the programs in the Miami Dade school district chose to make the courses 7-8 weeks (some are 9) in length was to facilitate implementation of Florida's newly revised Adult ESOL standards. The standards are organized around 7 life-skills topics, with anchor standards for grammar at each of the six NRS educational functioning levels. While it is common knowledge that no teacher can cover all the materials available in 7 life-skills topics in 7 to 9 weeks, students can recycle through the material until they spiral upward and on to other levels. The key seems to be that students and teachers are free from the distraction of new students entering the class at all times. I call it getting out of the ESOL shuffle and getting on the ESOL shuttle. ME gives them the ability to build on the knowledge gained in the previous class session. They don't have to review so often or slow down for those who are just getting familiar with the teacher and the class. It is really amazing to see how much students can learn so quickly when they are able to focus and not be distracted.
Another factor is that in the first week (orientation), students are prepped with a strong message of taking responsibility for their learning. Students also sign a contract that spells out their commitment to not miss any classes and to stay for the duration of the course. The teachers show the students that they expect a lot of them, and they make it really happen.
This thread is interesting to me. If you look at the focus areas in the first module of the initiative (see Recommended Preparations for the Discussion at http://lincs.ed.gov/lincs/discussions/assessment/08data.html) you will see that a number of these areas speak either directly or indirectly to the exchanges by Barbara, Phil and Nancy. They focus on things that would affect pre- and post-testing, student follow-up, achievement of goals, and the like. Managed enrollment also came up during our work with Module 2 as well.
One of the things that Cathay and I found in identifying programs that employed successful practices was a deliberate move away from open-enrollment toward managed enrollment. The consequences of such a move completely affected important items like student testing, recruitment and retention, even staff retention, in very positive ways. It's a layer of accountability that is the responsibility first of the student and therefore can change the dynamic of the program.
Cathay - do you have any comments about managed enrollment and its positive effects on the program? Vivian, Donielle, and Lori - any thoughts here on open versus managed enrollment?
Everyone - your thoughts?
Our program has managed enrollment. At times, instructors have allowed students to come in after a session has started. Most of the time, this has not been effective. The new student must receive the required 12 hours of orientation prior to any instruction. This requires the instructor or other staff member to set aside time to do the orientation. The new student must try to "catch up" on missed lessons in order to not hold up the rest of the class.
When a student must wait for the next enrollment period (our sessions run for 8 weeks), it sets the expectation that there are criteria that must be met. We require pre-enrollment, at which time the student fills out a student record and takes the appraisal in math and reading. Once a session starts and the student has completed the required orientation, a learning agreement is signed which includes days and times of instruction. By enforcing the attendance policy, we have found that students who are un-enrolled because of attendance but decide to come back for another session are more successful.
By following the guidelines in the Kansas Adult Education Policy and Procedures Manual, staff and students have clearly defined practices that must be followed. Adult educators across the State have already done the research for us.
Marie and all,
I also thought it was interesting to see how many programs identified as "model programs" had gone to managed enrollment, although many still keep an open-entry option on a limited basis when they feel it necessary. This discovery was not something we were looking for when we went to visit the programs; it was an interesting surprise.
An interesting example coming up in the next module of the training is the Cornerstones Career Learning Center in South Dakota. Once they adopted managed enrollment they found staff was no longer complaining as much about not having enough time! . . . a nice by-product. With the change, they also began monthly half-day staff meetings which include reviewing data together, professional development and more.
On another note, setting student attendance policies can also be effective. A most impressive example is the Hawthorn Family Literacy Program in California. When they established (and enforced) a student attendance policy, instead of losing numbers of students (which is what people cautioned them would happen), it increased their numbers. With students attending more regularly, they made better gains, and the word got out: "You will really learn in this program." They had to set up waiting lists and there is very little student attrition (which means students are around long enough to post test). Did I get that right, Donielle?
Cathay O. Reta
Again, this is a very interesting post and gives substantial evidence to support managed enrollment. I especially liked that teachers had more time for professional development and working together to improve the program.
Nancy R. Faux
Discussion areas include:
- using data to determine whether staff development has been successful
- correlating student data with teacher professional development
- identifying variables that would affect measuring the effects of professional development
- teaching as art vs. science/what can and can't be measured
- teachers as variables themselves
- theory vs practice
- "site-based" professional development
Data is not just for classroom instructional staff to analyze. Our staff meeting yesterday (of teacher trainers responsible for staff development) focused on using attendance and ADA statistics collected since 1999 as a way to determine whether or not our team's staff development efforts over the last several years have resulted in increases in attendance and retention by students whose teachers have taken staff development workshops. We have an immediate and pressing interest in doing so, as expected district-wide budget shortfalls of millions of dollars are leading some at the district level to advocate for the elimination of staff-development programs in the coming year. We obviously feel that teachers who improve their skills will retain students better than those who don't, but we'd like to be able to point to data that demonstrates that.
Could you please explain how you are correlating student attendance and retention with teacher participation in professional development, or the workshops that you offer? We are exploring ways of doing this, also.
Nancy R. Faux
The attempt to correlate student attendance and retention with teacher participation in professional development is still in very preliminary stages, and a professional statistician might say that we're going about it in the wrong way. One aspect of what is being discussed is a "retention" score derived from the actual hours students attended during a certain time period divided by the total number of hours all students enrolled in the class could potentially have attended (if every student had attended every hour from the time of enrollment until leaving the course or until the end of the specified period). So let's say that some 42 students could have attended a maximum total of 3000 hours of class time during the time period being examined, and the actual attendance of those students during that period was 1500 hours. 1500 actual hours divided by 3000 possible hours gives a 50 percent figure. By doing the same calculation for every class offered, a division-wide "average retention" figure can be established for a particular type of class.
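The calculation described above can be sketched in a few lines. (A minimal illustration: only the 1500-of-3000-hours example comes from the message; the second class in the division-wide average is made up, and the function names are mine.)

```python
# "Retention" score: actual hours attended divided by the maximum hours
# students could have attended over the period.

def retention_rate(actual_hours, possible_hours):
    return actual_hours / possible_hours

def division_average(classes):
    """Average retention across all classes of a given type.
    `classes` is a list of (actual_hours, possible_hours) pairs."""
    return sum(retention_rate(a, p) for a, p in classes) / len(classes)

print(retention_rate(1500, 3000))  # 0.5 -- the 50 percent figure above
print(division_average([(1500, 3000), (2000, 2500)]))  # 0.65
```

Comparing a teacher's figure before and after training, as described next, would then just be this calculation run on two different time windows.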
The idea is that by identifying teachers who have taken staff development courses, and then looking at their individual average retention figure "pre" and "post" training, the effect of the training on an individual teacher's retention might be demonstrated and in turn the effect on all teachers who have attended trainings as a group. I'm not sure what variables other than the training are being considered. Again, these ideas are all preliminary and experimental so they're not for wider dissemination. It would obviously be preferable to have a controlled double-blind study but that seems to be out of reach at the moment…
Thanks for explaining that. I think there are so many variables that any study of this nature needs to be considered very carefully. For example, of what quality was the professional development? Was the information in the professional development used by the instructor? How many of the students in a particular class would have dropped out regardless of what the instructor does due to life circumstances? Is it possible that the professional development had a negative impact?
I am not a statistician, but I imagine some of these variables could be accounted for with a large enough sample. However, my question would be, how large would the sample have to be?
Teaching is both an art and a science. The science part can be measured. The art part cannot. I personally think we need to be constantly mindful of the art of it all.
Creating a community within a classroom, a place where students feel connected and where they believe others care and support them, others including their classmates, goes further than any one method or approach to teaching material.
Since we're the ones providing the professional development, I can assure you that it was of the highest quality!
But yes of course, you're right. There are many variables that would need to be controlled for before one could be sure of a direct relationship between professional development and attendance but this is what we're starting with! Having said that, one figure that is quite interesting to us is that the number of student enrollments required to generate 1 unit of ADA has dropped pretty dramatically in the last 8 years, but our ESL ADA has risen. For example, in the 2000/2001 school year there were some 189,000 ESL enrollments generating about 37,700 units of ADA meaning that it took 5.02 student enrollments to generate 1 ADA. In 2007/2008, there were 167,800 ESL enrollments and about 40,900 units of ADA returning a figure of 4.1 enrollments per 1 ADA. At the least, these figures appear to indicate that at least some aspect of efficiency has increased. Students must be coming to class more often, staying enrolled longer or both…
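For reference, the enrollments-per-ADA figures above work out as follows. (The source figures are themselves rounded, "some 189,000" and "about 37,700," so the computed 2000/2001 ratio comes to about 5.01 rather than the 5.02 cited; the function name is mine.)

```python
# Enrollments needed to generate one unit of ADA: a lower number means
# students are attending more often, staying enrolled longer, or both.

def enrollments_per_ada(enrollments, ada_units):
    return enrollments / ada_units

print(round(enrollments_per_ada(189_000, 37_700), 2))  # ~5.01 in 2000/2001
print(round(enrollments_per_ada(167_800, 40_900), 2))  # ~4.1 in 2007/2008
```

The drop from roughly 5 enrollments per ADA to roughly 4.1 is the efficiency gain the message points to.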
We're planning on saying it's because our teachers have been coming to our trainings so students like the classes more and feel they're learning more… ;)
There appears to be an assumption that participating in professional development improves instruction. I would not be surprised to find a small positive effect, but could it be that the "better" instructors attend PD while the others...
Interesting discussion. In my experience, those instructors who seek to participate in professional development opportunities are those who are open to trying new teaching strategies and personal improvement. They are not frightened of admitting that there is always more to learn…which seems to be what we hope for in students too. Resistance to participation in training, on the other hand, is not often a sign of a strong instructor. Of course, just attending is not the important thing, but actually trying to find at least one thing to add to the bag of tricks is the key. I'm not sure that you can measure this formally, but I suspect that if you could review performance reviews of both groups, you'd find that those who openly participate in professional development are identified as better instructors across the board.
As a former training manager I know that some companies would insist on an exit test for employees. Lucent had exit tests; employees had to demonstrate that they had "passed" the course, and these examinations were not perfunctory. There is a body of research indicating that managers who monitor the specifics of the knowledge and skills that were trained get a bigger bang for their training buck. Neither of these outcome measures is the norm, but they do occur. In the ed biz and adult ed biz we usually do neither. As a tenured full professor at the University of Nebraska, I have taught ed grad courses. Most ed grad degrees are long on theory and woefully short on practice. Biz training is short on theory and longer on practice. And adult ed professional development is...
Hi Barry and others,
Of course, teachers who sign up for (voluntary?) PD might be the kind of teachers who actively seek solutions to teaching/learning problems. It may be that you are measuring the relationship between learners' retention and their teachers' motivation to solve classroom problems, not increased (or decreased) retention as a result of the professional development itself.
It is difficult to isolate variables in adult ed. One of the hardest to isolate is teacher training. I think the most promising way to measure impact of teacher training on learner outcomes is in a content or skills area where a control group has a teacher with little knowledge and no training in the area being measured and the experimental group has a teacher who gets specific teacher training (including training on content and skills for herself) in the area. Learning outcomes in that area are then measured and compared for both groups of learners.
Not perfect, of course, but it has potential. Numeracy and certain computer skills might be two such learning/teaching areas where a control group teacher does not have skills or knowledge and where an experimental group teacher gets specific training related to teaching numeracy or computer skills (such as student web page design, or using a classroom wiki or a blog to promote writing).
Has anyone tried an experiment like this?
David J. Rosen
Hello David, Barry and all,
David's point speaks to the importance of small case study comparative analyses, which then can provide a basis for a more extended research project. While one case study may be deemed as anecdotal, which is often characterized unjustly in pejorative terms, 10 such studies point to broader trends.
One of the best places to access such studies is the National Adult Literacy Database, particularly its research base. One of the key challenges is to take a close look at 10-12 such studies on a given topic and critically identify key variables and evaluate broad trends. In general, the field of adult literacy research does not yet rise to a level of sophistication that would encourage such work--good topics for PhD programs and for folks who can dedicate a lot of unhurried quality time to such work and to making the basic insights gleaned publicly accessible.
One of the key problems is the limitation of our research designs. To the extent that experimental design is referred to as "the gold standard," other intellectual frameworks that can and do guide quality research get subsumed within an underlying and often unconscious positivist mindset. There is much good work, both theoretical and empirical, that bespeaks a broader vision of research and of the many values of adult literacy education. The lack of legitimacy of these frameworks is a principal stumbling block, as is the need to greatly enrich both the content and the methodological framework that underlie so many reports and research projects in adult literacy. Schools of thought that build primarily on the "thick description" of ethnography, as well as the theoretical impetus of critical pedagogy, help, but they need to be cross-evaluated from perspectives other than their own.
I do think the ongoing flow of work coming out of the new literacy studies holds a great deal of potential. Yet unless the theoretical constructs that underpin such work get greater play in broad-based policy circles, they will tend to remain isolated in various academic and practitioner-based enclaves.
Here are a few resources on the new literacy studies:
In my work I refer to a U.S. and a UK version of the new literacy studies. I believe the EFF project is an example of the former, though given Juliet Merrifield's UK heritage, it may be more of a blending in intent. In primarily focusing on literacy "practices" rather than on the social and cultural contexts in which such practices are embedded, the thrust of the EFF project aligns, and rightly so, more with the US than the UK version.
Clearly there's a great deal of room for cross-fertilization, as well as plenty of opportunity to incorporate a diversity of research methodologies, particularly when methodology is viewed as a tool, not itself the source of insight and knowledge.
No doubt there's a lot of work that needs to be done.
PS for the computer specialists: How does one do spell check in hotmail?
Good morning Roger and all.
A couple of thoughts on theory:
If one defines theory as a construction, then theory, articulated or not as such, is inescapable in any learning process. For me the question is less whether there is too much or too little theory than its cogency in relationship to whatever problem or issue is at hand, oftentimes in the very shaping of the problem statement itself. The same goes for practice: not so much whether it is short or long, but its quality and comprehensiveness in relationship to the focus of study or problem situation. In this respect, Kolb's cycle of learning is as useful as anything: http://www.infed.org/biblio/b-explrn.htm. See also the links to Dewey and Lewin, which can be accessed from this page.
What is needed is good praxis: http://en.wikipedia.org/wiki/Praxis_(process). Of course what this is in any context may be contestable, but working toward developing good praxis among an informed body of participants in a given field is as decent a way of moving forward as any. On the latter, the communities of practice literature is worth taking a look at: http://www.ewenger.com/theory/. Note, too, Jackie Taylor's recent article in the Fall 2008 volume of the ABE Journal, titled "Tapping Online Professional Development Through Communities of Practice: Examples from the NIFL Discussion Lists." The work of Lytle and Cochran-Smith on teacher research is important too in working toward bringing theory and practice into closer alignment. The abundance of technical work on 20th-century learning theory also needs to be factored in, so that we don't jump to simplistic one-theory solutions like MI, for example. Broadly speaking, theoretical work and formal research in adult literacy need to be embedded within educational scholarship as a whole. We've got a ways to go. Working through the difficult issue of good theory construction is an essential part of the process.
I agree and acknowledge the problem. As the late Kurt Lewin observed, "There is nothing so practical as a good theory."
However, I think that when our adult educators and other practitioners leave our classes they should know how rather than merely know about. The implication of know-how is practice and feedback. Before I left Nebraska I taught some classes that focused on theory-based know-how.
Professional development sessions in our division (adult education) arise out of needs assessments filled out by teachers and administrators and are delivered by current and former in-classroom teachers in a mostly "voluntary participation" system; teachers choose to attend for the most part, so our professional development operates under a more "business-like" model of needing to provide services that people want. Some teachers do have an added incentive/requirement of needing a certain number of hours of staff development to advance up the salary scale, but that is not the case with all teachers who attend. All staff development ends with an evaluation by participants. The evaluations are taken seriously and used to improve the sessions.
On the other hand, over the past 20 years I have personally attended staff development opportunities, mandated by the larger K-12 system in which we operate, that were absolutely horrendous, and I'm sure that any positive outcome that could be found in teachers' classrooms occurred in spite of the training instead of because of it. Excellent teachers will find the five minutes of useful information in a two-hour training and turn it into excellent lessons and get great outcomes. Horrendous trainers can take five minutes of useful information and turn it into a two-hour opportunity for teachers to read newspapers, instant message, check email and doze.
One trend in staff development that we're trying to encourage is what is called "site-based professional development." In this model, staff development choices arise out of collective decision-making and needs assessment at the local site level. Trainings can be given by the teachers themselves to each other, by off-site "experts," through peer-coaching or any combination of a variety of activities. The key point is that the form and content of the training is determined through a joint effort of the site's teachers and administrators.
I thought this message on the relationship between organizational development and return on investment, from the Organizational Development Network Listserv has some interesting parallels with our discussion on the relationship between professional developments and direct student outcomes. Go here for information on the OD Network http://www.odnetwork.org/resources/discussions/index.php
> From: > To: email@example.com
> Date: Fri, 12 Dec 2008 16:13:43 -0800
> Subject: Re: [Odnet] OD and ROI
> It's unfortunate the terminology "ROI" ever got started with regards to OD.
> ROI means Return on Investment. Return is the revenue received from an
> investment. An ROI analysis is used to determine the revenues expected
> from an investment in a product.
> An organization must invest in many other things other than the products
> (or services) it sells. For lack of another term, I'll use the term
> infrastructure. Infrastructure investments include, for example,
> buildings, computers, light bulbs, salaries, training, legal fees, postage,
> and a whole host of expenses that have these characteristics:
> 1. They are expended to stay in business or to enhance the business
> 2. There is no direct line-of-sight connection with revenues or profits.
> There's no "proof" that spending more will increase revenues or profits. If
> there's an obvious connection, it's only one of many factors contributing
> to the revenues or profits.
> Investing in OD work never produces revenues directly. Sometimes there can
> be a connection drawn with revenues or profits but even then the connection
> is not very tight. Thus OD never (I would say) produces a return on
> investment. Let's stop using that term!
> What actually happens is this: A manager believes that a certain activity
> (whether it's a socio-technical analysis or a leadership coaching initiative
> or painting the walls a different color or upgrading the computer system)
> will have a beneficial impact and is willing to invest in that activity.
> As someone else mentioned, the true measure of value, then, is whether the
> activity had the effect that was expected.
> And it's the role of the manager (not the role of the OD consultant) to
> articulate the expectation at the outset of the engagement.
> Where we really get ourselves in a pickle is when we accept the role of
> trying to "prove" that the OD engagement had a "return" on investment after
> the fact.
Please note: We do not control and cannot guarantee the relevance, timeliness, or accuracy of the materials provided by other agencies or organizations via links off-site, nor do we endorse other agencies or organizations, their views, products or services.