National Reporting System (NRS) - 2005

The following Guest Discussion took place on the NIFL-Assessment Listserv between March 21 and March 25, 2005. The topic is the National Reporting System (NRS).

Summary of Discussion

The discussion guest was Larry Condelli, Managing Director, Adult Education Group of the American Institutes for Research. Larry's work includes development and management of the National Reporting System (NRS). The discussion opened with questions concerning test scores: whether teachers and other program staff have access to them and, if so, how they are used; how test scores could be helpful to teachers and programs; and which assessments people prefer and why those are more helpful for instruction. In addition, changes to the NRS taking effect in 2005 and in 2006 were posted so that participants could pose questions and comments. The changes include changes in reporting procedures and content, changes to the ESL functioning levels, and changes in reporting pre- and post-test scores. Larry initially explained that the reporting change adjusts the reporting timeline to fit the Title I reporting timeline. The ESL functioning levels were changed because they were too broad; the adjustment makes it easier for students to demonstrate educational improvement.

Much of the following discussion focused on the use of test scores. Some programs reported that placement scores did not align well with scores on the actual tests; the GED was cited as an example of this dilemma. One post noted that data derived from testing was useful only as a guide and could not replace assessment through classroom interaction. Larry noted the difficult dichotomy with the NRS-approved assessments: they are needed for accountability, but they can be inadequate for informing instruction. Some discussion of the use of the CASAS to inform instruction ensued. One person noted that test data can be used to identify areas of instruction for both individuals and whole groups, to pinpoint faulty class placement, and to guide changes to programs and curricula. This person also noted that she regularly supplements standardized test results with student self-assessments.

The discussion then focused on the use of test scores versus data about skills. It was noted that tests such as the TABE are not keyed to specific skills, so the scores are not useful for classroom instruction. A typical progression in ABE is for students to study at the TABE level and move to the GED level, but the two tests are not correlated, so TABE scores are not useful as indicators of students' readiness to move to the GED. The discussion explored the difficulty of using testing measures as indicators for advancement or instructional purposes with the TABE, GED, and CASAS, and it was noted that there is a great need for this type of data derived from test scores.

The Performance Assessment Development Summary document was cited in response to the above, and Larry noted that OVAE requested the development of this document as a guide for programs. State Report Cards were also discussed; it was noted that this is not a focus of work at this time but may be explored with NRS project states in the future. A couple of people noted that their states used report card systems, and described some of the pros and cons.


Announcement

Dear Assessment List:

Please join us this week: March 21 - 25

Topic: The NRS at the Local Level and Upcoming Changes to the NRS

Guest: Larry Condelli, Managing Director, Adult Education Group,
American Institutes for Research

Recommended preparations for this discussion:

You can read about developing assessments at:

Developing performance assessments for adult literacy learners: a summary for NRS guidelines http://www.nrsweb.org/reports/PerformanceAssessments.pdf

Mr. Condelli has posed the following questions for you to begin our
discussion about NRS at the Local Level:

Do teachers/program staff have access to test scores? Do they use them? How? If not, why not? How are they useful -- do they help instruction? What are the shortcomings? What could be done at the state/federal levels to help programs use test scores? For example, what kinds of analysis/assistance might be helpful?

What types of assessment would be preferred? Why are these more helpful
in informing instruction?

How does the NRS decide whether assessments can be used for the NRS?

For Upcoming Changes to the NRS, I will provide a URL for you shortly so you can go to a couple of documents that outline the new changes to the NRS.

Thanks and welcome to Larry Condelli!

marie cora

Moderator, NIFL Assessment Discussion List, and

Coordinator/Developer LINCS Assessment Special Collection at
http://literacy.kent.edu/Midwest/assessment/


Discussion Thread


Hi everyone,

As promised, I'm pasting below the two short documents (which will be at the NRS site soon) that outline the changes to the NRS, for this year, and for 2006. Now is your opportunity to ask Larry any questions you might have about these changes.

I notice that one of the changes for this year is to align better with
DOL's reporting procedures.

Larry, can you explain the reasons for this change? Folks who work in
workplace ed programs or work with DOL, or whose program focus is on the employment pieces - how might these changes affect your program and your reporting?

I note that for 2006 there is an emphasis on shifting the ESOL levels -
it looks interesting, from what little I know/understand. Larry, can you
give us a little background on the shift? What do ESOL workers think -
it looks like there's a shift to focus a bit more on the lower levels.

Thanks!

marie cora


Proposed Changes for Program Year 2005 (Beginning July 1, 2005)

  1. Change Reporting for Employment Measures to Align with WIA Title I
    Reporting.

      *We will change our reporting period for the employment measures to match the Department of Labor's reporting period for Title I programs. Adult education's PY 2005 reporting will remain the same; beginning in PY 2006, the employment measures will follow the Title I reporting period. Survey states will need to begin surveying retained employment in all quarters.


  2. There will be a new required table to record the number and types of
    local grantees.

      *This table will be similar to one that had been used in years prior to the NRS.


  3. Status of Optional Tables.

      *Optional Table 10 (correctional education) will become required.

Proposed Changes for Program Year 2006 (Beginning July 1, 2006)

  1. Revise ESL Educational Functioning Levels to Eliminate High
    Advanced Level and Add a New Beginning Level.

      *The beginning ESL level contains about 30 percent of all ESL learners and is quite broad, covering 20 points on CASAS, for example, which makes it harder for students to advance out of the beginning level. The High Advanced ESL level has also been problematic since its exit criteria have been undefined, and only about 43,000 students (about 3.6% of all ESL students) were enrolled nationally at the High Advanced level each year.


    ED will split beginning ESL into two levels (Low Beginning and High
    Beginning) and drop High Advanced ESL. Other levels will remain as they are. The table below illustrates the proposed change.



    Proposed ESL Educational Levels

    Level                    Entry Benchmark CASAS    Entry Benchmark SPL Speaking
    ESL Literacy             180 and below            SPL 0-1
    Low Beginning ESL        181-190                  SPL 2
    High Beginning ESL       191-200                  SPL 3
    Low Intermediate ESL     201-210                  SPL 4
    High Intermediate ESL    211-220                  SPL 5
    Advanced ESL Literacy    221-235                  SPL 6



  2. Reporting of Level Advancement for Pre- and Post-tested Students
    and Multiple Advancements (Tables 4a and 4b)

    *Table 4 remains the same and will still be required.

    *Table 4a, which measures multiple advancements in educational functioning levels, will be discontinued.

    *Table 4b will be required for reporting. This table is the same as Table 4, except only students who were both pre- and post-tested are included.
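
For readers who want to check a score against the proposed bands, here is a minimal sketch in Python of the CASAS entry benchmarks from the ESL table above. It is illustrative only, not an official NRS tool, and the function name is invented for the example:

    # Minimal sketch of the proposed ESL entry benchmarks (CASAS scale
    # scores) from the table above. Illustrative only.
    PROPOSED_LEVELS = [
        (180, "ESL Literacy"),           # 180 and below, SPL 0-1
        (190, "Low Beginning ESL"),      # 181-190, SPL 2
        (200, "High Beginning ESL"),     # 191-200, SPL 3
        (210, "Low Intermediate ESL"),   # 201-210, SPL 4
        (220, "High Intermediate ESL"),  # 211-220, SPL 5
        (235, "Advanced ESL Literacy"),  # 221-235, SPL 6
    ]

    def proposed_esl_level(casas_score):
        """Return the proposed ESL functioning level for a CASAS entry score."""
        for upper_bound, level in PROPOSED_LEVELS:
            if casas_score <= upper_bound:
                return level
        return "above NRS ESL range"

    # A learner entering at CASAS 185 now places into Low Beginning ESL rather
    # than one broad Beginning level, so a move to 191+ shows as a level gain.
    print(proposed_esl_level(185))   # -> Low Beginning ESL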

Hi Marie,

You asked about the reasons for the change that aligns reporting of the
employment measures with the Title I (DOL) reporting timeline.

The reason for this change is that the adult education program and job
training programs (funded through WIA Title I) are required to define,
measure and report employment outcomes in the same way. Employment
programs collect their measures through data matching with the UI wage
databases in each state. These databases are usually about 9 months
behind, so the employment outcomes reported are for previous years.
Adult education, on the other hand, reports for the current program
year. This has made it difficult for adult education programs to report a full year of employment outcomes on students. It also has made it impossible for adult education and employment programs to be compared on the employment measures (which is done by Congress and other agencies when considering funding). Aligning the reporting periods solves both problems: adult education can get a full year's data and its employment data can be compared to employment programs.

The second question you asked was why the ESL levels will be changed.
The change will split the ESL beginning level into low beginning and
high beginning. The current beginning level is very broad, covering 20
points on the CASAS tests and two SPLs, for example. This range made it difficult for programs to demonstrate student progress out of the level. By splitting the level, programs can show student improvement more easily. The other change is that the high advanced and low advanced ESL levels are being combined into one level, primarily because there are relatively few students in the advanced ESL level. Only about 5 percent of students nationally were in the high advanced level.


Hi everyone,

Larry posed some really good questions I think. I often wonder if
people/programs use data for program improvement (and how), so the
variation below on that theme is particularly interesting for me:

Do teachers/program staff have access to test scores? Do they use them?
How? If not, why not? How are they useful -- do they help instruction?
What are the shortcomings? What could be done at the state/federal
levels to help programs use test scores? For example what kinds of
analysis/assistance might be helpful?

For example, of the list of assessments that can be used for the NRS
(and I don't have that in front of me so I don't know them all off
hand), what do you get out of the TABE, BEST Plus, CASAS, for example?
The NRS certainly uses that data - but can you or do you? How and what
for? Does it inform your teaching and your classroom?

Has anyone developed performance assessments that are being used now for the NRS? Intuitively I feel like that type of assessment would be
readily used by teachers, but that it is less accessible for the NRS.
Larry, can you or anyone comment on this?

Thanks,

marie cora


Do teachers/program staff have access to test scores?

We do have access to them. They are given to us when a new student
arrives. Additionally, instructors are responsible for monitoring when
post-tests are needed and ensuring that the students are post-tested.

Do they use them? How? If not, why not? How are they useful -- do they help instruction?

I can't speak for everyone in my program. I will say I use them - but
only as a guideline. I learned a long time ago that the placement test
scores did little to really tell me what a student knew or didn't know.
I explain to my students that the placement test scores are just that: a means by which to place a student in the appropriate classes to ensure their success. They are not at all useful in terms of instruction, and I think they give students a false sense of security. I have students who come in and assume b/c they tested at a 9.something level that they should be able to just take the GED, and I have to explain to them why that's not necessarily true. Then I give the student my own pre-tests to see where they are in terms of being able to pass the GED; often, they come back and admit they're not as ready as they thought. I then have to answer the question "How come I did so well on the placement tests but not on the pre-tests you gave?"

What are the shortcomings? What could be done at the state/federal
levels to help programs use test scores? For example what kinds of
analysis/assistance might be helpful?

The shortcomings are that the placement tests are not as broad reaching
as, say, the GED tests. It is by no means a fair leap to assume that
simply b/c you place at the GED High level according to the TABE,
you're automatically ready to take the GED. That simply isn't so, but
students get so caught up in the "placement" level that it sometimes
creates the "I already know that" barrier.

For example, of the list of assessments that can be used for the NRS
(and I don't have that in front of me so I don't know them all off
hand), what do you get out of the TABE, BEST Plus, CASAS, for example? The NRS certainly uses that data - but can you or do you? How and what for? Does it inform your teaching and your classroom?

I use it as a guideline, sort of a baseline to see where a student MIGHT be...but that's it. I have students who come in at the GED Intermediate or High level, and yet when I give them a pre-test to determine where their level is in relation to the GED, they are nowhere near ready - often missing more than 50% of a 50 question test. Additionally, when I ask during a personal interview when the student left traditional school and find out they left in 7th grade, I find myself asking how they scored at a 9th grade level or even higher in some cases, especially if they have not previously been enrolled in a basic skills class.

I think the placement and test scores are guidelines but not something
that can replace one on one interviewing and assessment done between the teacher and student. I also don't think they are used the same by all teachers.

Katrina Hinson


Katrina,

I think your experiences reflect what most teachers feel about NRS
assessments. We don't have very many assessment instruments in adult
education that meet the rigorous psychometric requirements of the NRS,
and the ones we do use (TABE, CASAS, BEST, etc.) sometimes have to be
used for broad purposes. They do meet accountability requirements
and offer some information about student performance, but they are often inadequate for informing instruction. We do recommend the use of other assessments for instructional and other purposes, although we run the risk of over-assessing students if we go too far. Limited time is also a factor.

Marie has suggested the use of performance assessments for this purpose. Such assessments can also be standardized and used in the NRS (the BEST Plus is an example) but it is very difficult to do all of the research and development work. However, many programs use performance
assessments or curriculum-based assessments to supplement the
information from NRS tests.

Larry Condelli


I found that when I was both assessing and teaching in ABE/ESL, I'd have to do my own skills breakdowns of CASAS test items and tasks. Teaching the tasks (not just the content, the "right" answer) is also a plus. If the curriculum was competency based, I could use the CASAS and its curriculum matrix, but it wasn't always reflective of the students' needs. I would do the charting that allowed me to see whether an entire group of students, or just selected ones, had trouble on particular test items, and give students the lists of competencies tested and which ones they had trouble with.

Best regards,

Bonnie Odiorne, Ph.D.
English Language Institute, Writing Center
Post University, Waterbury, CT


Hi Katrina, Larry, and everyone,

Katrina, you noted:

"I think the placement and test scores are guidelines but not something
that can replace one on one interviewing and assessment done between the teacher and student. I also don't think they are used the same by all teachers."

I would agree with you on both those points. I don't believe that
teachers use testing instruments the same way, for very basic reasons: they
don't fully understand what it means for a test to be standardized (i.e.: you MUST administer your test the same way to everyone, or your results are
simply invalid: useless!), and they don't ask the right types of
questions of themselves in order to make an informed choice about a
tool. Teachers and administrators should be asking some fundamental
questions BEFORE selecting an assessment - questions about the purpose of the test and the purpose the teacher has for giving a test - and then matching the answers to those questions as best as possible.

I was recently presenting at a conference in which a participant
lamented that her program spent months trying to "match" or "connect"
the scores of the TABE with scores from the ABLE test (they used the
ABLE, but the funder wanted to see TABE results). Well, you cannot do that: no tests align with one another unless they were developed together in the same way. I do know that some performance assessments do their best to align themselves with the NRS levels - the REEP Writing Assessment is one (we had a guest discussion on the REEP several weeks back) - and by the way, the REEP is an excellent performance-based assessment that truly informs classroom writing.

Another participant in my session noted that the teachers in her program all administered the TABE however they saw fit (i.e.: ditching the timed piece; giving the placement in place of the full form; giving ESOL learners the TABE for some reason). I told the group that they might as well not give the test at all and just make up scores, because it amounts to exactly the same thing.

Would you ever just make up scores and send them in to whoever?
Probably not. But if you don't follow tenets of standardization, then
just go ahead and make up your scores, cuz that's exactly what you're
doing anyway.

I would also agree that we need a more holistic look at a person's
performance - and I think the NRS is trying to do that, by focusing
some attention on performance assessments and by making some of the shifts in levels.

For a great list of questions that you can ask yourself and your program about the tests you will select or are using, go to the ALEWiki
Assessment section and click on Selecting Assessment Tools. There is
also a section on Commercially Available Assessment Tools that describes the most commonly used ones in detail, and includes some discussion excerpts on these tests.
The ALEWiki is at:
http://wiki.literacytent.org/index.php/Main_Page

You can also go to the Assessment Collection area called Selecting
Assessments for a Variety of Purposes - there you will find web
resources that speak directly to the topics discussed here.
http://literacy.kent.edu/Midwest/assessment/tt_types.html

Thanks!

marie cora

Moderator, NIFL Assessment Discussion List, and

Coordinator/Developer LINCS Assessment Special Collection at
http://literacy.kent.edu/Midwest/assessment/


I'd just like to add a couple of ideas to the exchange about assessments, from an ESOL perspective:

  1. I believe that they can serve a definite instructional purpose.
    Results indicate specific areas for whole class or one-on-one instruction. They can drive curriculum adjustments. They are also a check on faulty initial class placements. Here at NDEC we've begun to develop some writing materials after going over REEP Test pre and mid-year results. BEST Plus results, along with student opinion, resulted in a series of four-week, level appropriate, optional pronunciation practice classes with weekly taping sessions.


  2. Along with standardized tests, I continue to look for student self-assessment materials. These provide some balance to the entire assessment process. An adult ESOL learner knows from life experience outside the place of instruction what his/her performance "really" is. Encouraging them to bring these real-life experiences to the educational process is, I believe, extremely important.

If anyone on the list-serve has samples of student self-assessment
procedures or materials, I'd appreciate receiving copies or being directed to resources.

Maureen O'Brien

NDEC Boston


List discussion regarding assessments,

Here's what I feel is needed with the TABE and all assessments to better facilitate instruction, both for a group and for individual instruction. Each question should be keyed to the specific skill that the question is trying to measure, and those skills could be listed in numerical order so that if a student missed question number 7 in TABE 7-M Language, you would know that they may need to review the correct "use of pronouns" or "commas in a series" or whatever.
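
As a rough sketch of the keying described above (in Python; the item numbers and skill labels here are invented for illustration, not taken from an actual TABE form):

    # Hypothetical item-to-skill key for one test form; a real key would map
    # every item number on the form to the skill it is meant to measure.
    ITEM_SKILLS = {
        7: "use of pronouns",
        8: "commas in a series",
        9: "subject-verb agreement",
    }

    def skills_to_review(missed_items):
        """Given the item numbers a student missed, list the skills to review."""
        return sorted({ITEM_SKILLS[i] for i in missed_items if i in ITEM_SKILLS})

    print(skills_to_review([7, 9]))  # -> ['subject-verb agreement', 'use of pronouns']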

The TABE gives a correlation chart, but it is too general and not
formatted for easy information retrieval. I made up my own for all of the TABE tests and all of the GED Practice tests. I am not teaching to the test but rather teaching the skills that the student missed.

What is lacking in the TABE (which most ABLE programs use) is that grade
level scores do not indicate when the student is ready to take the GED
Practice test. This is why I use skills, not TABE scores, to decide when a student should take the GED Practice test. I have had some students with a grade level score in Language of 5.2 take the GED Practice test and pass it, then go on to take and pass the Official GED test with high scores. As of February's testing, our program has 67 graduates, 42 of them with scores of over 500, and we have 2 with scores over 700. This system works for us.

Kathy Hennessy

ABLE Coordinator

Lima City Schools


First of all, I agree that teaching the skill is the best thing for any
student. Each of my students has learned that it's better to learn the
skills b/c they will need them as they move beyond the program they are
currently in and into college classes. Those of mine who have completed and come back often tell my students just how much those skills matter and how important they are...that it's not all about passing a test, b/c passing the test means little if you have to take remedial classes in college because you didn't really learn it the first time.

Secondly, I have some comments re: your comments on GED testing below the HS level. I know at my school we are told that we should not be testing anyone on the GED if they are not at the High School level - especially if they come in with a score well below High School (such as 5.2 or 4.4, which we do see at our school) - because the reporting system relies on TABE scores, not GED scores, for the performance gains that should be demonstrated, and if there is no progress in their TABE score, it can actually count against you.

Wouldn't it make sense, if they can pass the GED Practice Test, to
post-test them on the appropriate TABE test to make sure their level
advances, as the student is clearly demonstrating it should?

I've run across the problem myself. I have a student who was at the HS
level on everything but reading; she had an 8.4 grade level placement
according to her TABE. She passed the GED with extremely high scores after about 3-4 months of hard work - she attended classes daily from 8-1 - and I've been asked to call her back in to post-test on the Reading portion of the TABE to ensure she has moved up a TABE level. She was due to be post-tested in April anyway, but she completed her GED in Feb - a few weeks before her baby was due. I also know that we have an ABE Low class, geared toward a placement level of 0-3.9, from which students in that level, or even in the next level up, ABE Intermediate, will occasionally be given the practice test and pass, yet when they go take the GED test, they don't...they could miss it by anywhere from a few points to a lot of points. The GED Examiner and I find ourselves asking a lot of questions, such as how that is possible, especially if it happens more than once with the same student/instructor. The director repeatedly reminds people to make sure that their students are at the right placement for taking the GED. We do allow for exceptions on a case by case basis, but it has to be more than "Student A feels like they can pass the test."

I also know that the words "performance based" are being used A LOT
in my program this semester, and yet it's still a battle trying to help
instructors who have always assumed that it didn't matter what the
placement score was - low, intermediate, high, GED, etc. - that they were all teaching GED classes simply b/c that was the end goal the student had. Helping students get to that goal in a progressive, stair-step manner was never really monitored...and it's kind of hard to teach people to undo what was improperly learned and done for so long. I know it depends a lot on how different programs are set up, I guess.

Katrina Hinson


We have a group of advanced (pre-GED level) students who scored in the
228-235 range on the CASAS. Out of 38 multiple choice questions, they
got from 5 to 12 questions wrong. The tutor was perplexed because she had
taught many of the content items identified on the assessment.

I looked at the common questions that 3 or more students got wrong and
concluded the issue was less the content than the combination of the
phrasing of the question and the prompt available for answering it, in which the required information could be very difficult to find.
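
The pattern described here can be surfaced mechanically once item-level responses are recorded. A minimal sketch in Python, with made-up data, assuming each student's missed item numbers are already in hand:

    from collections import Counter

    # Each set holds the item numbers one student answered incorrectly.
    missed_by_student = [{3, 7, 12}, {7, 12, 20}, {7, 15}, {2, 12}]

    # Count misses per item, then flag items missed by 3 or more students as
    # candidates for whole-group attention rather than individual review.
    miss_counts = Counter(item for missed in missed_by_student for item in missed)
    common_misses = sorted(item for item, n in miss_counts.items() if n >= 3)
    print(common_misses)   # -> [7, 12]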

The assessment I shared with the tutor was that the issue was less the
content per se than the students' ability to extract the required
information, and that what they might need is practice - on, for example,
the GED pre-test or pre-GED materials - in answering questions from those
long, dense paragraphs that we all know and love. That made sense to
her, and she will make some changes in her instructional program
accordingly.

George Demetrion


One of the things you have to remember with the TABE is that there are 5 levels of the tests - from Literacy through Advanced. In AZ we have
determined "out of range" scale scores relative to the test level. For
example, a learner could score a 602 on an E level test; using
only the scale score would say the student is at ASE II, but the
difficulty of an E level test is nowhere near ASE II. So a combination
of scale score and test level works better.
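
Expressed as code, this combination rule is a plausibility check on the scale score given the test level taken. A minimal sketch in Python; the cutoffs below are placeholders invented for illustration, not Arizona's actual determinations:

    # Hypothetical maximum credible TABE scale scores by test level
    # (L = Literacy, E = Easy, M = Medium, D = Difficult, A = Advanced).
    MAX_CREDIBLE = {"L": 500, "E": 550, "M": 620, "D": 700, "A": 812}

    def score_in_range(test_level, scale_score):
        """Flag scale scores that are implausibly high for the level taken."""
        return scale_score <= MAX_CREDIBLE[test_level]

    # A 602 on an E level test reads as ASE II on scale score alone; the
    # level check flags it, suggesting a retest at a harder level instead.
    print(score_in_range("E", 602))   # -> False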

I developed a tool that "reformatted" the item analysis that TABE has in its User's Manual, and the teachers who have used the reformatted tool really like it. Basically, for each question it has the test, level, subject and objectives, scale score for the number correct, ABE/ASE level associated with the scale score, item number (with a place for the answer if you're hand scoring), sub-skill, thinking skill, and even the grade equivalent associated with the score. Since we use the Survey, I've reformatted it for Reading, Language and Math for TABE 7 and 8. Somewhere I have it for the complete battery, but not with the scale scores.

TABE has a more sophisticated product - the Individual Diagnostic Profile. You want to figure out the most useful tool, including "ease" of use, and you can even develop a study plan. This can get you and the learner to a starting point. The TABE User's Manual is worth the investment; it has a lot of other good tools.

-Miriam Kroeger

Arizona


Some thoughts for you all:

Use of data from a single item is the LEAST reliable piece of
information provided for an individual. With that in mind, TABE is set up so that clusters of items sharing a higher level objective are reported. While these are usually not sufficient, in themselves, to target specific instruction, they can be used to target further assessment in an area or to suggest broad areas of instruction.

The current version of TABE (9 and 10) does provide a correlation to the GED. The estimate is at the 85% confidence level and also provides a single letter indicating a recommendation. "T for Test" indicates that students with a similar score passed the GED at least 85% of the time and the tested student should take the GED. "R for Review" indicates that
students with a similar score passed the GED between 50 and 84% of the
time and that review in the content area is indicated prior to the
student taking the test. "I for Instruction" means less than 50% passed and an instructional program is indicated.
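
The letter codes amount to a three-way cut on the estimated pass rate. A minimal sketch of that logic in Python (the function name is mine, not CTB's):

    def ged_recommendation(pass_rate):
        """Map the GED pass rate of similarly scoring students to TABE's
        single-letter recommendation."""
        if pass_rate >= 0.85:
            return "T"   # Test: the student should take the GED
        elif pass_rate >= 0.50:
            return "R"   # Review the content area before testing
        else:
            return "I"   # Instruction: an instructional program is indicated

    print(ged_recommendation(0.72))   # -> R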

Bill Connor

Senior Product Manager

CTB McGraw-Hill LLC


Hi Larry and everyone,

I wonder if anyone has read/used the following document:

Developing Performance Assessments for Adult Literacy Learners: A
Summary
http://www.nrsweb.org/reports/PerformanceAssessments.pdf

and can comment on it for us.

Also, the info bulletin Enhancing Performance Through Accountability
http://www.nrsweb.org/nrsBrochureFinal.pdf
notes that there are a few projects going on, one of which is developing State Report Cards. I remember that this was happening here in Massachusetts a year or two ago. Can anyone comment on their State
Report Card? What is that like? Is it helpful? Larry, can you comment on any trends you have seen, or what the experience has been like having states develop report cards? Have you found that helpful for NRS purposes? Do you find that that aids the connection between the NRS and what is happening locally?

Finally, NRS On-Line (http://www.nrs.detya.gov.au/default.htm) has some
cool resources - I got a lot out of reading the Case Studies - but I'm
stumped by one thing: Larry - it seems that all the case studies are
from Australia - can you comment on that for us? I know that Aussie has a great history and reputation of excellent adult education work, but I was not aware that some of the NRS development work happened there (?).

Thanks!

marie cora

Assessment List Moderator


Hi Marie and Everyone,

You asked about our performance assessment development summary. This
came about because several years ago, some state directors of adult
education asked OVAE for guidance on developing this type of assessment. OVAE funded a panel, convened by the National Academy of Sciences, to give advice on this; the panel held a meeting and eventually produced a book of conference papers and recommendations. OVAE also asked me to write a summary of the key development issues, which resulted in the paper on the NRS web site to which you refer. The paper has references, including the NAS panel report, for anyone who is interested in pursuing the matter seriously.

Your second question concerns state report cards. The idea first came
from the K-12 system, and a couple of years ago one draft of the WIA
reauthorization included a requirement for states to develop report
cards for their adult education and literacy systems. That requirement
was eventually dropped, but OVAE thought it was still a good idea, so
at some point in the near future we plan to provide training and resources through the NRS project to states interested in developing them.

Finally, you asked about Australia. Coincidentally, Australia also has
a "National Reporting System." However, it has nothing to do with ours
and is not like our NRS at all. It is more similar to the adult
literacy system in England.

Larry Condelli


Hi Larry and all,

Thanks for your reply.

So! Oops! Here I was looking at all these case studies wondering why
they are from Australia. However, I did find some of what I read there
pretty interesting.

Does the U.S. system have anything similar to what I saw at the Aussie
site? (case studies and stuff like that?) Is their system so different
that it can't inform us, either on a large scale or on a classroom
scale? This might be way too big a question but how are the English and Australian systems different from ours?

Can any List members tell us if they are developing a state report card?

Thanks,

marie cora


I have an additional question to throw in:

Are the English and Aussie systems standardized across their respective
countries?

For instance, in the United States all 50 states seem to handle their
Adult Basic Skills/Literacy Skills programs totally differently.
Furthermore, it's even more "localized" than that - for instance, even
though the community colleges in my state are responsible for the
literacy programs such as ABE or GED or ESL, none of the community
colleges do it the same way - there are few commonalities between the
programs. Some are said to be easier to complete, some harder, etc. Some
students leave one program to go to another one at another school b/c
they can finish in "less time."

The only common thread that I see at the moment is the NRS, where
all the programs have to report at the federal level...but then it's
like looking at apples and oranges without really seeing the differences in the individual programs themselves. One program may have "better" numbers than another simply b/c its completion requirements might be different from a neighboring program's.

Under those circumstances it would make having a "state report card"
difficult.

Our public school system here has a "report card system," but it
doesn't take into account that the school systems across the state are
evolving. They all teach the same standard course of study, but the
methods are vastly different, as is the funding, say, in the capital city
compared to the more rural or mountainous areas.

Some school systems have developed "schools of choice" where parents can choose where their child goes based on the child's interest. For instance, one school system has schools that are like mini academies: Art and Humanities, Math and Science, Academically Gifted, etc. The core classes are the same, but the "extra classes" are geared toward specific areas - be it music, band, dance, math, science, computers, etc. - whereas some schools in the state still don't have functional computers for a lot of their students. Even with the state's report card, they are only looking at the End of Grade Test scores. They are not looking at whether a school with a high success rate on the EOG provides after-school assistance or tutoring sessions for struggling students, compared to a school that doesn't provide any extra instruction for students who are struggling.

I think a "report card system" should look at more than just
test scores. I think that's why I don't like what I've learned, or
continue to learn, about the NRS. It looks at scores....numbers that,
out of their real context with the student, really don't mean that much
and don't take into account the variables that influence a student's
success beyond the ability to pass a test.

Just some thoughts.

Katrina Hinson


Marie and others,

Kansas has developed a report card for our AEFLA programs. We do not
assign grades, but we do evaluate each local program's actual
performance compared to its negotiated benchmarks (number served;
number and percentage of EFL completers; percentage of participants
obtaining a GED or adult high school diploma; percentage entering employment; percentage retaining or improving employment; percentage entering post-secondary education/training; percentage gaining the skills and knowledge necessary to become a U.S. citizen; and percentage increasing their participation in their children's literacy and educational activities). We also evaluate the local program's performance compared to the state averages on the core outcomes. Finally, we provide a short report on the local program's progress toward meeting its program improvement plan. I would be glad to share this document with anyone who is interested.

Dianne Glass


Marie and all,

As I understand it, the Australian NRS is a framework describing the
learning skills and competencies that learners should know at different
levels. It was described to me once by Australians as being more like
EFF than our NRS, except that instead of roles, the Australian NRS is
organized around what it calls "communications." So it is somewhat
like a curriculum framework.

As for report cards, they usually include NRS and other outcome
measures, with an evaluative component (performance standards, a grade) so you can judge how well the program is performing. Most report cards
also include other information on students, teachers, instruction and
providers. The format and content are quite varied, and one of the
things OVAE wants to try to do is simplify and standardize report cards.

Larry Condelli


Dear all,

I wanted to thank Larry Condelli for being our guest this past week on
the Assessment List. Thanks to everyone who made the conversation
interesting.

I hope you all have a wonderful weekend,

marie cora

Moderator, NIFL Assessment Discussion List, and

Coordinator/Developer LINCS Assessment Special Collection at
http://literacy.kent.edu/Midwest/assessment/



