- Terminology and Noncognitive Variables
- Are Formal Noncognitive Skills Assessments Possible?
- Implications for Adult Education Based on Noncognitive Assessment
- Noncognitive Skills Assessment and the GED
- Addressing Cultural Differences
- Noncognitive Skills and Distance Learning
- Noncognitive Skills and Economic Class
- Philosophy and Noncognitive Assessment
- Faking Answers on Tests
- Oregon State University Future Student Publications
Terminology and Noncognitive Variables
(Includes discussion of ESL/ESOL)
Welcome to our discussion with Dr. Patrick Kyllonen and Dr. William Sedlacek on Assessment of Noncognitive Skills in Adult Basic Education.
If you have not had the opportunity to read the complete information on this topic and our guests, please go to:
Subscribers: Do you feel that it is important to assess noncognitive skills of your adult students? Why or why not? Do you assess these skills already in your program or classroom? If so, how do you do this? What types of activities or tools do you use?
I also invite Drs. Kyllonen and Sedlacek to pose any questions or make any comments that they may have.
Please send your questions and comments to the discussion list now.
Assessment Discussion List Moderator
Dr. Kyllonen has offered us an additional resource to enhance this discussion. You can find the paper entitled:
“Personality, Motivation, and College Readiness: A Prospectus for Assessment and Development”
by Dr. Kyllonen and his colleagues at ETS at:
Please note that this paper is still in DRAFT format; many thanks to Dr. Kyllonen for sharing this resource with us before it’s even hot off the press!
Assessment Discussion List Moderator
During this discussion I have for the most part not provided specific citations, but many of them that I have alluded to can be found in this paper, which incidentally was supported by the Bill and Melinda Gates Foundation.
Hi all: I would like for Drs. Kyllonen and Sedlacek to discuss the differences between cognitive and non-cognitive aspects of psychological or mental life. Some have argued that there are no non-cognitive skills. Is that correct? Is the word "skills" applicable to non-cognitive factors that are discussed by Kyllonen and Sedlacek?
Thanks, Tom Sticht
Thanks, it's great to be here. There is obviously a lot of fuzziness in the borders between cognitive and noncognitive, and between skills, attitudes, and dispositions. Then there is the personality vs. process (trait vs. state) distinction that fits in there somewhere. On top of all that there are arguments about terminology in which it is sometimes difficult to figure out whether the dispute is over measurement methods or constructs per se. But to get to your question, noncognitive skills is a useful term. Take, for example, time management. We know from empirical research that it is difficult to differentiate time management from conscientiousness, one of the Big Five personality traits (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism). That is, highly conscientious persons tend to be good time managers. We know from lots of empirical research that personality is reasonably stable over the lifespan. At the same time, we certainly believe that it is possible to improve our time management skills, and there are many training courses available for doing just that. There are other noncognitive skills that have this kind of flavor--they're related to personality traits (empirically), but we believe they are trainable. Examples that spring to mind are teamwork, leadership, test anxiety, and even negotiation skills. All these skills are correlated with personality traits (teamwork with agreeableness and extraversion; leadership with extraversion; test anxiety with neuroticism; negotiation with extraversion), but at the same time they are thought of as skills.
Thank you Patrick,
As I was reading through your message, it brought to mind the Equipped for the Future framework which builds in all of these components of learning in a very comprehensive manner. http://eff.cls.utk.edu/
This is what potentially makes EFF exceedingly rich for adult basic education, even though the framework gets very little play outside of those directly involved with it.
I had the opportunity of introducing EFF to a group of graduate students this summer in an online course on adult education curriculum development. These students either had never heard of EFF or had only heard passing reference to it, without substantive knowledge. In our two-week study of EFF, in which we covered a great deal of ground, each of the nine students, to a person, saw incredible potential for using EFF in their courses this fall.
To All- I agree with much of what Dr. Kyllonen had to say. My approach has been to concentrate on developing dimensions, and measures of them, that work; that correlate with success in school or elsewhere and that can be used developmentally by teachers, parents, and others. I have studied and developed a specific set of noncognitive variables that I am referring to when I use the term. They are described in the article that was provided. They include self-concept, long-term goals, handling the system (including any "isms" or prejudice), and having a support person. They could be thought of as skills in that we need them to be successful and they can be improved upon. It is not important what we call them. I have used the term over many years because much of the earlier psychological literature used the term cognitive to refer to verbal and math abilities, so anything else became noncognitive- Bill
William E. Sedlacek
If noncognitive skills comprise all skills other than verbal and mathematical ones (based on early psychological literature), then this may be a good time to re-evaluate the cognitive vs. noncognitive distinction.
I say this not for the sake of changing labels or engaging in semantics, but because if, indeed, "noncognitive" skills draw (also) on cognition (as I believe), then going forward we will be working within a revised conceptual framework that may also alter how and what we teach, how and what students learn, as well as how and what we assess in the teaching-learning cycle.
Michael A. Gyori
Maui International Language School
I think everyone agrees that noncognitive (or non-cognitive) is a terrible term. For one thing, it’s obviously wrong. Noncognitive attributes certainly have a cognitive component. For another, identifying anything by what it’s not is always problematic. For a while the term noncognitive got a boost when James Heckman started using it and there is a nice labor economics literature using that term. But even he is backing away from that term now preferring to divide the noncognitive realm into personality and motivational processes (e.g., http://ideas.repec.org/p/nbr/nberwo/13810.html ). Noncognitive is still good as a search term, but clearly, it can be confusing including, as we discussed earlier, everything from extroversion to metacognition, emotional intelligence to self-regulated learning, and creativity. We have used the term personal attributes in some contexts, personal skills in others; the Conference Board et al paper “Are They Really Ready to Work?” used applied skills, and the term personality gets used a lot. We have conducted a number of focus groups and other kinds of surveys to try to figure this one out—with little to show for our efforts. Given all the attention paid to the terminology, and the simultaneous resistance to forward progress, I think we will be stuck with many labels for a long time.
Drs. Kyllonen and Sedlacek: One of the most frequently occurring outcomes of adult basic education programs is that people report that they feel more confident about themselves with regard to their abilities to learn. My question is: Does self-confidence qualify as a non-cognitive skill? And if so, does it have dimensionality such that it could be assessed at intake to a program and then later on to determine if self-confidence had increased during the course of the program?
Tom, I'm glad to read that confidence in their ability to learn occurs frequently with students in Adult Ed programs. I note a similar phenomenon in that, as students gain confidence in their ability to solve math problems, for instance, their confidence in their ability to pass the GED math test increases dramatically. I focus on this by challenging their confidence often. My classroom is very supportive and respectful and I pose the challenge in such a way that they (usually) know that I'm not trying to take them down, rather to give them opportunity to "defend their position". As an example, after a student has answered a question, often with a question mark in their voice, I'll ask if they're sure. Often I ask if they're willing to bet $10 on it. (That has become a trademark question in my classroom and I regularly remind them that I don't have the $10, essentially that it is a rhetorical question.) I wait them out, so that they have to rely on their own wit rather than rescuing them with hints, etc. They usually relish the triumph of matching wits with the teacher AND prevailing.
Tom- A pre/post model would be useful, ideally with a control section of the course that does not get the same program, to avoid a placebo or Hawthorne effect. You could also assess a longer-term effect of the course. There are many possible aspects of self-concept, and focusing on one aspect would be recommended- Bill
William E. Sedlacek
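Bill's pre/post-with-control design can be sketched numerically. Below is a minimal illustration using made-up self-confidence ratings on a 1-5 Likert scale; the section sizes, numbers, and resulting effect size are hypothetical, invented only to show how the comparison would be computed:

```python
# A minimal sketch of a pre/post design with a control section.
# All data below are hypothetical, for illustration only.

def mean(xs):
    return sum(xs) / len(xs)

# Pre- and post-program confidence ratings (1-5) for a treated section
# and a control section that did not receive the same program.
treated_pre  = [2, 3, 2, 3, 2]
treated_post = [4, 4, 3, 4, 4]
control_pre  = [2, 3, 3, 2, 3]
control_post = [3, 3, 3, 2, 3]

treated_gain = mean(treated_post) - mean(treated_pre)
control_gain = mean(control_post) - mean(control_pre)

# Subtracting the control section's gain nets out placebo or
# Hawthorne effects that would appear in both sections.
effect = treated_gain - control_gain
print(f"treated gain: {treated_gain:.2f}")
print(f"control gain: {control_gain:.2f}")
print(f"net effect:   {effect:.2f}")
```

The same arithmetic extends to a delayed follow-up measurement for the longer-term effect Bill mentions.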
Great question! According to the theory of planned behavior, intentions lead to behavior, and attitudes (along with subjective norms and perceived control) underlie intentions, so you might argue that enhanced confidence (an attitude) should promote intention to act and then behavior. Certainly there is widespread belief in the role of confidence on performance. BUT there often also seems to be a disconnect between confidence and performance. Lazar Stankov has shown that confidence can be independent of performance across a wide range of tasks. Cross-cultural work has shown that countries vary widely in their confidence, and in some cases negative correlations can be found between confidence and performance at a national level (e.g., at a given level of performance Americans on average tend to be highly confident, relative to students from certain other countries). There is a lot of attention being given to the topic of metacognition, for example metacomprehension, which concerns one's ability to appropriately calibrate one's knowledge with one's comprehension and performance level. A general finding in the area is how poor we are at metacognizing or metacomprehending. The idea is that increasing one's ability to appropriately calibrate should lead to better decision making on the student's part about when to study more and when to move on.
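The calibration idea described above, comparing one's confidence with one's actual performance, can be made concrete with a small sketch. The confidence values and outcomes below are invented for illustration; a positive bias indicates overconfidence, a negative one underconfidence:

```python
# A small sketch of confidence calibration (metacomprehension):
# compare item-by-item self-rated confidence with actual accuracy.
# Data are hypothetical, for illustration only.

confidence = [0.9, 0.8, 0.9, 0.7, 0.6]   # self-rated chance of being correct
correct    = [1,   0,   1,   0,   1]     # actual outcomes (1 = right)

mean_conf = sum(confidence) / len(confidence)
accuracy  = sum(correct) / len(correct)

# Positive bias = overconfidence; negative = underconfidence.
bias = mean_conf - accuracy
print(f"mean confidence:      {mean_conf:.2f}")
print(f"accuracy:             {accuracy:.2f}")
print(f"over/underconfidence: {bias:+.2f}")
```

A well-calibrated learner would show a bias near zero, which is the target the "calibration" training idea aims at.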
Terminology in this domain can be confusing. Historically this domain was referred to as "conative," as first articulated by Spearman back in the early 1900s.
I actually believe the work of the late Richard Snow is extremely relevant to this research and discussion. His final book, completed by his students (e.g., David Lohman) after his death, was called "Remaking the Concept of Aptitude." In the book it is argued that we need to return to the historical and classical definition of "aptitude"....not to be confused with IQ. Briefly, a person has an "aptitude" for domain X based on a particular combination of cognitive (IQ-type) abilities, personality traits, volitional characteristics, etc. It is the specific combination of cognitive+conative that makes a person have "aptitude" for a domain, occupation, task, etc.
As an applied psychometrician and test developer, I have an "aptitude" for my work: I'm relatively high in fluid intelligence and quantitative thinking (more IQ-type abilities), have good mathematical skills (more the traditional achievement domain), am high on certain personality/affective traits (openness to intellectual experiences; a tendency to be introverted and to like working alone), and on certain motivational traits (high persistence; the ability to delay immediate gratification for long-term rewards), etc.
This is a really good point—Snow also suggested (with the late Marshall Farr) a kind of triumvirate of cognitive, conative, and affective processes, corresponding roughly with thinking, striving, and feeling, all of which are important in performance in school and in the workplace. I think this triumvirate maps nicely to the trait (affective) vs. process (conative) distinction as well. This also shows up in the personality literature (e.g., McAdams) as a distinction between traits vs. motives, goals, and actions. Think for example about a person who is generally highly motivated (trait like) but not for this particular task (due to being tired or not interested). Or, the opposite, generally or typically not very conscientious at a trait level, but really turned on by a subject matter due to the inspiration of a particularly engaging teacher. It seems like an important distinction.
Greetings to all,
Patrick wrote, I think everyone agrees that noncognitive (or non-cognitive) is a terrible term. For one thing, it’s obviously wrong. Noncognitive attributes certainly have a cognitive component. For another, identifying anything by what it’s not is always problematic.
Further, I remember one contributor saying that whatever wasn't directly related to core academic skills was lumped together under the heading of "non-cognitive."
Just as the prevailing notion prior to 1983 or so of a unitary intelligence was uprooted by Howard Gardner with his notion of multiple intelligences, so has the rather unitary notion of literacy as pertaining (etymologically appropriately so) to reading and writing skills evolved into an increasingly prevalent notion of multiple literacies. Multiple Literacies and Critical Pedagogy in a Multicultural Society by Douglas Kellner at UCLA exemplifies this "paradigm shift" (see http://www.gseis.ucla.edu/faculty/kellner/essays/multipleliteraciescriticalpedagogy.pdf).
If multiple intelligences have the makings of a valid construct, then it seems clear to me that there is a close association between those intelligences (however one might choose to delineate them) and multiple literacies, at least in terms of informing instructional practices and ensuring that they are strength- rather than deficit-based.
Literacy in whatever manifestation cannot be disassociated from cognition.
In light of the above, I'd like to reiterate that I remain very challenged by the "cognitive" vs. "noncognitive" dualism, and would suggest that it has no meaningful basis either in theory or in practice. I suggest we stop thinking in and using such terms. If there is such a thing as "emotional intelligence," for example, to disassociate it from cognition makes no sense to me at all.
Words and labels can be very powerful in terms of informing educational policy, instruction, assessment, professional development, and funding requirements, to name a few that come to mind. The less we do to address lingering constructs that are tenuous at best, the longer they will linger...
Michael A. Gyori
Maui International Language School
Michael and All:
In a report on Functional Context Education (FCE) online at
I discuss a heuristic model of what I called the “architecture of the human cognitive system”, a term borrowed from folks working in artificial intelligence. In FCE that aspect of mind involved in learning and cognitive development is referred to as the human cognitive system, which includes three major subsystems: the sensory, perceptual, and memory subsystems (see Figure 5.1 in the referenced online report). The memory subsystem is further analyzed into two aspects: the knowledge base, or long-term memory, and the working, or short-term, memory. The working memory is where active thinking takes place. Thinking processes lead us to pick up information from the outside environment and the internal knowledge base and to combine these two sources of information to construct our understanding of the world at a particular moment. These cognitive information-processing activities are embedded in, preceded by, accompanied by, and followed by psycho-physiological processes (including emotions).
In this model, then, all cognitive processes take place within a context of non-cognitive processes, and the dual types of processes are co-existent. Using Tom Bever’s phrase from a conference a long time ago, “each is the sea in which the other floats”.
But even though these cognitive and non-cognitive processes are co-existent, their separate influences on learning and behavior can sometimes be separately observed. For instance, this distinction between cognitive and non-cognitive skills has been illustrated with young adults who have wanted to enlist in the military services. Generally, military policy rejects the enlistment of non-high school graduates (NHSG) because, compared to high school graduates (HSG), they have high rates of failure to complete their full term of service. However, in some cases NHSG have been permitted to enter the military, and research has indicated that if they were willing to delay coming into the military for up to seven months after they were qualified, instead of insisting on coming in as soon as possible, then their completion of their term of service was about the same as that for HSG. (This illustrates Sedlacek’s non-cognitive variable #4: prefers long-range goals to short-term or immediate needs; able to respond to deferred gratification.)
Other research indicated that the non-cognitive skill of delayed gratification could substitute for as much as 50 percentiles in cognitive skills. In this research, NHSG below the 30th percentile on the Armed Forces Qualification Test (AFQT, a literacy and numeracy test) who chose to wait seven or more months to enter the military after they were qualified had completion rates of around 68 percent, compared to 72 percent for HSG who scored above the 80th percentile on the AFQT but wanted to enter as soon as they were qualified.
In these studies then, the non-cognitive processes of delayed gratification appear to have been almost as important as completing high school and/or having high cognitive skills in influencing the persistence of these young NHSG adults in fulfilling their military obligations satisfactorily.
For many years, Dorothy Rich has been involved with the teaching of what she calls MegaSkills (see http://www.megaskillshsi.org/). According to the MegaSkills website, “MegaSkills are the attitudes, the behaviors, the habits that determine achievement in school and beyond. The MegaSkills Education Center is dedicated to building achievement, developing within each student--Confidence, Motivation, Effort, Responsibility, Initiative, Perseverance, Caring, Teamwork, Common Sense, Problem Solving, Focus, and Respect. Discover on this website how to learn and teach these MegaSkills.”
Admittedly, while sparse, there is, it seems, some research and professional wisdom that suggests that attention to non-cognitive as well as cognitive processes may be fruitful in improving retention, learning, and transfer of basic skills from the classroom to the world outside the classroom. That is why I am finding this discussion so interesting.
Tom, Michael, and all:
To add one more justification for the cog vs. noncog differentiation: there have been a couple of noteworthy meta-analyses, one on workforce skills documented in Schmidt and Hunter (1998), and another, more recently, on education outcomes (grades) by Arthur Poropat, that are worth looking at. Both meta-analyses clearly establish (a) the independence of personality (i.e., noncognitive attributes) from cognitive ability, and (b) the incremental validity of noncognitive attributes over cognitive ability in predicting outcomes. Poropat finds, for example, that intelligence (i.e., cognitive ability) is independent of conscientiousness (a noncognitive attribute pertaining to "working hard," "meeting deadlines," and "being organized"); that is, they are uncorrelated. At the same time, both variables, intelligence and conscientiousness, predict college grades in about equal measure, even while controlling for high school grades.
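The incremental-validity point can be illustrated with a toy calculation: when two standardized predictors are uncorrelated, the standard two-predictor multiple-correlation formula reduces to each predictor's squared validity adding to the other's. The correlation values below are hypothetical, chosen only to be loosely in the range such meta-analyses report, not figures from Poropat or Schmidt and Hunter:

```python
# Toy illustration of incremental validity with two uncorrelated
# predictors of grades. All correlations are hypothetical.

r_iq_gpa  = 0.35   # validity of cognitive ability for grades
r_con_gpa = 0.35   # validity of conscientiousness for grades
r_iq_con  = 0.0    # the two predictors are uncorrelated

# Multiple R^2 for two predictors (standard formula):
# R^2 = (r1^2 + r2^2 - 2*r1*r2*r12) / (1 - r12^2)
r2_both = (r_iq_gpa**2 + r_con_gpa**2
           - 2 * r_iq_gpa * r_con_gpa * r_iq_con) / (1 - r_iq_con**2)

r2_iq_alone = r_iq_gpa**2
incremental = r2_both - r2_iq_alone   # variance added by conscientiousness

print(f"R^2, ability alone:           {r2_iq_alone:.4f}")
print(f"R^2, ability + conscient.:    {r2_both:.4f}")
print(f"incremental variance explained: {incremental:.4f}")
```

With uncorrelated predictors the noncognitive measure doubles the variance explained, which is the sense in which it adds validity "over and above" cognitive ability.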
Tom and all:
Tom the research you referred to regarding delayed gratification with NHSGs resonates with me regarding my work with students studying in my class for the GED test.
One of our issues in Adult Ed, especially at the Resource Center, where we serve many clients who also are clients of one or more other social service departments, is that a client will inquire about classes, register by filling out a student information sheet, be assigned a start date and time, and will not show up for their orientation and testing session.
Given the research reported by Tom Sticht comparing NHSGs who delay gratification with HSGs who don’t want to delay gratification, I feel better about assigning GED session start dates that may reach forward over three months’ time. I also feel even greater conviction that these clients should be given the opportunity to "walk in" to any session start date so that, if any registrants don’t show up by the appointed time, their seats are given to walk-ins, first come, first served. This rewards personal initiative and gives the client more ownership of his/her education.
Adult Ed. Instructor
A question for the discussion: my university consists largely, though not entirely, of undereducated, first-generation college students whose reading/writing levels correspond roughly with intermediate ABE, with many skills gaps. As we are a career-oriented academic institution (i.e., not trades or certifications), we have instituted a career and self-awareness strand, which for transitioning first-year students must consist of gathering the cognitive and non-cognitive skills needed both to be successful in college and to prepare for a career. We have to re-tool the course next year from a four-year sequence to a two-year (freshman/senior) sequence that's ideally designed to be "1/2 self-awareness and 1/2 academic/career awareness." It's the self-awareness piece that I'd link with non-cognitive skills, with all the caveats of terminology we've discussed. I wonder: what are ways students can become "self-aware" of non-cognitive skills? In what ways does awareness of non-cognitive skills overlap with metacognitive skills and reflection on one's own learning, going toward critical thinking? What curricular elements (such as student self-assessments or administered assessments) and exercises, such as those on time management, would foster the self-awareness component? I'm very interested in the overlap between non-cognitive skills, personality types, and the ways in which academic and non-academic life habits can be predictors of success.
Thanks in advance for your assistance,
Bonnie Odiorne, PhD Director, Writing Center Adjunct Professor of English, French, First Year Transitions, Day Division and ADP
Post University, Waterbury, CT
Hi Bonnie et al,
Perhaps this sharp distinction between "cognitive" and "non-cognitive" knowledge comes out of cognitive psychology itself, especially the information-processing metaphor. The various emotive and cognitive learning functions are intimately linked in both the most erudite scientist and the most unlettered adult literacy student. How could it be otherwise?
With you I'm also working with college undergraduates on basic study skills in which time management and goal setting are as important, if not more so, ultimately, as grounding points for motivation, as those of effective note taking, reading college texts, and academic writing. I had previously mentioned the EFF framework as representing a solid blending of the "multi-intelligences," if you will, of a broad range of "multi-literacies" adults need to enact in order to thrive in the "information era" of our times.
I don't see where it even makes sense to create sharp distinctions between cognitive and non-cognitive knowledge, since motivation and interest themselves are so essential for higher-order learning of any sustained type. Again, I refer back to the root metaphor of an information-processing schema as the mindset that may ground such a distinction. I'm not saying that is the case, but it seems more than a little plausible that it may be, which would warrant further analysis and a sifting of the evidence, assuming it's a question worth pursuing in the first place.
Thanks for that EFF link from your earlier post. Definitely worth checking into.
One of the key differences between cognitive and noncognitive attributes has been in how they are assessed. Cognitive skills are typically assessed with tests—knowledge or skill probes that have right and wrong answers. Noncognitive attributes are typically assessed with ratings (self or other), most typically on Likert scales (e.g., strongly disagree to strongly agree, or very characteristic of me vs. very uncharacteristic of me). Ratings and Likert scales seem very subjective, inferior to a test in their validity, and easily faked. For example, think of the difference between a test of one’s vocabulary (e.g., define the words “perspicacious,” “protuberance,” and “parameter”) vs. a self-rating (e.g., “on a 1-10 scale, how extensive is your vocabulary?”). Which approach do we think provides a more accurate assessment of a student’s vocabulary? I think most people would say the test does.
This is the situation, more or less, with noncognitive attributes. We do not really have tests to measure them, so we are stuck with ratings. Consequently, many people are skeptical of the importance of noncognitive attributes. When you ask people why they are skeptical, it seems that they do believe that the tendency to work hard, the ability to get along with others, the ability to communicate effectively, and the quality of being able to remain calm under stress are important. It’s just that they don’t trust our methods of measuring them.
So, if only we could come up with a test of noncognitive attributes. A test like the tests we have for reading, writing, and math. That would go a long way in convincing many people that noncognitive attributes are important.
There is some work with such “more objective” tests. The leading candidate at this point is the situational judgment test. Consider this example of a situational judgment item testing one’s ability to work with others:
Your team is writing a business case for creating a new flavor of soda. You have a tight deadline, and everyone is extremely busy. You are a new employee and unsure if your section of the report clearly describes the results of the taste-test research.
Which of the following is the best course of action to take?
(A) Work as hard as you can on your section until the deadline.
(B) Write several versions of your section to submit to the team.
(C) E-mail a draft of your section to the entire team for comment.
(D) Seek out an experienced team member for advice on your draft.
In fact the College Board has completed an impressive program of research testing exactly this idea for college admissions http://professionals.collegeboard.com/profdownload/pdf/ATP_2009_Secure_Testing_AW.pdf More research needs to be done, but this seems like a very promising approach for assessing noncognitive attributes with something that is not a rating, but looks more like a test.
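For readers unfamiliar with how situational judgment items are typically scored, here is a sketch of one common approach: partial-credit weights derived from an expert panel's effectiveness ratings, rather than a single right/wrong key. The weights and scoring function below are hypothetical, not the College Board's actual procedure:

```python
# Sketch of expert-keyed partial-credit scoring for a situational
# judgment item. The weights below are hypothetical, standing in for
# an expert panel's mean effectiveness ratings of each option.

expert_weights = {"A": 0.25, "B": 0.0, "C": 0.75, "D": 1.0}

def score_item(response: str) -> float:
    """Return the partial credit for one SJT response."""
    return expert_weights[response]

# Three hypothetical test takers answer the soda-flavor item.
responses = ["D", "C", "A"]
scores = [score_item(r) for r in responses]
print(scores)
```

A test's total is then the sum of such item scores, which is what lets an SJT behave statistically like a test rather than a self-rating.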
This looks like it could be a useful tool, but how do you keep the respondents from overly inflating the items they think the employer is looking for?
Kaye Mastin Mallory
I think it's fair to say that on any assessment people can "overly inflate the items they think the employer is looking for" and in fact some of them do so, especially when the stakes are high (e.g., getting a job, getting admitted, getting the scholarship). But it's not just faking per se. It's possible that some people actually think they work well with others when in fact others don't share that view. Or they think they are hard workers when what they mean by hard work is very different from what others mean. These are the problems with self assessments. But despite that, they still work reasonably well. In fact, they are widely used in industry for personnel selection decisions (see, for example, http://www.shl.com/WhatWeDo/PersonalityAssessment/Pages/OPQQuestionnaire ).
Part of the reason they can be used in industry is that each company uses a different instrument, and test takers might guess what the employer is looking for, but they don't know for sure, and they might think it's best just to be honest, or even modest, especially because there may be a way for the company to figure out if they're faking.
This all changes if the instrument is used widely, as any selection instrument used in higher education, for example, might be (e.g., Medical college admissions, College admissions, Law School admissions), or even selection into the U.S. Army. In those cases, a huge coaching industry is already in place (think of Princeton Review, or the Arco series). If a self assessment for example for medical school were put into operation, can you imagine how a coaching company would try to figure out exactly how it was scored, what faking detection mechanisms were in place, and how to best answer all the questions? This is why there has been so much attention given in the scientific research literature to ways to detect and prevent faking.
Low stakes situations are very different. When there is no need for a student to "overly inflate" his or her scores, for example, if they are used for counseling, or figuring out what skills a student should work on, then self assessments can be extremely useful.
While I agree with Patrick that assessments of cognitive and noncognitive variables may differ in the methods of measurement, I do not believe that is the relevant point. What is important is how a measure relates to some criterion or outcome assessment. A "test" format may not work well in measuring noncognitive variables, but I encourage attempts to develop one.

What may be a bigger problem is the logic that we can have a national measurement of any type that would reflect enough of the idiosyncratic needs of a given program, school, group of students, etc. Maybe the Three Musketeers were wrong- all for one and one for all doesn't work here. You probably cannot assess variance related to diversity by trying to have a single measure that applies to all, even if we try to interpret scores from that national measure.

I have tried to develop a set of dimensions that requires some interpretation of what each dimension might mean for different situations and groups. For example, the issues that White women face in dealing with sexism are likely much different from those Asian American men might face in racism directed against them. The overall dimension is working the system and handling prejudice, but it would be hard to construct a "test"-type item or set of items that would capture what we wish. An interpretation based on some other type of response would likely be required. I have tried to provide evidence of the value of eight noncognitive dimensions and examples of how they have been measured in different ways and their resulting scores validated in different contexts. Bill
William E. Sedlacek
Hello respective and respected colleagues:
I'm new to the Literacy Domain; however, I have some experience with Career Development and Planning with respect to assessment and workforce/career/college readiness. ACT has an array of formal and informal measures based upon research as well. Allow me to suggest a review of a selected few:
Janice L. Hastings, Director
Literacy Resource Center,
Academic & Student Services, Basic Skills
NC Community College System
I'm from the ESL side of ABE. What am I missing? While I see some positive application of noncognitive attributes to GED and college-bound students, it seems to me that ESL can't really be considered part of this discussion. The instruments just aren't normed for ESL students; they are, naturally, normed for native speakers. As an example, you wrote about rating systems. Not long ago I worked on a project with elderly immigrants who spoke varying degrees of poor English. They had to rate their health--poor, fair, good, or excellent. Even with interpreters, they often couldn't distinguish between fair and good. I ended up with a three-point visual rating system--thumb up, thumb horizontal, and thumb down.
Language proficiency is the key to communicating about getting along, communicating effectively, and handling stress calmly. For ESL students who lack English fluency, American cultural competence, adequate communication skills, job security, reliable work hours, and/or ABE programs with flexible attendance policies and program hours, these external variables often present pressing to impossible barriers. Is it naive to think that Maslow's hierarchy of needs ought to be a consideration when deciding when, how, and/or whether it's useful to apply noncognitive attributes to adult ESL students?
It is true that self ratings can work well, especially when combined with 360 data (meaning ratings from one's peers and superiors as well) for personnel development, hiring, succession planning, etc. I used to work for a consulting company that did extensive work in this area. Furthermore, when comparing the behavior of "outstanding" and "average" performers, the difference between the two was always related to the "soft skills," or competencies, or emotional intelligence factors, or the non-cognitive skills. Whichever term you prefer, superior performance was not typically a function of the "hard," or technical, skills.
I do agree with Carla and others, however, that to do such rigorous assessments in the ESL world seems challenging at best. Perhaps more appropriate is what we already do.... ongoing, intuitive assessments from the intake process and teacher/student interactions to know how to best guide individual students to success.
>>Subscribers: Do you feel that it is important to assess noncognitive skills of your adult students? Why or why not?
I'm not sure that important is the word I'd personally use in my classroom situation. Regardless of what any NCSA (non cognitive skills assessment) would say about the student, that person would still be accepted into my class. Useful might be a better word.
I think an NCSA could be very useful because in our program, our funding comes from success. If a student progresses to the next educational level within our GED program, we receive state funding. Then if that same student ultimately passes the GED test, we receive funding again. We need this funding for our program to remain free to all students.
An NCSA could be indicative of which students will remain in our program long enough for them to personally succeed, and for us to get funding. A negative NCSA could alert the teacher that a student is likely to drop out, enabling the teacher to work harder at retaining that student. Furthermore, an NCSA could also be personally useful to students in helping them become more self-aware. (It isn't all about the money.)
>>Do you assess these skills already in your program or classroom? If so, how do you do this? What types of activities or tools do you use?
We don't have a formal system to measure that. We administer TABE tests for the educational level, but that's it. That's not saying that we don't judge those skills though - judging people is human nature. I can spot students who have trouble meeting deadlines, who don't work well with others, who aren't punctual, who are not likely to succeed in the program, all without an NCSA. As a teacher, it's important to recognize these traits in our students.
I've read the material on this topic, and I must say that I'm very skeptical as to whether a truly objective NCSA is really possible. I'm curious to see where this leads.
Kaye Mastin Mallory
This pretty well describes the way our program works and my feelings about non-cognitive skills assessment. I think any teacher who has some experience and truly cares about students' learning will automatically (and sometimes unconsciously) assess non-cognitive skills while doing the required cognitive assessments. Both are important in helping students succeed.
Kirksville, MO AEL
I agree with the following participants and their takes on non-cognitive assessments. Our adult ESL program also does not do any formal NCSA, and I honestly wouldn't see that as being the most useful tool anyway (although I am certainly open to having my mind changed on that one). What I can say, however, is that our program director does an excellent job at intake of assessing an individual's "life situation" (i.e., cultural background, current circumstances in the U.S., employment and/or family issues, educational background, etc.) to determine if there is anything we can do to best ensure success for that person. The teachers are made aware of any pertinent information and can give special help as necessary. I do believe this does in some ways increase student retention for our program.
Don Bosco ESL School
Alicia, Kaye, Annell,
It sounds like each of you is engaged in noncognitive assessment through observation. The issue is whether there is a need for a more formal kind of assessment. I think that is an excellent question. Or, a couple of questions: One is, do self assessments add anything to what a teacher can assess in a student? And another is would a more formal assessment add anything to casual observations, or clinical judgment?
We did a study where we asked students to rate their own personality, and for their parents to rate their child’s personality, and found that the two sets of ratings did not perfectly agree, but both added to the prediction of school grades beyond what the other rater said. Similarly, we found in another study that student self ratings, and ratings by teachers both independently predicted school grades. In general, in the literature you find that self ratings and others’ ratings are not the same thing—ratings by others are slightly more predictive of outcomes than self assessments are, but both add something unique. Also the literature on clinical judgment suggests that we’re not always as good in sizing up the other person as we think we are. For example, unstructured interviews are typically found to be uncorrelated with subsequent performance on the job.
This is not to say that ratings by others are terrible—they’re not. But they’re not perfect. For example, a typical correlation between two raters of a person on something like a letter of recommendation is only about .40. That means that different people, different teachers, for example, see different sides of a student and do not necessarily agree. So a way forward is to use multiple sources of evidence—several teachers’ judgments, a student self assessment, other indicators from the record, performance on a “situational judgment test,” and the like—whatever it takes to get more information about the student so it can be used by the teacher, by the policy maker, to help students succeed.
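Dr. Kyllonen's .40 figure also has a practical upside: averaging several independent raters raises reliability in a predictable way. A minimal sketch using the standard Spearman-Brown prophecy formula (the .40 single-rater correlation comes from the post above; the choice of rater counts is illustrative):

```python
def spearman_brown(single_rater_r: float, k: int) -> float:
    """Predicted reliability of the average of k independent raters,
    given the reliability (correlation) of a single rater."""
    return k * single_rater_r / (1 + (k - 1) * single_rater_r)

# A single pair of recommenders agrees at about r = .40 (per the post).
for k in (1, 2, 3, 5):
    print(k, round(spearman_brown(0.40, k), 2))
# → 1 0.4 / 2 0.57 / 3 0.67 / 5 0.77
```

In other words, three to five raters push a modest .40 into a range that starts to be usable, which is one argument for the "multiple sources of evidence" approach.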
I am Pam Beahan, and I just joined this discussion group. I currently teach GED (all subjects) and English at Jefferson College, in Hillsboro, Missouri. I have been teaching since 1971--mostly English--from the junior-high-school level, on up.
I have always valued, encouraged, cultivated, and hopefully modeled--but rarely formally assessed-- such noncognitive skills as self-confidence, positive attitude, creativity, open-mindedness, and cooperation. Thinking outside of the box and softening the traditional (and therefore, often stressful) classroom climate with humor are also my trademarks, noncognitively speaking.
I have often delighted in watching my students progress in some or all of these noncognitive areas. Their growth and resultant joy have usually been quite obvious to me, and I revel in celebrating with my students their increased ease in learning and their improved scores on cognitive assessments.
I totally agree with Michael A. Gyori of the Maui International Language School and this discussion group that "the very meaning of education" is to "instill in our students a love of learning and discovery."
Patrick Kyllonen: It sounds like each of you is engaged in noncognitive assessment through observation. The issue is whether there is a need for a more formal kind of assessment. I think that is an excellent question. Or, a couple of questions: One is, do self assessments add anything to what a teacher can assess in a student? And another is would a more formal assessment add anything to casual observations, or clinical judgment?
I can't speak for all programs, but for our program, I'm not sure that a more formal assessment is necessary. I realize that it could shed some light on traits we haven't personally observed, but I don't see it as necessary for the class.
Patrick Kyllonen: We did a study where we asked students to rate their own personality, and for their parents to rate their child’s personality, and found that the two sets of ratings did not perfectly agree, but both added to the prediction of school grades beyond what the other rater said. Similarly, we found in another study that student self ratings, and ratings by teachers both independently predicted school grades.
I can't even imagine where I would be now if I had listened to my mother's opinion of my personality traits - or lack thereof! I lived a terrible home life, and I was not a social child, so I am glad that these assessments weren't around when I went to school. In spite of any teacher's remarks about not being social, and my parents' low opinion of my intelligence, I was the first Mastin in 5 generations who even finished high school, let alone college!
My point is that other people's assessments are not necessarily a predictor of success. That's what worries me about assessments like the PPI. Other people's assessments can be biased or prejudicial against the student being assessed. And they can be bought as well. It isn't the same as faking, but the end result is the same - the student being assessed could be written off as not worthy of attending a grad program, or they could be overly rated.
Patrick Kyllonen: In general, in the literature you find that self ratings and others’ ratings are not the same thing—ratings by others are slightly more predictive of outcomes than self assessments are, but both add something unique.
I personally don't believe that others' ratings are so much more reliable.
Patrick Kyllonen: Also the literature on clinical judgment suggests that we’re not always as good in sizing up the other person as we think we are. For example, unstructured interviews are typically found to be uncorrelated with subsequent performance on the job.
Exactly. So why base a grad school's acceptance on something we cannot correlate?
Patrick Kyllonen: So a way forward is to use multiple sources of evidence—several teachers’ judgments, a student self assessment, other indicators from the record, performance on a “situational judgment test,” and the like—whatever it takes to get more information about the student so it can be used by the teacher, by the policy maker, to help students succeed.
I agree with you there, but then we are right back to square one, trying to come up with an objective way of measuring the qualities of a successful candidate.
Kaye Mastin Mallory
Teacher Kaye and others: Noncognitive variables can be assessed with reliability among judges of the characteristics. In the Gates Millennium Scholars program we get good inter-judge reliabilities on questions designed to elicit responses on the eight noncognitive variables in the system described in the article you were provided (go to: http://lincs.ed.gov/lincs/discussions/assessment/09noncognitive.html). Raters are trained in scoring procedures. Aside from the specific items for each variable, information from personal statements, recommendations, and other application materials is also scored; if you know what you are looking for on a dimension, it can contribute to a score on that dimension. I agree that having other people judge traits presents problems, but with the right stimuli, information on noncognitive variables can be identified and scored. More than 12,000 Gates Scholars have attended more than 1,450 different colleges and universities, including some of the most selective in the country, with an 85% retention rate. These are all people of color from low-income families. In another noncognitive scoring system, Oregon State University employs six short-answer questions; trained raters can score 20 applications an hour and achieve high reliabilities. Oregon State has over 90% White students and has increased the diversity of its entering classes, increased retention, and developed a better sense of community on campus, all with no increase in budget. There are many other examples of successful applications of the noncognitive variable system my colleagues, students, and I have developed, including interviews, essay questions, and different kinds of items. It may not always work, but with a plan to make it fit a particular situation, chances are it will. And for Teacher Kaye: I would have liked the chance to evaluate your noncognitive attributes as you presented them, not what your mother thought they were.
I bet I could have predicted you would end up a good teacher. Am I correct? Bill
William E. Sedlacek
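The inter-judge reliabilities Dr. Sedlacek describes are typically checked by correlating two trained raters' scores across a batch of applications. A minimal sketch of that check, using a plain Pearson correlation (the eight hypothetical 1-5 scores below are invented for illustration; programs like the Gates Millennium Scholars would use their own rubric data, and an intraclass correlation is often preferred in practice):

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two raters' scores on the same cases."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical 1-5 rubric scores from two trained judges on eight applications.
judge_a = [3, 4, 2, 5, 4, 3, 5, 2]
judge_b = [3, 5, 2, 4, 4, 3, 5, 1]
print(round(pearson(judge_a, judge_b), 2))  # → 0.89
```

A value in the .80s or above is what "high reliability" usually means in this context; rater training is largely about pushing this number up before scores are used operationally.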
I'm not sure that *anyone* could have predicted that! :-)
Kaye Mastin Mallory
Actually, I’m confused. Thus far this discussion seems to have focused on interesting distinctions in terminology and the conceptualization of various personality traits. But is somebody going to tell us what this has to do with the practice of adult education? That is, acknowledging that people differ greatly on various dimensions of personality that affect their behavior in many ways, WHAT IN PARTICULAR SHOULD ADULT EDUCATORS BE DOING DIFFERENTLY? Saying that they should be aware of this and somehow work it into the content and pedagogy of their work really isn’t enough. What matters most? And what should teachers/administrators do about it in programs that PRIMARILY teach basic literacy, GED preparation, and life skills ESL (perhaps in the context of college and work preparation) for 3-6 hours per week? That is, identifying the traits that make people more effective is great, but how specifically is it, or should it be, connected to an enterprise that has as its purpose LEARNING some very high-priority cognitive skills and information?
If there IS an answer to this, the personality variable I’d most like to see discussed is “motivation to participate” in formal or informal adult education programs. Every multivariate analysis of success (by any measure) in these programs comes up with a large X factor, which the researchers usually end up calling “motivation.” Darned if I know what it is or what to do about it; I and others usually talk around it in terms of solutions. Do you folks have any better ideas?
I agree with you that there is a lot of discussion of terminology, but I think that reflects the fact that these are really “early days” in our thinking systematically about the roles that noncognitive variables can play in the classroom and elsewhere. But to make the point that it’s not all just philosophical musings, let me just mention a few projects we’re engaging in that use noncognitive variables:
- Admissions—we now have developed for the first time in large-scale high-stakes use an instrument designed for helping admissions folks select students based on noncognitive factors. It’s called the Personal Potential Index, it’s a supplement to the GRE, and it is based on ratings by others.
- Self-help—we have been developing a number of assessments for middle school, high school, and community college that assesses the student, then provides feedback and action plans for that student to enable him or her to succeed. These self-help assessments are in the areas of teamwork, leadership, time management, test anxiety, and others. We’re currently running a field trial to see how effective feedback and action plans are when given to students to use on their own.
- Policy—educational administrators are very interested in the relationship between factors like school climate, teacher-student relationships, homework time, and others on student achievement. We are measuring these factors in a number of large scale assessments, and studying these relationships.
- Drop out identification and prevention—we have just begun several projects designed to look at early warning signs of dropping out, and trying to determine whether there are remedies that can lessen the likelihood that an at-risk student will follow through and drop out of school.
- Students’ beliefs about intelligence—Carol Dweck has identified two distinct attitudes students have towards their own intelligence. One, “entity theory” is that intelligence is fixed and not much can be done about it—you’re either smart enough or you’re not. The other, “incremental theory” is that intelligence is malleable and can be changed through effort and hard work. Guess which one leads to better performance? So we’re involved in a project now to identify whether teachers might hold one of those two attitudes, and if so, what the implications might be.
So, in answer to your question, Forrest, there may be a number of different implications for adult educators based on noncognitive assessments. Number 2 and 5 might be particularly promising places to start. Again, these are early days and a lot of this is research, but I believe, promising research.
Thanks, Patrick, for that summary of ETS programs in the area. I realize many are new and still being evaluated. Generally, I recommend reviewing and using a range of assessment methods aimed at a variety of variables. My work has concentrated on people with less traditional experiences in society, who may have experienced some discrimination and for whom most assessments were not designed. Also, I recommend custom-fitting an assessment to your population and situation; this is more difficult to do in a broad-based national program. The variables in my system include working the system and self-concept, as I mentioned yesterday, as well as realistic self-appraisal, having a support person, leadership, long-term goals, community, and nontraditional learning. Many of these dimensions might have some unique applications and interpretations for the people with whom many of you are working. Hence, scoring and interpreting those scores would likely work best at a school or program level. Many colleges, universities, alternative high schools, and scholarship programs have used my noncognitive system and revised and adjusted it to fit their needs. I have many examples of assessing the dimensions, and they are all available at no cost. There are also numerous publications evaluating the use of the variables in different settings. I hope this helps our discussion. Bill
William E. Sedlacek
Going back to the eight variables you mention in your paper, I was particularly interested in variables that I think resonate with adult education. The clearest examples for me are: "availability of strong support person" and "prefers long-range to short-term or immediate needs" which is seen in the NCSALL persistence studies and is frequently mentioned by practitioners.
I am primarily involved in the adult education-to-college connection, an area that gets a lot of press lately but still has few resources. If we have to pick and choose strategies wisely, then your work might suggest that it is important to encourage student leadership opportunities, help students connect with specific community service activities, and really encourage students to explore a budding area of expertise. And, the best part, these are areas that can be encouraged from day one, along with literacy development.
Cynthia Zafft, Senior Advisor
National College Transition Network
Cynthia, you raise some good points. First, some noncognitive variables may be more important for some populations. This can be determined empirically with research, but you can also do a scan and develop a profile on the eight noncognitive variables for your students to see which are important for a group, a program, or an individual. Additionally, you reinforce the point that noncognitive variables with developmental possibilities are the most useful to educators. Much of the past work with noncognitive variables may have been interesting, but the results did not suggest how we could work with individuals or groups to improve them on the dimensions. I chose variables to study that could be used in teaching, mentoring, and individual development. Bill
William E. Sedlacek
Marie Cora wrote: Do you feel that it is important to assess noncognitive skills of your adult students? Why or why not?
I am particularly interested in the noncognitive changes that take place in an adult literacy program. We do not test for these, but there are changes in people's self-esteem, their hope for their future, their job skills, their life satisfaction, and their involvement in the community as well as in their children's education. If it weren't for the economic downturn, there would be clearer improvements in employment. I have read the reports on various longitudinal studies (notably Tennessee and Oregon) that address some of these social and economic outcomes of literacy programs. We are now asking our students some questions about this before they leave, and we keep track of milestones such as citizenship, a new job, etc. I would still like some better handles on how to measure the so-called noncognitive skills.
Mary V. Gleason
CC Literacy Council
I have enjoyed the postings in this discussion, as this is a subject near and dear to my heart. Whatever testing methods are used, assessment of non-cognitive skills is, in my opinion, vital. Not that teaching will always be significantly altered, although any insight into students' abilities is helpful in the delivery of curriculum. Its real, longer-lasting effect is on students' self-perception and their understanding of how they can affect the rest of their lives. Assisting someone who has had limited success in purely academic environments to use their natural skills, or learn new ones, to mitigate limitations they thought were permanent is very powerful. Good testing and interpretation is a great foundation for student success.
Rehabilitation Instructional Specialist
Miami-Dade County Public Schools
Hello Discussion Leaders,
First, I appreciated the opportunity to be a tiny part of this interesting and important discussion. Anything we can do to help our students and our community members to be more successful is worth examination and consideration.
Next, from the pre-readings I was struck by the way one university administrator used noncognitive/affective testing as a basis for admitting students to his selective university, and through the varied tracks of this discussion that has stayed with me. I keep thinking about the post-secondary process; it is a selective, top-down operation. Like marbles in an enclosed game, there are limited slots at the top with many marbles competing for those places (and for the financial support that keeps them from being tipped out of that top spot). Those that don't land in a top spot bounce down, and down, and down, with more spots opening up at each lower level. The marbles that enter at the bottom could be considered the ABE students. It's easy to see how noncognitive/affective testing helps winnow out too many marbles when cognitive learning is assured, and it's been quite informative to learn how such testing is applied in industry. But the research seems mainly to have focused on those marbles that are in the game. What about the "odd balls" that aren't in the game, that don't fit the game's profiles, or for whom the game's rules aren't adaptive/inclusive?
Thus, after all this discussion, I'd like to hear more directly: What do you see as the benefits of applying more noncognitive/affective testing to these "odd ball" students? What are the negatives? What concerns do you have about the misapplication of testing? What should our society do with/about/for students who drop out (or are dropped out) even from the lowest level of our adult education system? Do you or how would you advocate for change in NRS policy/testing to accommodate your research?
By the way, I find that adult ESL students are often confused with native-ABE students. In your response, please take note that many ESL students don't drop out due to a lack of either cognitive or noncognitive skills. The vast majority are forced to drop out due to work conflicts; family conflicts and transportation would rank as second and third reasons. I would argue that, for most of our ESL students, simply enrolling in and attending ESL classes indicates some positive noncognitive skills. Would your answers change for this group of adult learners?
In answer to your question, how can more noncognitive testing help the students not in the selective institutions, I would like to point out too that of all college students, over half are in community college. Community colleges differ from the traditional four-year institutions in that students are older, much more likely to be “first generation,” (parents never attended college), more likely to have English as their second language, and in many other ways. We have convened a number of panels of teachers and administrators from community colleges and we have asked them how we might best assist the community through research, fair and valid assessments, and related services. The answer has been clear—students in community colleges need feedback, guidance, and action plans on a wide range of noncognitive skills, including time management (setting goals, organizing time and tasks, planning ahead), controlling of math and test anxiety, effective test taking strategies, teamwork, and study skills. We are currently conducting pilot tests with local high schools, a GED program in Philadelphia, a middle school program with ESL students, all designed to evaluate whether our interventions are effective in teaching some of these skills. We have good reason to believe that interventions should be effective, based on the research that was identified in the paper I circulated earlier this week. We hope to have results to share soon.
Patrick and Everyone:
As is my wont, I see too many variables and loose ends after reading through this week’s thread, and I am not sure how much progress has been made toward building any consensus on this. Moreover, something specific that I haven’t seen much of is an answer to this question: what do we do with the results of any type of assessment of noncognitive, affective, or personality skills and traits? Even if we could agree on the nomenclature, on the purpose of any assessment, on the variables to assess, on how to conduct the assessment, and on how to interpret the results [yes, a whole lot of obstacles], it would be rather useless if we don’t have ideas, plans, and resources for addressing the results. Intuitively, I see the need for some generalized attempt at assessment, but what can cash-strapped institutions do?
I believe that Patrick began to talk about this in his message above and I quote part of that:
“The answer has been clear—students in community colleges need feedback, guidance, and action plans on a wide range of noncognitive skills, including time management (setting goals, organizing time and tasks, planning ahead), controlling of math and test anxiety, effective test taking strategies, teamwork, and study skills. We are currently conducting pilot tests with local high schools, a GED program in Philadelphia, a middle school program with ESL students, all designed to evaluate whether our interventions are effective in teaching some of these skills.”
More needs to be discussed about how our meagerly-funded system can conduct effective “interventions” so that those who are evaluated in need can improve and thus do better on the educational pathways, whatever those might be.
Allan D. French
ESL Instructor and Assessment Coordinator
Basic & Transitional Studies Division
South Seattle Community College
I think tests like those under discussion can be extremely helpful, but I doubt that they'd get much traction with many of our students ("I just want to get my GED!"), and I don't think many programs can afford to buy formal diagnostic assessments that aren't required by the state or federal government anyhow. However, I do think that teachers are in a constant state of formative assessment of noncognitive skills -- every time they talk with students about persistence, progress, or students' barriers to success -- and it would be very helpful to have a framework and a vocabulary for recognizing noncognitive issues and for discussing them with students (and then going on to help students develop more effective noncognitive skills).
All the things I've read as background for the discussion have been useful, but they're very complicated. I'm not sure, but I would guess that most teachers are like me in not even having heard of "the big five" before this discussion. Our teachers work hard, but they are part-time and not paid a lot; I need to be able to hand them something they can use pretty easily. Is there a simple rubric for analyzing, on an ongoing basis and in context, what noncognitive 'stuff' is going on with a student and then helping the student in the relevant area? Assuming that we aren't going to be able to implement a formal, mandatory NCSA anytime soon, what are the most effective practical strategies we can adopt today?
Debra Morris Smith
Parkway Area AEL
Thank you, Debra, for taking the time to articulate my concerns and perspective so concisely. I would love to hear what's out there that is usable for the type of adult learners to whom Debra is referring. I'm sure that many of us are teaching in programs that transition GED grads into community colleges, and our Colorado SUN/College Connection program is one such program. The anecdotal evidence alone cries out for us to help students first become cognizant of their noncognitive strengths and weaknesses, and then help them develop those further so that they can be more successful when they enter post-GED studies.
I actually think that the eight variables named by Bill (and the simple assessment that goes with them) provide a possible framework for adult educators that might be helpful in a number of ways. We would need to study them systematically...and (most likely) locally...to see which variables are most closely associated with persisting and succeeding for our students. Eventually, after some careful analyses, we could formally redesign some of the educational experiences to take advantage of what we learn.
For example, during an intake interview we might probe more deeply into the area of "leadership experience" (church, sports, etc.). Can we encourage students on a budding leadership path by giving them additional tools and/or experiences while they are with us? Can we encourage and support students to try out the role of leader if they haven't before? Is there a student leadership group in the state? I think this happens to some extent in programs already...but it might not be a formal part of the student's experience. Then, we would need to see if providing leadership tools and/or experiences makes a difference in their rate of success with us...and beyond. We are starting down this path of looking beyond academic preparation at the National College Transition Network (where I work), a non-profit group of adult education practitioners, researchers, and policy-makers interested in the adult education-to-college transition.
Regarding the comment that students just want to get their GED:
Although students say that, something strange happens when they sit down to actually take the GED tests. About 60% say they are taking the GED to continue their education (a greater percent than those who identify personal satisfaction or employment as their reason for getting the GED). That seems to hold true for those who pass the GED and those who don't. Now, it could be that those individuals who aspire to continue their education are the people we never see. People who just walk into the testing center and pass the GED. Or, perhaps, something clicks inside of people when they are at that "moment of truth"...they really do want to go on.
I forgot to include this link on student leadership councils:
I should clarify what I meant by the quote "I just want to get my GED." I didn't mean our students don't want to pursue postsecondary education, whether technical, vocational, or academic, or that we don't talk with them about their goals and options; I meant they are often resistant to classroom activities that are not directly and visibly relevant to getting enough right answers on the GED Tests to earn that credential. Getting them to focus on their noncognitive barriers to success will require a sales job on teachers' part -- it's relevant, but we'll have to show them how it is, and that might be more easily done in context, over time, and informally than officially in a formal assessment at intake. I apologize to the list for the distraction -- I didn't mean to sidetrack the conversation about noncognitive assessment with what looked like a Transitions topic!
Debra Morris Smith
I share this experience - but, we do all we can to point out the economic reality of "just getting a GED", and do not let a student pass through without showing them the wide range of short-term and long-term educational opportunities that they can pursue and break through the limitations of "just getting a GED". Fortunately, our community college has a number of short-term programs that provide 4-8 wks of training and $15/hr starting pay that also bridge into a degree and further economic opportunities.
In reading the posting, I had been thinking exactly what Debra Morris Smith wrote...
"A sales job by the teacher" is right on the spot with my thinking. I work as a teacher in a residential rehab facility. My GED students are recent high school dropouts and are typically not eager to begin the GED process. The first thing I do is talk about learning styles and multiple intelligences. I want them to know that I understand how everyone learns differently - in their own unique way - and I want to be able to teach to their learning style. After my "sales pitch" most are more than willing to take a few alternative assessments before we start the GED process.
My apologies, also. I didn't mean to take us off track, but what the side trip did get me thinking about is this: Here we are thinking about skills/attributes that might predict success in undergraduate and graduate university settings...are these also the skills/attributes that predict success in adult education? Are there others? Do any of us now have ways that students can self-assess, for instance, on how they "overcome challenges and setbacks" (from PPI)?
It does apply to adult/vocational education. For years, as a vocational rehabilitation counselor, I used reports from psychologists that contained information from both cognitive and non-cognitive tests and observations to help clients make vocational choices. These tests helped the client to see non-cognitive skills in a way that could help them address their low performance on cognitive tests. When used in the context of a learning process, not just as standardized educational test results, non-cognitive skills can be a very powerful tool for learners. Additionally, I almost always found the test results to be an accurate predictor of the educational success of my clients.
Rehabilitation Instructional Specialist
Our College Connection/SUN program has raised the bar exponentially for GED grads entering community college--we have the grads who are in college (which is co-located in our building) interacting with our current GED students--no question that we have many more students entering college because the GED students *see* their peers persevering and succeeding in college--far more powerful than teacher support alone!
Assessment, cognitive and non-cognitive, is about continual change, at every level, in increments not always measurable. Who's to say when a life event or an informal/formal assessment result will impact one's thought or developmental process (positively or negatively), long after a student has left the formal learning environment. Eventually, we all have a series of life "light bulb" experiences - the ah-ha's of "I need, now." Transition from some point to another is life itself. It's one of the many reasons I'm hooked on the delivery of the concept of life/work career development as a process - not a destination.
A process that we will revisit as often as we feel, think, or sense a need, in order to move on, move ahead, or just survive. "I just want to get my GED" is one point (one need) in the process of recognizing that change is necessary, possible, and desired. When the GED is accomplished, it may often lead to: "I can do this - so, what's next?!" The wheel can be applied to learning levels and situations in every life domain. It's all about readiness and need for change, planned or unplanned - basic adult learning principles. See page 3 at: www.ncesc1.com/lmi/publications/Career_Choices.pdf
Putting a framework around change may help make it manageable. Once we obtain a desired goal, the process starts all over, with every experience we have following the attainment of that goal. A high-quality decision-making process is often helpful, as it helps one recognize readiness and prioritize needs and goals when there appear to be so many. www.careerkey.org/asp/your_decision/high_quality_decisions.html
How can the above wheel be applied to your learning domain?
Janice L. Hastings, Director
Literacy Resource Center,
Academic & Student Services, Basic Skills
NC Community College System
Addressing Cultural Differences
(Includes information on the National Conference on Effective Transitions in Adult Education taking place in November 2009)
Since I am coming at this from the standpoint of an ESL teacher, I would like to think about adding Cultural to your Philosophical and Biological categories. When we teach American culture, we often use the metaphor of an iceberg, where only a small amount is visible. We can reverse this for teachers, since it is impossible to grasp the fine points of the many cultural groups we encounter. As an example, it might be hard to test for leadership with a person who has been taught all his life to go last and to deny compliments or praise. This is a difficult thing to teach international students who are applying for jobs in the U.S. A student who is operating on his primary culture's values is at a great disadvantage competing against aggressive American candidates, who are taught to "say something good about yourself," "toot your own horn," and "make the first move."
I would love to hear some thought about addressing cultural differences in a non-cognitive way.
I am very interested in your comments about cultural differences. For one thing, we are in the middle of preparing an international survey that will be translated into over 70 different languages, and will ask all kinds of questions to students concerning their confidence, self-esteem, anxiety, and so on. We are very concerned about asking those questions in a culturally appropriate way. Also, at my place of employment we have many, many statisticians, teachers, computer scientists, and others from other parts of the world--India, China, South America--so I can see firsthand and every day the effects of the kinds of cultural differences you're talking about--especially the "toot your own horn" issue. The Australians here call it the "tall poppy syndrome"--the tall poppy is the one that gets cut down first. But it's not just international, it's individual as well. Some Americans don't believe in tooting their own horn, and some internationals quickly learn that if you don't toot your own horn no one else will. In general, while I realize there are tremendous cultural differences, I think we sometimes fail to appreciate the wide range of individual differences as well, and in either event, individual or cultural, we fail to appreciate how adaptable and teachable we all are to new circumstances. What works in one setting might not work in another, but a student, international or domestic, simply needs to be exposed to that, to be given an opportunity to see what works, and what doesn't, and make adaptations appropriately. People at my place of employment, international workers and Americans, see that many of the most successful individuals are the ones that show up to work on time and meet their deadlines, but also take the effort to approach others and communicate their needs and interests, and also listen to the other's needs and interests. 
For some, this kind of approaching activity is easy, and for others it's like any skill, it requires lots of practice--attempts, failures, getting up off the mat and trying again.
I thought that Patrick's response was so thoughtful and speaks to all of our students that I passed it on to our entire teaching team.
Sue - Your comments and those of others on cultural differences are critical to understanding why noncognitive variables correlate with success in education, the workplace, and throughout any society. We need to measure the abilities that people have in the context in which they were learned. For international students, those who have learned to work the system in their cultures are more likely to be taught things about the new system they are in and how to work it to their advantage. Those who have not learned to negotiate the system in their native cultures will likely require a different approach. Part of this is to learn how to handle the prejudices and stereotypes that limit their development. As I have said previously, I believe we need the kind of assessment of these abilities that can be focused on the particular cultures of the students with whom you are working. I think this requires that we adapt our measures at the local level. I do not think it is likely that we can pick up on these dimensions with a national test. The work of my colleagues, my students, and myself has focused on cultural differences, including international students. The 8 noncognitive variables, and the ways of measuring them that we have developed, do consider culture, race, gender, and other aspects of diversity. Check out the article that was provided (go to: http://lincs.ed.gov/lincs/discussions/assessment/09noncognitive.html) as part of this discussion and/or raise some other questions- Bill
William E. Sedlacek
Sue Jones and all: Regarding your observations on culture, I think you are right on target. In this regard, the workshop also includes information on longitudinal changes in personality (non-cognitive) traits across the lifespan and in different cultural backgrounds. This is also included in another free workshop that I am presenting this academic year called “Bridging Cognitive Science and Adult Literacy To Life: Focus on Longitudinal Studies Across the Adult Lifespan”. In this workshop I focus on the results of a variety of qualitative and quantitative longitudinal studies from different nations that track changes in the same adults over periods from 2 months to some 40 years. These studies lead to an understanding of how adults who are considered undereducated and poorly literate, and who are given opportunities to learn, work, and grow, actually change their cognitive abilities and social status, as well as that of their families, across time.
Extensive data, both qualitative and quantitative, are reviewed showing how intervention in the lives of undereducated adults can lead to increased education, employment, and income in comparison to non-interventionist adults. Implications are drawn for working with adults from diverse backgrounds including age groups from young adults, to those at mid-life, and on to those of older adults with cognitive skills that may be in decline.
Both of these workshops provide new insights into adult cognitive and non-cognitive development, the intergenerational transfer of these types of brain processes from parents to children, and implications for assessment and instruction in adult education.
For more information on these free workshops contact me at email@example.com.
I think what is missing in this conversation is the effect of culture on AEL students. The cultural pulls that our students are caught up in can shape their personalities in so many diverse ways. If American GED students live in an isolated and deprived socioeconomic community, their world view and their sense of their place in the world can be very limited, in relation to other students who might have more opportunities to discover their own strengths and talents. Yet they will have very distinct roles in their own community that others who were not familiar with that particular community might not recognize.
I teach ESL, and many times the cultural norms in the student's own native culture can be in stark contrast to the norms in mainstream American society. I remember in one of the readings that individuality was identified as a positive attribute for a student to possess. In many cultures the group would be more significant, and any individuality would be seen as a negative. The same goes for leadership. I also think that the teacher who is interpreting the data from any test or assessment would need a lot of prior knowledge of these cultural pulls to interpret the data correctly. The attributes that have been identified as helping students achieve in our mainstream society might very well be unacceptable in their communities and cultures.
Hello Karen and all,
I believe that test bias or cultural embeddedness is in need of far more attention than it is currently receiving.
Michael A. Gyori
Maui International Language School
I agree with your comment. At ETS, and I'm sure this is true at many testing organizations, we have a fairly elaborate set of procedures for dealing with test bias and cultural embeddedness. I'm sure it would be possible to do more, but one of the things we hear time and time again from visitors (for example, our annual visiting scholars program http://www.ets.org/ which targets underrepresented groups) is comments like "I had no idea ETS took such care with fairness in its tests."
See this site for a number of relevant publications on the topic. http://ets.org/
I know the large testing organizations do give great thought to weeding out questions with bias. It is so much better than when I started teaching ESL in the 70's. At that time we had a test for beginning Vietnamese students where we were to show the student a picture and ask what was happening. The picture had a family at a table. A shiny chrome thing was on the table, as was an orange pitcher, and a box of Cheerios. Why weren't we surprised when hardly anybody knew the family was eating breakfast when a Vietnamese breakfast would be Pho.
Up until this year we were giving a CASAS test in which question number one was a lengthy listening item about layaway. Until we finally noticed that everybody was missing number one and looked into it, we didn't realize that, while layaway may be common in California, where the test was normed, it is virtually unused in the Midwest.
Personally, I would concur that cultural biases exist in standardized testing. There are a couple of topics touched on in this discussion that are more pronounced in American education than, it would seem, in any other, such as the "just get me through the GED test" group of folks as compared to the "what will this allow me to do long term" group. I think test structure, standardized or not, seems to guide students in how they believe learning will affect their employment and social potential. Adding poverty and race adds to the dilemma.
4th Grade Teacher/AEL
One of the "eight variables" in Bill's paper is "successfully handling the system." Bill, I am assuming that means that nontraditional students are more likely to succeed if they understand the system they are moving in. How much does that also mean changing to fit the system? I know there is some evidence, even from biology, that it isn't all about survival of the fittest. Do students need to change or can just knowing how the system works be enough? What do folks think?
Cynthia- You are several steps ahead of me. I am looking forward to meeting you at the November conference (see below). Perhaps others on this list will be there. You may want to announce what that is for those who don't know about it.
You are correct in assuming that sometimes we may have to help a student adapt to fit into a new system, other times we can help someone find that part of the system that works best for them, and other times we need to change what we do and reduce the barriers to developing that student.
By considering several of the noncognitive dimensions in my system simultaneously, we need to also keep the self-concept of the student positive, help them with realistic self-appraisal (sometimes they can't do everything they would like to do), help them find a community and a support person, help them to show leadership in this new context (including how they may have been a leader in a previous cultural setting), help them set long-term goals that may include returning to their previous cultural context, and help them realize that there might be new barriers to returning to a previous cultural setting. Those of you who have worked with students from other cultures will recognize these points. What I am suggesting is that there is a way to assess these dimensions to help you organize your efforts and demonstrate progress to yourself and others. Bill
William E. Sedlacek
If you are able to get to Rhode Island in November, please come and join us. We are very serious about addressing both the academic preparation of adult learners and these other factors we have been grappling with this week.
In fact, Bill Sedlacek is our keynote!
National Conference on Effective Transitions in Adult Education "Helping Adults Succeed in Postsecondary Education and Training" November 16-17, 2009 Crowne Plaza at the Crossing Providence, RI
For more information, visit our conference webpage at http://www.collegetransition.org/conference09.html
See you there, Bill,
Marie Cora wrote: Do you feel that it is important to assess noncognitive skills of your adult students? Why or why not?
I would like to focus on a particular situation in which assessing non-cognitive skills is very important: distance learning. For distance learning programs to succeed, students may need to be highly motivated, comfortable with technology (or fearless in overcoming technology obstacles), skilled at managing themselves and their learning, able to learn independently, and willing to ask for help.
There are some college level or generic online NC self assessments for those who are considering distance learning. I am interested in knowing about online learning NC assessments that are specifically designed for adult literacy (including ESOL/ESL) learners, and I wonder if any of these instruments have been reviewed or studied by our guests or others.
Distance learning isn't for all students, and successful distance learning students need cognitive and non-cognitive skills as well as certain attitudes. It's important to know what these are, and it's important to have good instruments that help adult learners (perhaps with their teachers, tutors, mentors, or coaches) make good decisions about whether a distance learning option is the right choice for that learner.
David J. Rosen
I think David’s idea about NC assessment and distance learning is very interesting. I think there might be something worth pursuing there. There are several projects in Germany (Aachen and Frankfurt) and Austria that use noncognitive self-assessment, low stakes, as part of the admissions process. Students take the assessment—which might include an interest inventory, a self assessment of academic abilities, and a self assessment of personality and attitudes—and then are given normative feedback that gives the students a sense for what a given major is all about, and what the students in that field are like. For example, a student might discover that he or she is really not all that interested in engineering after all, or that the students in this field prefer to work alone whereas he or she prefers to work with others, etc. This self-screening process turns out to be very successful as a way to reduce dropping out. I wonder, for those of you who do distance learning, whether it is conceivable that a similar sort of approach could be taken with distance education. Students could take self assessments on their interests, along with personality type measures. Similar assessments could be given to those who are successfully completing or have completed a distance education regimen. A student could discover whether he or she in fact likes to work alone without much supervision, is willing to work at odd hours, is extremely motivated, and has whatever it takes to succeed. Some students might find it useful to be given an opportunity to reflect on these kinds of factors and think carefully about whether they want to go through with it.
My host-sister in Germany told me that it was standard to be given a test at the end of the elementary school days that encompassed cognitive and noncognitive skills and prescribed the next course of their education. They were sent to secondary schools focusing on academic (the Gymnasium), technical, or vocational education. If you are (psychologically) predetermined (predestined?) to be an engineer, then you went to the school that gears its classes towards that. The test that they use might be interesting to study.
My host-sister ended up in a vocational school, while her brother ended up in a technical school. They both got good grades and were quite satisfied with what they were studying. At the time, I thought it was a little too prescriptive, but both of them are quite successful adults now.
Kaye Mastin Mallory
Dr. Kyllonen and Dr. Sedlacek.
I wonder if there has been research on the relationship between economic class and NC skills such as negotiation, assertiveness, navigation of institutions and processes such as the U.S. healthcare system. Is there evidence that low skills in these areas correlate with being raised in families with low incomes?
David J. Rosen
I don't know the answer to David's question about the relationship between class and the NC skills that he mentioned. Certainly it sounds plausible. And I do seem to recall some class-related differences in the medical realm (e.g., SES [socioeconomic status] and compliance). But it should also be noted that there do seem to be smaller class and race/ethnicity related differences in performance on assessments of noncognitive skills compared to the traditional cognitive (e.g., math and reading) types of skills. An American Psychologist article (Sackett, P.R., Schmitt, N., Ellingson, J.E., Kabin, M. (2001). High stakes testing in employment, credentialing, and higher education: Prospects in a post-affirmative-action world. American Psychologist, 56(4), 302-318.) summarized some of this literature and suggested the use of noncognitive skills in high stakes assessment as a strategy for reducing adverse impact in selection and admissions contexts.
David - Here is where my suggested noncognitive variable of "working the system" or handling an ism becomes important. The US society and all others tend to develop in a way that those with income and resources are on the inside and they do things to keep it that way, while people with less income and education are kept from knowing what to do to enhance their situations. Health care is a good example. For people with money, power, and high-paying jobs, health care works fairly well. For others it may not. There is a correlation with income and race. Helping people to see that and learn how to handle health care is a skill that can be taught if one sees it that way. Those who know that and are learning, or trying to learn, about how the system works for or against them would score highly on this noncognitive dimension- Bill
William E. Sedlacek
To All- An additional comment- Measuring noncognitive variables takes some skill to avoid "faking" or other socially desirable answers. Currently it can be done with some knowledge and effort. If we ever get formal courses on developing noncognitive variables, it will be more difficult; but at the same time, the more people learn about how to develop on these variables, the better off the society will be, and those developing ability measures will have to go on to something else, which is a kind of progress for all. Bill
William E. Sedlacek
Hello Drs. Kyllonen and Sedlacek and Tom Sticht:
I apologize for being late to this discussion; but if I may, I'd like to at least raise the question of the use of the term "non-cognitive" to associate with any skills, "aspects of psychological mental life," or even personality traits.
The point I am making is philosophical rather than psychological (which I think Tom Sticht has overlooked in his correspondence on the list). Of course the fact that "there is obviously a lot of fuzziness in the borders between cognitive and noncognitive, and between skills, attitudes, and dispositions" does not mean we cannot make psychological distinctions theoretically. We can, we have, and we should. And perhaps psychologically the terms may be helpful. Though when used in commonsense language, or even for many field specialists, the distinction between cognitive and non-cognitive, even if not thought out well, carries with it some serious philosophical implications.
That is, philosophically, conceptually distinguishing cognitive from non-cognitive suggests a division in learning and knowledge development that, I am suggesting, cannot be supported by the evidence. The distinction **as philosophical** goes way beyond fuzzy to non-existent. For instance, how can we say that being conscientious (or any of the traits, etc.) is "non-cognitive," or has little or nothing to do with learning or knowledge development and support, or horizon-expanding through learning, or of personal and ethical development connected with learning (and its intimate relation to what we mean by "cognition")?
There is, of course, remote and proximate development, and many other ways to carve out the legitimate psychological distinctions that we need to help us on our journey to understanding ourselves and what it means to be human. However, philosophically, to relegate those legitimate distinctions to the "non-cognitive" often means to relegate them to a philosophical dead space--where these issues have little or nothing to do with integrative cognitional activities--which every thinking person has--or with what we consider as known, knowable, or real; or where knowing (epistemology)--for instance, of the conscientious aspect of the person, or whatever skills or traits we want to consider here--and its back-and-forth relationship to reality is "non"-existent--not real.
Philosophically, the division is wrong-headed and (I am suggesting) passes down to others (who are or who are not so involved in specialties) the philosophical problems associated with the enlightenment thinkers. Though we haven't resolved these issues yet as unifying or correlating our fields, we need not pass them down to others as unquestioned divisions in our theoretical excursions.
Department of Education
San Diego, CA
Catherine King and all: There are lots of ways of thinking about the philosophical foundations of cognitive and non-cognitive aspects of mental life. The paper by Green cited below with his abstract and a brief excerpt makes this point clearly. It also makes clear that there are differing philosophical positions related to cognitivism and non-cognitivism. This debate has not been closed, so others must move along as best as they see possible. The philosophers cannot rescue us.
Clearly, too, there are non-cognitive neural processes, such as those of the autonomic nervous system that, while not considered part of mental life, nonetheless influence our mental life in differing ways.
So there are philosophical and biological factors, as well as psychological factors, involved in understanding distinctions among what we call cognitive and non-cognitive mental processes. All this tends to leave talk about things like perseverance, self-efficacy, motivation, conscientiousness, etc. with a degree of vagueness. Fortunately, however, most of the adult educators with whom I have worked are quite able to understand the distinctions made by the use of words like cognitive and non-cognitive and understand that they do not refer to totally distinct aspects of mental life. But they help to form different, though related, categories for talking about aspects of mental life and for following different procedures for teaching about these various aspects.
In my workshop entitled "Adult Literacy: A Focus On Cognitive and Non-Cognitive Skills and Behavior With Children’s Picture Books By Leo Lionni" I use children’s books to illustrate the eight non-cognitive factors that William Sedlacek has outlined. This helps adult literacy educators and early childhood educators address these non-cognitive factors with the parents they work with and it helps the parents work with their children to better understand the non-cognitive factors. Thus we use cognitive processes to better understand non-cognitive processes!
As I mentioned earlier, in Tom Bever's phrase, each is the sea in which the other floats!
Use Google to see the following:
Where Did the Word "Cognitive" Come From Anyway?
Christopher D. Green
Department of Psychology
North York, Ontario
(1996) Canadian Psychology, 37, 31-39
"Cognitivism is the ascendant movement in psychology these days. It reaches from cognitive psychology into social psychology, personality, psychotherapy, development, and beyond. Few psychologists know the philosophical history of the term, "cognitive," and often use it as though it were completely synonymous with "psychological" or "mental." In this paper, I trace the origins of the term "cognitive" in the ethical theories of the early 20th century, and through the logical positivistic philosophy of science of this century's middle part. In both of these settings, "cognitive" referred not primarily to the psychological but, rather, to the truth-evaluable (i.e., those propositions about which one can say that they are either true or false). I argue that, strictly speaking, cognitivism differs from traditional mentalism in being the study of only those aspects of the mental that can be subjected to truth conditional analysis (or sufficiently similar "conditions of satisfaction"). This excludes traditionally troublesome aspects of the mental such as consciousness, qualia, and (the subjective aspects of) emotion. Although cognitive science has since grown to include the study of some of these phenomena, it is important to recognize that one of the original aims of the cognitivist movement was to re-introduce belief and desire into psychology, while still protecting it from the kinds of criticism that behaviorists had used to bring down full-blown mentalism at the beginning of the century."
Part of Green's paper states:
"As fundamental a change as Moore's "non-naturalism" (as it was called) was to ethics, it did not undercut the general belief that moral claims are either true or false. By the 1930s, however, even the modest assumption that moral claims are true or false came under attack by a group of philosophers who came to be known as "noncognitivists." Noncognitive ethicists believed that moral claims are not about matters of fact at all; that, indeed, there is nothing-natural or otherwise-for them to be true or false of....."
It’s never too late to join this discussion, that’s for sure. I have a feeling it will go on for quite a while, long beyond this week.
I agree with you that noncognitive is not a good term, and one reason that’s true is that it sounds like it’s not important for education, which after all is all about increasing one’s cognition.
When our research group (on noncognitive attributes) started a few years ago, we were given the challenge of showing that these “noncognitive factors” were important for education. It is easy enough to show that they are important in themselves; for example, everyone agrees that subjective well-being (i.e., happiness) is important in and of itself. But the question is whether noncognitive attributes are important for cognition: for example, do they affect grades and standardized test scores? We now know from lots of research (several meta-analyses) that noncognitive attributes actually contribute to cognition. That is, to take one example, persons with high levels of conscientiousness (“meet deadlines,” “work hard,” “plan ahead”) do better in school than persons who are just as smart (whose cognition is just as high) but who have low levels of conscientiousness. Anyone who has taught knows of a gifted person who just doesn’t work very hard, or, conversely, one without the natural gifts who is determined to make it, no matter what. James Heckman has referred to the former as “wise guys.” Tom Sticht talked a bit about the research on GEDs that led to Heckman’s comment.
To All: The overall discussion is interesting on a number of levels, and that is a point I wish to make. There are many ways to look at noncognitive variables: philosophically, empirically, educationally, developmentally, semantically, practically, etc. No one perspective is correct and the others incorrect. If we can see multiple orientations to the topic, we may be able to choose one or more that fit our needs. Teachers working with adult students may want to focus on how noncognitive variables may help with their goals for their students, whatever the variables are called. Those trying to measure the variables may concentrate on validity and reliability issues as they are commonly defined. Others may want the terminology to make sense, and so on. So choose one or more perspectives and help those with similar foci to understand and use them in their ways. Good luck- Bill
William E. Sedlacek
We do not do any formal noncognitive assessments, yet clearly Dr. Kyllonen is correct when he states in his article that “persistence, tenacity, collegiality, communication, and enthusiasm” are hugely important to success as an adult in both the professional and personal worlds. Many adult ed teachers can and do pick out these traits, either because they are well or poorly developed, and then try to respond appropriately as teachers.
What to do, though, about the problematic issue that Dr. K raised later in the article that the self-rating approach so often leads to “fake” answers?
We are doing a number of things to deal with the faking (and the related coaching) issue:
- Ratings by others—for high stakes tests, especially if there can be a coaching industry, this seems to be the only safe approach right now. But ratings by others (peers, supervisors, teachers, parents) are interesting data in themselves. It seems that others’ ratings tend to be more predictive of outcomes than self ratings are, on average, but each provides unique information.
- Situational judgment tests—these are tests where you present a situation to the examinee (e.g., imagine a group discussion where one of the members seems to dominate but doesn’t seem to have good answers; how would you deal with that?) and then ask how he or she would handle it. These sorts of tests are becoming very popular in industry for a variety of reasons (one is that you can map the situations onto the job contexts fairly nicely to get “content validity”), and they are becoming increasingly popular in education—the College Board recently developed and pilot tested some of these.
- Forced choice tests—where you ask the examinee (for example) which is more true of you: “work well with others” or “meet deadlines,” where the forced choice is between two equally desirable or attractive attributes. The U.S. Army is currently pilot testing this approach for large-scale military aptitude testing, which is large volume and high stakes, and for which there is a large coaching industry.
- Experimental approaches—there are a number of these that are really in the more basic research stage, such as reaction time methods and the like. No one knows if these will prove practically useful for assessment, but we are experimenting with many of them. They include assessments such as the conditional reasoning test (L. James), the implicit association test (A. Greenwald) and others.
The forced choice tests really bother me because I think that they won't or can't really give valid data because the choice *is* forced. That type of test would also lend itself to coaching, don't you think? For example, if you happen to know that deadlines are really important to whatever you are applying for, then choose that answer. If it is a team position, then working with others is really important. Slanting the test toward one's goals doesn't mean that one has those qualities.
Also, being forced to choose between "work well with others" and "meet deadlines" when one does neither particularly well isn't really a measure of that person's personality. So he checked box A. That doesn't necessarily mean that he has that personality trait.
Kaye Mastin Mallory
The forced-choice idea is pretty tricky to get right, that’s for sure. It’s an old idea, but the fundamental problem with it has always been the one you put your finger on: if you force me to choose between “works well with others” vs. “meets deadlines” and I’m terrible at both, how can you tell the difference between me and someone who is great at both but chooses the same answer (the “ipsative data” problem)? A solution being tried out in a U.S. Army field trial now (based on work by Stark, Drasgow, and Chernyshenko) is based on the idea that any dimension, such as conscientiousness, runs from extremely low to extremely high, and there is an ideal point where you see yourself, somewhere in the middle. So it’s possible to be “too conscientious”; for example, you might meet deadlines at all costs, even if something more important is competing for your attention, and that would not be good. So the testing system finds your ideal point on all these different personality dimensions. We’ll see how the system works out in the field. So far, though, although I agree with you that forced choice is problematic for the reasons you cite, in actual practice it seems to give the same conclusions about someone’s personality as the normative (not forced choice) format does. For example, Dave Bartram presented data his consulting firm collected on the two formats, and they were very similar to each other and made similar predictions about future performance of the examinees.
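[Moderator's note] The ideal-point idea described above can be sketched in a few lines of code. This is a toy illustration only, not the actual Stark, Drasgow, and Chernyshenko model (their operational work uses a more elaborate item response theory formulation); the function names, the Gaussian-shaped response curve, and the scale parameter are all simplifying assumptions made for the sketch. The idea is that endorsing a statement is most likely when your trait level sits near the statement's location and falls off in both directions, and a forced choice between two statements is resolved by their relative attractiveness:

```python
import math

def endorse_prob(theta, delta, scale=1.0):
    """Single-peaked 'ideal point' response curve (a toy Gaussian form).

    Endorsement is most likely when the respondent's trait level (theta)
    is close to the statement's location (delta), and it falls off in
    BOTH directions -- being far above the statement hurts endorsement
    just as being far below it does.
    """
    return math.exp(-((theta - delta) ** 2) / (2 * scale ** 2))

def forced_choice_prob(theta, delta_a, delta_b, scale=1.0):
    """Probability of picking statement A over statement B.

    Modeled here as the relative attractiveness of the two statements
    (a Bradley-Terry-style combination of the two endorsement curves).
    """
    pa = endorse_prob(theta, delta_a, scale)
    pb = endorse_prob(theta, delta_b, scale)
    return pa / (pa + pb)
```

On this account, a respondent whose trait level sits at 0.0 would usually prefer a statement located at 0.0 over one located at 2.0, and a respondent exactly midway between two statements is indifferent. Across many pairs, the pattern of choices can then point to an absolute location on the trait, rather than yielding purely ipsative information.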
Does anyone know if others outside of Oregon State University can access their noncognitive assessment tool and scoring methods? I think the six questions they provide would be fabulous ...
Kathleen: a number of other institutions have employed Oregon State's system. Contact Michelle Sandlin, OSU's Director of Admissions, for more information. The school has been very generous in sharing its information.
She can be reached at: firstname.lastname@example.org
William E. Sedlacek
Thank you! One thing about educators, we share well!!!
Here is the URL for Oregon State as well as the resource discussed above:
It is the Undergraduate Application Worksheet for the Insight Resume.
The six categories and questions are listed on a PDF.
Thank you for that link, Laura. In reading through those questions, I see how coaching could really slant applicants' essays into ones that would most likely get them accepted to the college.
Kaye Mastin Mallory
Not sure if this is what you are looking for, but I found this U of Oregon site about the Big Five with a couple of testing instruments.