Assessment: The Burden of a Name
Bernard L. Madison, University of Arkansas and Mathematical Association of America
A ballad by Johnny Cash, A Boy Named Sue, chronicles a boy's growing up and the hardships that ensued because of his name. Fighting in bars and taverns and withstanding the insults of detractors seemingly give the boy character and strength as he becomes a man. However, the ballad ends with the main character's avowal to name his own son "anything but Sue!" An analogous ballad might someday be written about assessment.
Thrust onto the US higher education scene in the final two decades of the twentieth century, assessment continues to suffer mightily from misunderstanding, much of it because of the burden of its name. The other weighty contributor to this misunderstanding is assessment's cadre of early promoters -- administrators, governing boards, accrediting agencies, and legislatures. Most college faculty believed that assessment was, as the name implied, some kind of comprehensive evaluation. They knew, as did every farmer, that weighing one's produce did not hasten its readiness for market. They also knew that the motivations of the promoters of assessment were anchored in evaluation and accountability. So the lines were drawn and assessment has struggled against these misunderstandings to gain both respectability and usefulness in US higher education.
Struggling with the Name
Efforts have been made to modify the assessment rubric to better convey its true meaning. We distinguished between summative assessment and formative assessment to clarify why assessment is done. We resorted to assessment cycles to imply that assessment was a continuous process rather than a discrete event. We added prepositional phrases to clarify the purpose when we talked of assessment of student learning and assessment in the service of learning. We tried to distinguish kinds of assessment by referring to classroom assessment, large-scale assessment, and alternative assessment. Grant Wiggins authored a book entitled Educative Assessment, adding yet another modifier to the list.
Assessment of student learning, even large-scale assessment of learning in an entire academic program, is not new to US higher education. In the early years, comprehensive examinations, some using external examiners, were the norm for college degrees. The expanding enrollments of the twentieth century made large-scale assessment of learning in academic programs less practical. Consequently, most assessment of student learning was bound up in course grades, using what we now call classroom summative assessment. Most of those course grades depended on a one-dimensional evaluation process - periodic in-class examinations and sometimes comprehensive final examinations over individual courses. Many current collegiate faculty grew up with this assessment scheme and found it reasonably satisfactory, so there was no groundswell for change from that faculty. Yet that same faculty had continued to practice comprehensive large-scale assessment in some of its programs, using multi-dimensional measures of learning and achievement. The programs that attracted this comprehensive assessment were most often the terminal graduate degree programs, typically the doctoral programs.
Assessment Under Other Guises
Consider how doctoral students and new doctorates are assessed. Course grades are rarely determining; most grades are A's with a few B's. Doctoral students are judged by their participation in seminars, where they listen, discuss, and present. They are almost constantly in conversation with graduate faculty and potential thesis directors, being judged on how well they understand and being coached in areas where they need help. They are tested by faculty committees in presentations ranging from thesis design to oral examinations. They sit for written examinations over a range of courses and subject areas. Eventually they complete a significant capstone experience, writing and defending a dissertation. The assessment of doctoral students' achievement continues beyond the doctoral degree, to their employment successes (e.g., achieving tenure) and their publishing records. Most discipline faculties have no doubt who their successful doctorates are; they have elaborate assessment processes that tell them. And with each doctoral student, the process of educating new doctoral students is refined and improved. Thus the assessment is formative - an ongoing assessment cycle. Perhaps this is one reason why US graduate education is indisputably the best in the world.
So, if discipline faculties use these comprehensive schemes for their doctoral students, why not use them for their undergraduates, to assess learning in general education or in a major? The major reason is that there are far too many students to handle as they do their doctoral students. Yet most faculty do practice formative assessment, albeit unknowingly and casually, in their classrooms.
Even in the outmoded and discredited lecture method that most of us still use, formative assessment is often very much present. As we lecture, we survey the faces, looking for signs of understanding or puzzlement, and we adjust accordingly. Some of us sprinkle our lectures with generic questions such as "Do you see?" or "Is that clear?" I can remember professors of mine who inserted such a question randomly and frequently, to the point that counting its occurrences in a lecture became an amusement. Oftentimes, though, these questions represented a subliminal obligation and were not asked to elicit an answer. They were, however, a recognition that a part of teaching is gauging understanding and responding with changes in instructional methods. The perceived lack of time prevented a more substantial judgment of learning and more substantial analyses of how learning could be improved. And, of course, we were dealing with only one course, limiting our assessment accordingly. Furthermore, we knew, if we really thought about it, that the feedback from facial expressions or head nodding was unreliable. Students, too, developed habits of behavior, like my professors who reflexively asked, "Do you see?"
Responses to the Assessment Movement
Even though collegiate faculty, through their actions, showed strong belief in assessment - even formative assessment - the way assessment came to most faculties created resistance or, at best, ritualistic compliance. Some faculties at some schools, e.g., Alverno College, had adopted assessment as an integral part of their instructional programs and were thriving. Yet most models of assessment seemed not to adapt to larger, more diverse institutions, so many administrations tried to build assessment from the top down, or the bottom up, depending on how you view the hierarchy in higher education institutions. Some created, for goodness' sake, vice presidents for assessment, giving it the same status as fund-raising, computing technology, and fiscal affairs. This added fuel to the faculty belief that assessment belonged to others and that it was an unnecessary waste of resources.
The assessment movement swept aside this faculty reluctance, and assessment programs were mandated by governing boards, legislatures, and accrediting agencies. The American Association for Higher Education (AAHE) began holding annual Assessment Forums. I attended several of those in the early 1990s to try to learn about assessment. I had been appointed Chair of the Subcommittee on Assessment of the Committee on the Undergraduate Program in Mathematics of the Mathematical Association of America (MAA), and we were charged to advise MAA on assessment. Eventually, we did write guidelines for mathematics departments to follow in setting up an assessment cycle for the purpose of program improvement, and hence more student learning. We explained how one should set learning goals, devise and implement instructional strategies, measure learning, and then start all over again, using what had been learned from the experience of previous cycles. We were getting closer to the true meaning of assessment, but we were not there yet. Our assessment cycles were still described as add-ons to instructional programs.
My AAHE Forum Experiences
My experience at the AAHE Assessment Forums helped greatly with my understanding of assessment. Some of the presentations amazed me - among the most amazing were those describing curricula for graduate programs in higher education assessment. I saw little involvement by the disciplinary faculties. What I saw was a huge cottage industry on assessment being formed and thriving, external to the very core activity to which it was presumably directed: teaching and learning in colleges. I was struck by the repetition in the presentations. I was struck by my familiarity with many of the ideas and techniques in assessment programs. I was struck by the use of language - words took on meanings different from how they were understood in my discipline of mathematics. The plenary speakers were inspiring, articulate, and memorable, clearly having thought deeply about something I believed I had just discovered, while also being very knowledgeable about higher education. The whole experience was perplexing, but I wasn't sure why. I had not yet mapped the assessment they were talking about onto my experience.
What Assessment Really Is
I slowly began to realize that I had met assessment before, many times, but under different rubrics. Assessment was really a part of teaching and learning. It was just probing further along the lines of my professors' "Do you see?" It was finding complex answers to that question and going further to find ways to increase understanding. It was not something foreign or external to the teaching and learning process; it was an integral part of it. Therefore, its name was misleading, and the way of imposing it from outside the teaching and learning process was at best misguided.
Assessment is neither new nor exotic. It is, and has been, a part of every faculty member's work. All that is new is going beyond one class and one professor to ask the question "Do you see?" over a broader range of material and to probe further into how learning can be improved. So why do we need another word - one that conjures up visions of tax bills - to describe a part of teaching? Assessment should be done to enhance teaching, increase learning, and improve programs because it is a part of those processes. Its identification as something external to the process of teaching and learning has greatly hindered implementing the new and productive ideas of the assessment movement. So let's think of a better name and a better way to have disciplinary faculties claim ownership of something that is already theirs. Perhaps a name that suggests this would be helpful, such as responsive teaching.