Using Assessment to Strengthen Student Learning

Bridget L. Gourley, DePauw University

As I look at the application guidelines I find it difficult to pick just one of the four plenary topics on which to focus my application. I believe I bring some experience in most of the topics and a strong desire to gain knowledge related to all of them. I have worked on classroom assessment in several courses, from simple strategies like clearest point/muddiest point cards to a more comprehensive approach spanning an entire semester's course. Our department developed assessments for the students we serve at the 100-level as well as for majors, as part of the University's overall accreditation. We implemented a common lab practical in our introductory core and set standards for the percentage of students we expected to reach certain goals. For majors we developed a new approach to our senior comprehensive exam, and we are particularly pleased with that exam's benefit to students. As a result of our self-study we have completely revised our curriculum and are looking at ongoing assessment of those changes. Because of our new curriculum, I took advantage of our Faculty Fellowship Program and applied for and received one of our competitive three-year fellowships. The title of my project is "Developing a Physical Chemistry Curriculum for the Non-physical Chemist and Assessing the Effectiveness of a Comprehensive Curricular Reform." The Fellowship gives me four hours (out of 24) of course release each year, beginning with this academic year, to facilitate work on the project. My project is split between developing the new physical and theoretical chemistry courses for the curriculum and assessing the overall success of the curriculum as a whole. My first course release is this coming spring semester, which begins at the end of the month, and as a Department we have just finished our first semester of new courses. The timing of the assessment roundtable is therefore perfect, and I am very interested in attending.

As I try to refine my ideas for this essay and think about the titles of the plenary sessions, I realize I am most drawn to the topic "using assessment to strengthen student learning," which of course cannot be uncoupled from looking at the impact of assessment on faculty and students. I think I am immediately drawn to these two topics because what I most enjoy about my job is helping students master new material and skills. Having been a faculty member for almost 14 years, I realize the importance of the other topics as well: building the confidence of faculty and addressing the role and impact of assessment practices at the institutional level. I also realize the need to address impacts on faculty and students beyond those due to improved learning; for example, students may stop taking assessments seriously because they feel over-assessed. I recognize that in order to use assessment effectively to strengthen student learning, we have to look at these other issues. Finally, student learning is one measure of an effective curriculum. As a scientist, I need data to show the curriculum is effective, and assessment is the way to acquire those data. Yet curricular or programmatic assessment is often seen as a must-do to satisfy accreditation or funding agencies, and is often not linked, at least in the minds of some faculty, to assessing student learning.

Let me give a little more information about my experience with assessment. Like many faculty members interested in their teaching, I have attended teaching workshops, and assessment is always a component of those workshops. Presenters always address how you know whether a change you have made is effective. I have been thinking about assessment since I began teaching, but I have taken a more aggressive and creative approach since getting tenure, when I felt more freedom to try new things because I could afford for some of those experiments not to work. In fact, I think I learn more from the approaches that aren't effective. Often the ideas themselves aren't bad; rather, the overall implementation was not appropriate. Sometimes I can't gauge a technique's full impact until I see it evolve.

Let's look at a couple of specific examples. In my physical chemistry course I assign homework problems because I know that, given their complexity and the time they take to complete, the majority of students would not do them unless they were required. I know that the key to success in most science disciplines, but particularly in physical chemistry, is being able to apply the theories to problems. So, initially in my teaching I assigned and graded problems. Students typically submitted only the mathematical steps required to determine the answer to the question. If I was lucky, these steps were written on the page in a linear, easy-to-follow sequence. As I had been taught, I rewarded my students for the correct mathematics and answer with a good score. Yet I was frustrated that they seemed to have almost no recollection of problems they had worked and little sense of whether, in a particular context, a number was big or small, reasonable, etc. Hence, to my way of thinking, they hadn't really mastered, that is, learned, physical chemistry.

So, I developed new guidelines for homework problems. I now require students to state the initial problem and describe their approach to the solution in words before beginning the mathematics. Mathematical steps must be explained. Equations used in solving the problem must be justified, both as to where the equation comes from and why it applies in the particular situation. When students reach a final result, they are required to put it in context, again in words. With these new guidelines I am not only assessing whether students understand the material; more importantly, I am pushing them to understand why, which is really what their learning is about.

Knowing that this was different from the way students were used to being assessed, I wanted to give them every opportunity to succeed. Consequently, when I first made the switch to this style of homework, I gave two due dates for every homework set. Students turned the work in, got feedback on their write-ups and hints if they hadn't successfully completed a problem, and then had a second opportunity to earn the points that had been deducted the first time. From a workload point of view this was very hard on both the students and me. I have now adjusted the scheme: students may resubmit the first three of seven homework sets, giving them a chance to learn what is expected without being forced to do everything twice. The resubmissions on the first three sets can come in any time before Thanksgiving break, allowing students to find a lull in their workload to go back and address any deficiencies. Homework is still a great deal of work, but the payoff is significantly better. Now I almost never have a student get lost partway through the semester, unable to hang on to the material. The quality of the questions students ask in class is much higher, and by reading their explanations I have a much clearer picture of where student understanding stands on any particular topic. Some might not see this as an assessment strategy, but to me it very much is: I learn what my students know; by describing in words why they are using a particular equation and strategy, students make their understanding more concrete; and students are rewarded for their efforts. Homework grades are quite high, and exam performance has improved.
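For the quantitatively minded, here is a minimal sketch of the resubmission scoring just described, in Python. It is illustrative only: it assumes a resubmission simply recovers deducted points (the better score stands), and the function name, point values, and data layout are my own hypothetical choices, not my actual bookkeeping.

```python
# Minimal sketch of the resubmission scheme described above.
# Assumption (illustrative only): the better of the two scores stands.

RESUBMITTABLE_SETS = 3  # the first three of seven sets may be resubmitted

def final_homework_scores(first_scores, resubmissions):
    """first_scores: the seven scores as first submitted.
    resubmissions: {set_index (0-based): score on the resubmitted work}."""
    finals = list(first_scores)
    for i, score in resubmissions.items():
        if i < RESUBMITTABLE_SETS:             # only sets 1-3 are eligible
            finals[i] = max(finals[i], score)  # recover deducted points
    return finals

# Example: a student resubmits sets 1 and 3 before the Thanksgiving deadline.
print(final_homework_scores([7, 9, 6, 8, 10, 9, 8], {0: 10, 2: 9}))
# -> [10, 9, 9, 8, 10, 9, 8]
```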

Now for a simpler example: many are familiar with Classroom Assessment Techniques, or CATs; see, for example, Angelo and Cross1. One of my favorites is the clearest point/muddiest point card. At the beginning of class I pass around two different colored index cards. At the end of class I stop a few minutes early and ask students to put the clearest point of the day on one card and the muddiest (the most confusing thing from the day) on the other, and to drop them on the front desk on their way out. I scan these cards before the next class and use them as a launching point. I start by summarizing what students indicated was clear and acknowledging the value of that understanding. Then I address the things that were muddy, often reading a specific question aloud and responding to it. I find that I often use close to half of the next class addressing and building on these muddy points. Students really appreciate that we don't go on until they've come to terms with the current material. They can be anonymous in admitting confusion, and they usually learn they aren't the only person who didn't follow a particular calculation the first time. I learn what they did and didn't understand and where the misconceptions are. Some might worry that there isn't enough coverage of new material; they need not be concerned. I find that with careful planning I can, while addressing the loose ends from the previous class, also begin to present the next material. Perhaps I lose 5-10% of the new material I would otherwise cover, but I would estimate a 50% gain in comprehension, so student learning is significantly enhanced overall. In courses that are very vertical in nature I may use these cards on 75% or more of the class days. I find that, because of the way I respond to the muddiest point comments over the course of the semester, students often come to ask really good questions on the cards, questions that push their understanding to a new level.

Because of the successes of these assessments in my teaching, I am always looking for other ways to assess my students' learning so I can make on-the-spot changes to a course. That way, before the end of the semester, I know students are going to walk away with greater mastery of the material and skills than they had when they entered the course. As I think about the requirements for my courses and the workload they impose, I continually look for ways to reward students' participation in the assessment by making it worth their while, sometimes in tangible ways, e.g., the reward structure in the grading of homework, and other times in less measurable but equally satisfying ways, e.g., seeing that I immediately address things they don't understand. I am very sensitive to the fact that good assessment can't be an add-on to a course; it must be an integral component of the way the course is taught. I've learned that lesson by experience in trying different assessment techniques.

Let me now turn to a few comments about our departmental assessments related to accreditation, because in addition to the impact on students and faculty, they speak to my experience with faculty understanding of, and confidence in, assessment, and with institutionalizing assessment. As part of the University's last accreditation by the North Central Association, each department was required to develop two assessment strategies, one early in the curriculum and one for majors. Our plan was shared with many other departments, both to help those that struggled to come up with goals with specific, measurable outcomes and to show them assessments that weren't overwhelming to administer. We decided that at the introductory level we had specific goals with regard to laboratory skills, so we developed a lab practical and set standards that we felt a significant percentage of students should be able to meet after two full semesters of laboratory experience. What we found over a period of more than six years was that measurably fewer students met the standard than we expected, and that there was no clear pattern as to who met the standard and who didn't, e.g., male vs. female, students with AP credit vs. without, students in sections taught by particular faculty, etc. This helped us question whether we were really achieving our goals, or even whether we could achieve them with the existing curriculum. It also helped us let go of some previously entrenched ideas that had limited our creativity in developing a new curriculum.
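To illustrate the kind of tally this analysis involved, here is a minimal sketch in Python. The expected pass rate, field names, and data records are hypothetical stand-ins, not our actual standard or records.

```python
# Minimal sketch: pass rates on the lab practical, broken out by subgroup
# and compared with an expected standard. All names and data are hypothetical.
from collections import defaultdict

EXPECTED_PASS_RATE = 0.80  # hypothetical departmental standard

def pass_rates_by_group(results, group_key):
    """results: records like {"passed": True, "sex": "F", "ap_credit": False}."""
    passed, total = defaultdict(int), defaultdict(int)
    for r in results:
        total[r[group_key]] += 1
        passed[r[group_key]] += r["passed"]  # True counts as 1
    return {g: passed[g] / total[g] for g in total}

results = [
    {"passed": True,  "sex": "F", "ap_credit": True},
    {"passed": False, "sex": "M", "ap_credit": False},
    {"passed": True,  "sex": "M", "ap_credit": False},
]
for group, rate in pass_rates_by_group(results, "sex").items():
    status = "meets" if rate >= EXPECTED_PASS_RATE else "below"
    print(f"{group}: {rate:.0%} ({status} expected {EXPECTED_PASS_RATE:.0%})")
```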

Our other accreditation assessment is the senior comprehensive exam. All majors must satisfy a senior comprehensive requirement. Traditionally, our students took the Chemistry GRE and were required to achieve a minimum percentile. We wanted to keep an exam, but this historic approach didn't serve the Department, the faculty, or the students effectively, so we developed a new strategy. We pick a journal article out of the current literature about which we can write questions from all sub-disciplines of chemistry. The students have two weeks to read the paper, study it, and ask any of the faculty as many questions as they like. On exam day they are given a clean copy of the paper with the exam. Each faculty member contributes questions about the paper; we take turns putting the exam together, and everyone edits the exam for balance and length. In grading, each question is scored by two different faculty members on a 1-6 scale, and students must earn a 4.5 average to pass. Recognizing that this would be a new experience that would take some adjusting for some students, we set our expectations at only 50% of students passing on the first try, with four scheduled opportunities each year. All seniors begin with the first exam in the fall and continue until they pass. Certainly this assessment is work, but we value the experience for our students enough that we have continued it for more than eight years and plan to continue to do so. It is very enlightening for students to see how they need to bring together knowledge from what they have often seen as disparate courses in order to understand current chemical problems, even ones that seem focused on a specific sub-discipline. In studying for the exam, students realize how far they have come during their four years, and we get to reinforce a skill, accessing and understanding the literature, that can be hard to work into core courses in significant quantities. Clearly our departmental faculty value our approach to the comprehensive; the students certainly see it as a hurdle, but one they gain from rather than simply jump over. Because of our success with this strategy we have come to value it as an important assessment component. Since a senior comprehensive is required of all majors, it is not seen as an add-on but as an integral, institutionalized component of our program.
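For concreteness, here is a minimal sketch of that pass/fail arithmetic in Python; the data layout and function names are my own illustrative choices, not part of our departmental procedure.

```python
# Minimal sketch of the comprehensive-exam scoring described above:
# each question is scored by two faculty on a 1-6 scale, and a student
# passes with an overall average of at least 4.5. Names are hypothetical.

PASS_THRESHOLD = 4.5  # required average on the 1-6 scale

def exam_average(question_scores):
    """question_scores: for each question, the pair of 1-6 grader scores."""
    per_question = [(a + b) / 2 for a, b in question_scores]
    return sum(per_question) / len(per_question)

def passed(question_scores):
    return exam_average(question_scores) >= PASS_THRESHOLD

# Example: five questions, each graded independently by two faculty members.
scores = [(5, 4), (6, 5), (4, 4), (5, 5), (3, 4)]
print(f"average = {exam_average(scores):.2f}, passed = {passed(scores)}")
# -> average = 4.50, passed = True
```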

Most of my essay has focused on assessment to improve student learning and on how assessment impacts faculty and students. Let me close by turning to a harder question, that of programmatic assessment. One of my new challenges is to spearhead development of the assessment methods for our new curriculum. We set very clear goals for ourselves about what we wanted to achieve with this curriculum. Like all good programmatic goals, they are broad in their description. We developed a curriculum that we think addresses those broad goals; now we need to turn the goals into outcome statements that can be measured. One of my challenges is finding ways to do this without impinging on an individual faculty member's domain, which is often the most critical issue in departmental or institutional programmatic assessment. Faculty members need to be free to develop the structure of their courses in ways that best meet their vision of the programmatic goals and play to their strengths as teachers. The Department needs programmatic assessment that doesn't overburden individual faculty members or students; otherwise the assessment will be seen as an add-on rather than an integral part of improving student learning. Furthermore, in the case of a new curriculum, part of the assessment needs to be a comparison with the previous curriculum, and that comparison must rely on past data already available for other reasons, because we can't go back in time. These programmatic assessment challenges are where I am just beginning to develop new expertise, for my Department and for myself.

1 Thomas A. Angelo and K. Patricia Cross, Classroom Assessment Techniques: A Handbook for College Teachers, 2nd ed., Jossey-Bass, 1993 (ISBN 1-55542-500-3).