Assessment and General Education at JMU
Donna L. Sundre, James Madison University
In recent years, accountability has become an increasingly common theme in discussions of higher education, both within the academic community and in relation to a larger public of parents, citizens, and legislators. Partly in response to these concerns, but also in response to new methods of instructional delivery, program assessment has become increasingly important at James Madison and at many other institutions in the Commonwealth and across the country. The viewpoint of the General Education Program is that if we can articulate what we believe students ought to learn in our program, then we should be able to design research methods that give us reasonable indications of whether or not they are learning it. We would not argue that any single evaluation tool, including assessment testing, can tell the whole story, but we do believe that well-designed assessment can give us important insights into what students are learning across our program.
In general, assessment techniques are used on college campuses for three distinct but interrelated purposes:
- Measurement of individual student achievement or competency
- Measurement of program impact on groups of students
- Evaluation of the quality and effectiveness of curriculum
Within our program, we make regular use of all three functions. In Cluster One, the Information-Seeking Skills Test and the Tech Level 1 tests are designed to measure individual student proficiency in these areas. These are competency requirements that all students must pass in order to complete Cluster One. Other areas of the program have occasionally expressed an interest in moving to a competency foundation, particularly in areas such as writing achievement and mathematical achievement. For the most part, James Madison and the General Education Program have moved in the direction of establishing competency measures in skill areas rather than knowledge areas, though clearly both knowledge and skills are needed to demonstrate ability in any of these areas.
In all five clusters, there are assessment methods that attempt to measure the impact of our program on students. These are mostly multiple-choice tests, but they also include some other approaches, such as the writing portfolios we use in Cluster One. Many of the multiple-choice tests require analysis of problems or reading passages rather than repetition of factual information. Usually students take these assessment tests first as pre-tests in August of their freshman year and then again as post-tests on Assessment Day in their sophomore year. We attempt to discover whether or not there are correlations between the performance of groups of students and the courses they have taken. In general, we look for four indicators of impact:
- Change over time: students who have taken coursework in a particular cluster should exhibit greater change over time than students who have not had coursework in the cluster,
- Comparison: students who have completed a package should perform better than students who have taken only one course,
- Meeting a standard: a substantial number of students who have completed a package in the cluster should meet a standard established by the faculty in the cluster, and
- Relationship: a moderate positive correlation should exist between course grades in the cluster and assessment scores.
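The four indicators above are all simple quantitative comparisons, and they can be illustrated with a short sketch. The scores, grades, and the cutoff below are invented for illustration only; they are not JMU data, and the actual analyses would of course be run on real pre-test and post-test results.

```python
# Hypothetical sketch of the four impact indicators described above.
# All scores, grades, and the standard (cutoff) are invented examples.

def mean(xs):
    return sum(xs) / len(xs)

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Pre/post assessment scores (percent correct) for two groups of students.
completers_pre  = [52, 60, 48, 55, 63]   # completed a cluster package
completers_post = [71, 78, 66, 70, 80]
others_pre      = [54, 58, 50, 57, 61]   # little or no cluster coursework
others_post     = [58, 61, 53, 60, 64]

# 1. Change over time: completers should change more than non-completers.
change_completers = mean(completers_post) - mean(completers_pre)
change_others = mean(others_post) - mean(others_pre)

# 2. Comparison: completers should outscore students with less coursework.
comparison_gap = mean(completers_post) - mean(others_post)

# 3. Meeting a standard: share of completers at or above a faculty cutoff.
STANDARD = 70  # hypothetical faculty-set standard
met_standard = sum(s >= STANDARD for s in completers_post) / len(completers_post)

# 4. Relationship: course grades should correlate moderately with scores.
grades = [3.3, 3.0, 2.7, 3.7, 4.0]  # hypothetical course grades (GPA scale)
r = pearson_r(grades, completers_post)

print(f"change: completers {change_completers:.1f} vs others {change_others:.1f}")
print(f"post-test gap between groups: {comparison_gap:.1f}")
print(f"met the standard: {met_standard:.0%}")
print(f"grade/score correlation: r = {r:.2f}")
```

In this toy example, the program would look encouraging on all four counts: completers gain more from pre-test to post-test, outperform the comparison group, mostly meet the cutoff, and show a moderate positive grade/score correlation.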
One of the most useful functions of assessment is its ability to provide diagnostic information about student learning as well as the success of our programs. As we develop pre-tests for each cluster, we can use the results to discover what our students know and are able to do when they arrive here. Post-tests can help us determine places where our programs need improvement. For example, if groups of students who generally do well on a test do poorly in one area, it may be that our curriculum is not really addressing that area. In another example, the faculty in the Writing Program have found that the reading of portfolios from freshman writing classes has given them valuable insight into issues of consistency and common standards. Through such conversations, our assessment program contributes to the overall evaluation of the effectiveness of the curriculum in some areas.
Some people have expressed a concern that the tendency will be to "teach to the test." We hope that our policies minimize this concern. Except in certain clearly circumscribed areas, our testing programs primarily address program impact and effectiveness rather than individual student performance. Both our program and our assessment methods are founded in learning objectives, which allows for core consistency in the general message and content of a particular area while engaging the individual expertise, enthusiasm, and creativity of each faculty member. Agreement on core outcomes provides substantial room for diversity of methods and allows us to celebrate the multiple paths by which those outcomes may be accomplished.
For many years now, James Madison's assessment program has been lauded as a model for the state. More recently, our General Education Program has begun receiving similar accolades. Both Programs will remain strong with the continued development of strong and reliable assessment methods that are closely articulated with our curricular goals.