Shades of Assessment: Expanding Research Methodologies

Stacey Lowery Bretz, Youngstown State University

Over the last seven years, at two different universities, I have spent a significant portion of my time designing new courses in chemistry and chemistry education (nine new courses, now that I stop and count!). By new, I mean one of the following: the pedagogy differed from the usual fare in a given course, new pedagogy required adding or deleting content in a course with a well-established syllabus, or the pedagogy itself was the content (i.e., chemistry education courses). In all of these cases, two important questions always arise: how will I measure what students are learning (assessment)? And how will I measure the strengths and weaknesses of this particular course (evaluation)?

As a relatively young field of scholarship, chemistry education research is still exploring the methodologies which define the field. By "define the field," I mean those measures and designs which not only answer interesting research questions but, just as importantly, provide results which other chemists find compelling with regard to improving their own teaching and their students' learning. Given the extensive training in physical science that most researchers in chemistry education bring to their work, it is not surprising to see an experimentalist approach to studying learning and teaching: control/experimental designs, manipulation of one variable at a time, randomization, statistics, and so on. While this paradigm is comfortable for chemists and therefore persuasive with regard to the validity and reliability of such studies' findings, what do you do when your study shows "no significant difference" between two groups, techniques, etc.? A specific example from my own research highlights the importance of broadening the methodologies most chemists find comfortable.

The modules developed by two of the NSF Systemic Initiatives in Chemistry (ChemLinks and the ModularChem Consortium) are now commercially available through Wiley as ChemConnections. I have taught general chemistry with modules since their inception, often using alpha versions, and eventually teaching two full years of general chemistry using all modules. Given the modules' emphasis on teaching chemistry in the context of real-world problems, I wanted to investigate their impact upon students' learning, specifically, their use of rote or meaningful learning strategies (per Novak and Ausubel). I chose to assess the impact on student learning in two different ways: the first was a Learning Approach Questionnaire (LAQ), which measured student preference for rote or meaningful learning strategies; the second was to assess student learning on each of the three midterms and the final exam by having students construct concept maps. As I learned, these two assessments would lead me to very different conclusions.

The LAQ was administered to students at the beginning of the first course in September and at the end of the second course in April. Details of the statistical comparisons and measures are available for those who are interested, but the bottom line was this: for students who took general chemistry prior to the use of the ChemConnections modules, i.e., "regular ol' plug-n-chug general chemistry," there was a significant shift toward rote memorization from September to April. For the students who studied general chemistry through modules, we had hoped to see the opposite trend. What we saw instead, however, was no statistically significant change. The modules might have arrested the "decline" we saw with the old curriculum, but they certainly didn't move students toward more meaningful learning strategies, either. Now what? Was I to conclude that the modules were worthwhile or not? This is where a second assessment proved invaluable.
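For readers curious what such a pre/post comparison looks like in practice, here is a minimal sketch in Python. It assumes paired LAQ scores (one per student at each administration), a scale where higher scores indicate a preference for meaningful learning, and a conventional 0.05 threshold; the numbers and these choices are illustrative assumptions, not details from the study.

    # Minimal sketch of a paired pre/post comparison, assuming one LAQ
    # score per student at each administration. All numbers below are
    # invented placeholders, not data from the study.
    from scipy import stats

    september = [3.2, 2.8, 3.5, 3.0, 2.9, 3.4, 3.1, 2.7]  # hypothetical pre scores
    april     = [2.9, 2.5, 3.4, 2.6, 2.8, 3.0, 2.9, 2.4]  # hypothetical post scores

    # Paired t-test: did the same students shift between administrations?
    t_stat, p_value = stats.ttest_rel(september, april)

    if p_value < 0.05:
        # Assumes higher score = stronger preference for meaningful learning
        direction = "toward rote" if sum(april) < sum(september) else "toward meaningful"
        print(f"Significant shift {direction} (t = {t_stat:.2f}, p = {p_value:.3f})")
    else:
        print(f"No statistically significant change (p = {p_value:.3f})")

A "no significant change" result from a test like this is exactly the dead end described above: the numbers alone cannot say whether nothing happened or whether the curriculum merely held the line.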

In addition to constructing concept maps for each of the exams, students were also asked to write an essay with the final exam reflecting upon their experience with concept mapping and its value (if any) in learning chemistry. Their essays were analyzed using constant comparative analysis, and three distinct positions emerged (tallied, for illustration, in the sketch following the list):

  • busywork - 23% of the students found the maps to be of no value whatsoever, other than to accumulate points because the professor assigned them
  • study tool - 44% of the students found the maps to be a useful assignment in that they prepared them for studying and identified areas they needed to "brush up on"
  • meaningful learning - 33% of the students identified concept mapping as a learning tool which let them organize and understand concepts without having to memorize what the teacher or the book said
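Purely as an illustration of how such a breakdown is tallied once constant comparative analysis has assigned every essay to a category, here is a brief sketch; the codes list below is an invented placeholder standing in for one label per coded essay.

    # Brief sketch: compute the percentage breakdown from per-essay codes.
    # The codes list is an invented placeholder, not the actual coded data.
    from collections import Counter

    codes = (["busywork"] * 23 + ["study tool"] * 44
             + ["meaningful learning"] * 33)  # hypothetical: one label per essay

    counts = Counter(codes)
    total = sum(counts.values())
    for category, n in counts.most_common():
        print(f"{category}: {100 * n / total:.0f}% of students")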

Can I still conclude that the modules provide no statistically significant improvement in student learning? No. Using an assessment that collected qualitative data provided information that would have gone undetected with only the numerical measure. Can I claim whether the modules, the concept maps, or the combination of the two was responsible for these results? No. But then again, I wasn't interested in answering a cause/effect question to begin with. And this point leads me to the current focus of my journey in assessment: teaching other chemists about less familiar assessments and how to conduct research using them.

Chemistry education research as a field is working hard to incorporate the methods of anthropology and sociology: from participant observation and interpretivism to ethnography and action research. There are important and intellectually interesting questions better answered by designs other than experiments. One of my current professional goals is to persuade fellow chemists that these methods are worth learning about, worth using, and will provide results worth trusting to improve their teaching and their students' learning.

This emphasis on expanding and informing the methodologies of my field has led to the development of an M.S. degree program in Chemistry Education here at YSU. Chemists traditionally do not learn about these research methods in their graduate training. And as high school teachers now pursue the M.S. for professional development and state licensure, these methods in particular can empower them to continually monitor and improve both their teaching and their students' learning. With the benefit of funding from FIPSE, we hope to assess the learning of these teachers as students in the M.S. program and share our findings with others interested in working with teachers in their own regions.