Blog: Idea Exchange

Rethinking General Education Assessment: Aligning Tools with Methods

Brandon Combs October 25, 2017

The conversation surrounding general education, especially as it relates to accreditation, is shifting toward outcomes-based curricular structures. In response to feedback from the Higher Learning Commission (HLC), the University of Central Arkansas (UCA) redesigned its general education model from a menu-style “check sheet” to an outcomes-based structure, the UCA Core. The redesign took place over the summer of 2013 and resulted in four outcomes: Responsible Living, Diversity, Critical Inquiry, and Effective Communication. Once the new model was in place, the question shifted to “Now, how do we assess this?”

UCA went through a few iterations of assessment, ranging from faculty assessing every student on every outcome every year and submitting spreadsheets full of data, to aligning rubrics in the learning management system (LMS) and assessing all students in a single outcome. The first approach was not sustainable for fairly obvious reasons. The second yielded the idea of rotating through outcomes, but it had problems of its own: the potential for blanket scoring, a lack of calibration in scoring, and data that could not be generalized to the entire general education program. While we were still struggling with the question “How do we assess this?”, the answer became quite simple: “Leave it to the new guy.”

When I joined the team in November of 2016, my first challenge was to present a sustainable and meaningful approach to assessing the UCA Core. The goal was to build on past successes while also considering faculty workload, time on task, calibration, and the generalizability of data. Drawing on my experiences with the Multi-State Collaborative and my previous institution, I presented a model designed to address most, if not all, of these concerns.

The new process kept the rotation of outcomes and all of the basic tenets of what the UCA Core represents. My office collects student work (“artifacts”) from faculty teaching in the outcome being assessed, compiles the population of artifacts, develops a stratified random sample, and has a team score the student work. The team is recruited from faculty who teach and participate in the outcome, and they are remunerated for their time and effort.
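To make the sampling step concrete, here is a minimal sketch in Python of one way a stratified random sample like this could be drawn. It is an illustration only, not UCA’s actual procedure; the stratum field (“college”), the sampling fraction, and the artifact records are hypothetical.

```python
import random
from collections import defaultdict

def stratified_sample(artifacts, stratum_key, fraction, seed=2017):
    """Draw a proportional random sample of artifacts within each stratum.

    artifacts   -- list of dicts, each describing one piece of student work
    stratum_key -- field used to group artifacts (e.g., "college")
    fraction    -- proportion of each stratum to include (0 < fraction <= 1)
    """
    rng = random.Random(seed)

    # Group the population of artifacts by stratum.
    strata = defaultdict(list)
    for artifact in artifacts:
        strata[artifact[stratum_key]].append(artifact)

    # Sample proportionally from each stratum so every group is represented.
    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * fraction))
        sample.extend(rng.sample(group, k))
    return sample

# Hypothetical usage: sample 20% of collected artifacts, stratified by college.
population = [
    {"student_id": 1, "college": "Liberal Arts", "file": "essay1.pdf"},
    {"student_id": 2, "college": "Business", "file": "memo1.pdf"},
    # ... remaining artifacts collected from participating faculty ...
]
# sample = stratified_sample(population, "college", fraction=0.20)
```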

At this point in the process, I was fortunate to have worked with Taskstream-Tk20 in the past. UCA was already in the process of purchasing AMS, so we added Aqua to our package. UCA uses AMS to track all programmatic assessment activities, implement strategic planning documentation, track enrollment management initiatives, and support institutional value reporting. Aqua, in short, is an online scoring system in which student work or demonstrations can be evaluated against a rubric.

All of the student artifacts my office collected were put into Aqua, and the stratified random sample was submitted for evaluation. The team of faculty evaluators came together, underwent calibration (“norming”), and then scored student work over the course of three days. The faculty all reported that the system was easy to use. On my side, I monitored inter-rater reliability throughout the evaluations to ensure the generalizability of the data. After scoring was complete, the analytics engine in Aqua allowed for immediate data analysis, and because of that readily available engine I was able to complete a report on the entire process very quickly.
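An inter-rater reliability check of the kind mentioned above can be approximated with a statistic such as Cohen’s kappa. The Python sketch below is only an illustration of that sort of calculation, not Aqua’s internal method; the two raters’ score lists are made up for the example.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same set of artifacts.

    rater_a, rater_b -- equal-length lists of rubric scores (e.g., 1-4).
    Values near 1.0 indicate strong agreement; near 0, chance-level agreement.
    """
    assert len(rater_a) == len(rater_b), "Raters must score the same artifacts"
    n = len(rater_a)

    # Observed agreement: proportion of artifacts given identical scores.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected agreement by chance, from each rater's score distribution.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(
        (counts_a[score] / n) * (counts_b[score] / n)
        for score in set(rater_a) | set(rater_b)
    )

    return (observed - expected) / (1 - expected)

# Hypothetical scores from two evaluators on ten artifacts.
scores_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
scores_b = [3, 2, 3, 3, 1, 2, 3, 4, 2, 2]
print(f"Cohen's kappa: {cohens_kappa(scores_a, scores_b):.2f}")
```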

With the filtering options in Aqua, we will be able to slice the data by key demographics to search for trends—especially among our high attrition populations—and make curriculum improvements more quickly. The quick response time has made the UCA Core assessment more dynamic, responsive, and transparent, which furthers our commitment to shared governance and responsibility.
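As a rough illustration of the kind of demographic slicing described above, done here outside Aqua with hypothetical column names, scored results exported to a table could be summarized along those lines:

```python
import pandas as pd

# Hypothetical export of scored artifacts: one row per artifact,
# with a rubric score and the demographic fields used for filtering.
scores = pd.DataFrame({
    "outcome":   ["Effective Communication"] * 6,
    "score":     [3, 2, 4, 1, 2, 3],
    "first_gen": [True, True, False, True, False, False],
    "college":   ["Business", "Liberal Arts", "Business",
                  "Education", "Liberal Arts", "Education"],
})

# Mean rubric score by demographic group, to flag gaps worth a closer look.
print(scores.groupby("first_gen")["score"].agg(["mean", "count"]))
print(scores.groupby("college")["score"].agg(["mean", "count"]))
```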

The benefits we’ve seen from using Aqua for scoring include:

  • Reduced workload/time on task for faculty
  • Increased participation in assessment
  • Increased faculty engagement in assessment
  • Increased participation in professional development
  • Availability of generalizable, calibrated data
  • Production of meaningful data for department/program analysis
  • Evidence of program improvement

There are many ways to approach general education assessment, and given my experience with various processes at multiple institutions, I knew the process at UCA needed a tool like Aqua to continue the pursuit of continuous improvement and student success. Aqua will continue to be used in general education assessment to inform larger curriculum discussions that impact all undergraduate students, and it will also feed the documentation of general education performance and improvements in AMS for accreditation purposes.