Since our launch at the end of last year, we have offered Aqua pre-loaded with the AAC&U VALUE rubrics, along with an import service for any institution that wanted to use its own local outcomes and rubrics. All along, we were working toward giving users control over building out their own rubrics, but we didn’t rush it: we wanted that part of the product to be as easy as it could be. Making things simple can, ironically, be the hardest work we do!
With the most recent Aqua release, we have introduced a “Learning Outcomes and Rubric Library” that provides assessment coordinators with the ability to create, edit and manage their outcomes and rubrics in a simple library. You may call them learning goals, competencies or outcomes, but the model is simple and allows institutions to get started as quickly as possible:
After defining outcomes, which are grouped in sets and can be associated with different levels of the institution’s hierarchy, assessment coordinators can easily create a rubric to be used for measurement.
We have more enhancements and options coming to our Outcome and Rubric Library later this summer, but this first release has already surfaced some of the power that was always sitting under the surface of Aqua.
Institutions can now easily create outcomes and associated rubrics at different levels of the organization and use them across projects to assess student work.
The Outcome and Rubric Library is accessible through a new global navigation option, “My Organization.” This new menu is also where assessment coordinators will find the setting that lets students send work from their school’s learning management system directly to Aqua for scoring.
Selecting “Enable LMS Submissions Using LTI” gives top-level assessment coordinators a self-service tool to set up this integration with an LMS like Canvas or Moodle.
Outcome performance reports to go
Easily one of the most popular features in Aqua is the outcome performance reporting, which we designed as a powerful, but simple, way to engage with outcome performance data. The visualizations show aggregate mean performance and distributions across criteria and outcomes, and the reporting interface responds to changes in demographic filters in a snap. Translating an interface like that to a printed layout is not as simple as printing the screen: the PDF includes the visualizations, but it also formats all of the filters and score distribution counts into tables that can easily be read and referenced offline.
Any user with access to performance reports (under “View results”) can now generate a PDF of the report with a single click. The static report reflects any currently applied filters, such as a drill-down to a specific criterion or a filter by course, assignment, or student demographics. This makes it easier to print or electronically distribute the report, or to attach it as evidence in your assessment plan reports in AMS!
We have also added a second .csv format for exporting the raw outcome performance data from a project. The format we have supported since earlier this year puts the score for each student and outcome/criterion on its own row. This format is the most scalable and flexible way to move data out of Aqua into a local database, because the column headings are always the same. The new, alternate format looks more like a gradebook row, with each of a student’s scores in its own column. This is not only easier to scan and read, but also makes it easier to move the data directly into a statistics package like SPSS, with each criterion score as a variable.
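To illustrate the difference between the two layouts, here is a minimal sketch in Python of pivoting a long-format export (one row per student and criterion) into a gradebook-style one (one row per student). The column names `student_id`, `criterion`, and `score` are hypothetical stand-ins, not Aqua’s actual export headings:

```python
import csv
import io
from collections import defaultdict

# Hypothetical long-format export: one row per student/criterion score.
long_csv = """student_id,criterion,score
S001,Thesis,3
S001,Evidence,2
S002,Thesis,4
S002,Evidence,3
"""

# Pivot to a gradebook-style layout: one row per student,
# one column per criterion.
rows = list(csv.DictReader(io.StringIO(long_csv)))
criteria = sorted({r["criterion"] for r in rows})
by_student = defaultdict(dict)
for r in rows:
    by_student[r["student_id"]][r["criterion"]] = r["score"]

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["student_id"] + criteria)
for sid in sorted(by_student):
    writer.writerow([sid] + [by_student[sid].get(c, "") for c in criteria])

print(out.getvalue())
```

The long format scales because new criteria only add rows, never columns; the wide format is what a statistics package expects, with one variable per column.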
Better tracking of multiple scores
Aqua has always been optimized to support the random distribution of student work samples to evaluators for scoring. Within each project are a variety of settings to help match the scoring rules to an institution’s specific practices:
- Assessment coordinators can anonymize the submissions, hiding the name, course and assignment information;
- They can choose “unbiased scoring” to ensure student submissions are routed to evaluators who did not teach the originating course;
- Each student submission can be scored up to four times to generate data to calculate inter-rater reliability.
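To make the inter-rater reliability point concrete, here is a minimal sketch of computing Cohen’s kappa for two evaluators who scored the same submissions. The scores are invented sample data, and Aqua’s own reliability calculations may differ:

```python
from collections import Counter

# Hypothetical scores from two evaluators on the same five submissions,
# on a 0-4 rubric scale.
rater_a = [3, 2, 4, 3, 1]
rater_b = [3, 2, 3, 3, 1]

n = len(rater_a)

# Observed agreement: fraction of submissions where the raters match.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement, from each rater's marginal score frequencies.
counts_a = Counter(rater_a)
counts_b = Counter(rater_b)
expected = sum(counts_a[s] * counts_b[s] for s in counts_a) / n**2

# Cohen's kappa corrects observed agreement for chance agreement.
kappa = (observed - expected) / (1 - expected)
print(round(kappa, 3))  # -> 0.706
```

Kappa above roughly 0.6 is conventionally read as substantial agreement, which is why having two or more scores per submission matters.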
Before this release, when an assessment coordinator wanted two or more scores on their submissions, progress toward completion was somewhat opaque. An evaluation requiring two (or three, or four) scores was not considered complete, and would not show up in any progress reports, until the student submission had received all of its scores. With this release, we have surfaced those first, second, and subsequent scores as “rounds” tracked independently, so institutions can better gauge progress toward their goals.
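The idea of tracking rounds independently can be sketched as follows, using invented data. A submission counts toward round r once it has received at least r scores, so each round reports its own completion rate instead of everything waiting on the final score:

```python
# Hypothetical data: number of scores each submission has received so far,
# in a project configured to require three scores per submission.
scores_received = {"sub1": 3, "sub2": 1, "sub3": 2, "sub4": 0}
required_rounds = 3

total = len(scores_received)
progress = {}
for r in range(1, required_rounds + 1):
    progress[r] = sum(1 for n in scores_received.values() if n >= r)

for r, done in progress.items():
    print(f"Round {r}: {done}/{total} complete")
```

With the data above, round 1 is three-quarters done even though only one submission has all three scores, which is exactly the visibility that was missing before.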
This foreshadows more work coming soon that will give institutions options for controlling and customizing the scoring rounds: the ability for assessment coordinators to start and stop a round to better control progress toward scoring goals (and to respond to shifting resources), as well as the option to identify samples of artifacts for additional scores or to bring in different sets of evaluators.