In 2010-2011, we shifted our Gen Ed Assessment program from one that gathered data in a few select courses to a broader, program-level approach. In the past, Gen Ed (a.k.a. ILLO – institutional level learning outcomes) data had been gathered via relevant courses that had the greatest number of students passing through them. For instance, we collected assessment data for our Computer Literacy outcome only in CIS 110. Likewise, Written Communication was assessed in ENG 111, and so on.
Of course, this approach had flaws. Quite often, these courses were taken by students who were early (not late) in their community college experience, which runs counter to the essence of ILLO assessment: measuring what students have learned over their time at the College. In addition, the sample was not always representative (e.g., not all programs required their students to take ACA 115, the testing ground for Personal Growth and Responsibility).
So, in 2010-2011, we shifted our approach to the program level in order to address both of these shortcomings. We began by designating 2010-2011 as a pilot year for this process and asked programs to focus on assessing only Written Communication. One of the essential pieces of this process (or so we believe) was to develop a common rubric that would be used to score all written samples collected for this assessment. Once the rubric was agreed upon, we spent the Fall 2010 semester conducting training sessions for readers on how to use it. In the Spring (May 2011), we held a scoring day, and readers used the rubric to score the sets of papers they had been assigned. Click here to view a more detailed look at the process.
Our Gen Ed Outcomes Assessment Committee decided that we would begin with the VALUE (Valid Assessment of Learning in Undergraduate Education) rubrics that had been developed under AAC&U's LEAP project. The Gen Ed committee felt that the Written Communication rubric needed tweaking for our own purposes, which resulted in this rendition that we have used locally:
Discussion: Did it work? Did we improve on the VALUE written communication rubric? Was it a flawless process?
Theoretically, the rubric should work for just about any written piece. And, for the most part, it did. Where it got a little tricky was in how we divvied up the readings. We tried to ensure that readers were scoring similar sets of papers: this team read Radiography and Respiratory Therapy papers, while that team read Boat Manufacturing and Cosmetology papers. However, this did not always work out swimmingly. In some instances, readers were scoring papers from programs that required more academic, rigorous writing skills (e.g., Paralegal) alongside papers from a program such as Basic Law Enforcement Training (BLET). A paper from Paralegal may warrant a 4 (out of 4) for Content, and so might a paper from BLET; however, in the reader's mind, these are still two distinct displays of writing ability. Our proposed solution, the next time we use a common rubric, is to have each reader score papers from only one program.