General Education Outcomes Assessment
The purpose of this report is twofold: 1) to provide an overview of the process undertaken to assess the Written Communication skills of students at Carteret Community College; and 2) to present the Results, Analysis, and Use of Results based on the data collected during that process. Following a summary of the process implemented to carry out the assessment, aggregated (College-level) results will be presented, along with a Use of Results and Action Plan. Program-level results will be provided in the Administrative Report (i.e. those results will be viewed only by relevant administrators and program CACs).
In the 2009-2010 academic year, it was determined that the College would alter its approach to assessing General Education Outcomes (Institutional Level Learning Outcomes – ILLOs). Prior to that point, the seven ILLOs had been assessed in specific, relevant courses (e.g. Written Communication in ENG 111, Computer Literacy in CIS 110). The revised process would assess ILLOs at the program level. That is, each program would be required to identify an assessment to be administered in a relevant course (i.e. late-stage or capstone, not introductory). Because of the non-linear curriculum in the AA and AS programs, assessments would be administered in the courses with the highest enrollments (AA: ENG 111; AS: BIO 110).
In Spring 2010, a timeline was identified for assessing the ILLOs. Because of the complexity of shifting this assessment process to the program level, it was decided that for 2010-2011, only one General Education Outcome (Written Communication) would be assessed. Here is the 2010-2017 timeline:
In Spring 2010, the General Education Sub-Committee devised an implementation plan for the Written Communication assessment in the 2010-2011 academic year. The plan included:
• A communications plan for all programs to understand the process
• A training plan for all readers to effectively use the rubric
• A timeline for identification of assessments (by CACs)
• A timeline for collection and scoring of assessments
• Identification and training of scoring teams – for use of rubric
In April 2011, written assessments were collected from all CACs (except Practical Nursing, whose assessment will be conducted in the summer term). The initial plan was to collect a random sample of papers from each program; however, given the total number of papers collected (N=279), it was decided that all submitted papers would be read and scored.
Throughout May, final preparations were made:
• Identification of 10 two-person scoring teams. Each team had at least one member from the academic division represented by the papers (e.g. a Nursing faculty member was on the team that scored Allied Health papers);
• Scoring sets were grouped according to relative similarities between programs;
• Sets were organized so that readers received a roughly equal number of pages (not papers) to read.
May 25, 2011 was designated as Scoring Day. Scoring teams gathered in one location to receive their sets of papers. CACs explained their assignments to the relevant team and answered any questions. Readers were allowed to leave with their papers; 75% of the sets were completed by noon, and 100% within 24 hours.
Scores were entered in a master spreadsheet (see example, below). If the scores given by the two readers on any of the four sub-components (Context, Content, Genre, Mechanics) differed by more than one point, a third reader scored the paper. Of the 279 papers, 50 required a third reader (some papers had a greater-than-one-point differential on more than one sub-component).
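The third-reader rule described above can be sketched in a few lines of code. This is a minimal illustration only, assuming a numeric rubric score for each of the four sub-components; the function and variable names are hypothetical, not part of the College's actual spreadsheet process.

```python
# Hypothetical sketch of the third-reader rule: a paper is flagged for a
# third reader if the two readers' scores on ANY sub-component differ by
# more than one point. (Numeric scale assumed for illustration.)

SUB_COMPONENTS = ["Context", "Content", "Genre", "Mechanics"]

def needs_third_reader(reader1, reader2):
    """Return True if any sub-component score differs by more than 1 point."""
    return any(abs(reader1[c] - reader2[c]) > 1 for c in SUB_COMPONENTS)

# Example: all differentials are 0 or 1, so no third reader is needed.
paper_a = ({"Context": 3, "Content": 3, "Genre": 2, "Mechanics": 3},
           {"Context": 3, "Content": 2, "Genre": 2, "Mechanics": 3})

# Example: Context differs by 2 points, so a third reader is triggered.
paper_b = ({"Context": 4, "Content": 3, "Genre": 3, "Mechanics": 4},
           {"Context": 2, "Content": 3, "Genre": 3, "Mechanics": 3})

print(needs_third_reader(*paper_a))  # False
print(needs_third_reader(*paper_b))  # True
```

Note that a single large differential on any one sub-component is enough to trigger a third reading, which is consistent with the report's observation that some papers exceeded the threshold on more than one sub-component.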
Once scores had been aggregated, averaged, and disaggregated by program, College-level results were shared with the General Education Committee on June 28, 2011. The following pages include data on College-level results as well as a preliminary Use of Results.
Preliminary Use of Results
Upon initial review of the data, here is a potential list of Uses of Results:
• Scoring: Only one program per reader. CACs become readers (of a different, but closely related, program?)
• Evaluate program-level writing. Which programs emphasize writing already? To what degree? What writing instruction do their students receive?
• Offer Writing Workshops. Via Academic Support. Free to students. Once per semester.
• Offer Writing Workshops. To instructors. How to effectively emphasize, e.g. Mechanics.
• Re-Assess Papers. Once a focal point for improvement has been determined (e.g. Mechanics), re-evaluate (a random selection of?) papers for refined definition of Mechanics, and a more specific list of weaknesses (pick the top-10?). Devise workshops for students and faculty based on this analysis.
• A more concerted effort is needed to generate broader representation of DL courses (HY & IN).
Any feedback on this report is greatly appreciated.
Feel free to send it to:
Don Staub, Director Title III