Eight instructional programs and one discipline completed the review process this past year. All have submitted their final reports to the Program Review Sub-Committee, which has, in turn, reviewed each of the final products using a rubric. A final report, based on those comments, is forthcoming.
A number of principal findings and suggestions emerged from the debriefing session. Regarding data, there appears to have been a bottleneck in handling data requests; when everyone asks for data at once, it can be overwhelming for Jennifer. One suggestion was to give each of the review teams a window during which they could work with Jennifer on their data needs. Another suggestion was to begin requesting data in May or June, so that she has more time to generate a report and the team has more time to analyze it.
In terms of the process, one suggestion was to begin in the spring. Another was that the manual was not as useful as it could be: the sample was not aligned with what was required, and section descriptions were sometimes vague. There are plenty of samples now available in the repository, but the manual itself could offer more elaboration, with relevant samples in each section. The group also agreed that the CAC, rather than the liaison, should make the final presentation; Curriculum Committee liaisons felt uncomfortable fielding questions for a program that was not theirs. Finally, the group felt that it would be useful to forgo the August orientation in favor of one-on-one/small group discussions with the leadership of each review team. One of the outcomes of that meeting would be the selection, with program input, of the full team composition.
The longest discussion centered on the review team and its roles and responsibilities. Perhaps the number one suggestion was to revise the way teams are chosen. It was suggested (and echoed) that the program under review play a greater role in selecting review team members, based on the program's needs for the review. There was general consensus that the curriculum committee liaisons had difficulty grasping the process until about the halfway point. One suggestion was to have the PRASC serve as the external readers, providing feedback throughout the year as sections are submitted. Another suggestion was for a true external stakeholder to be a review team member, thus providing a state-of-the-industry perspective.
Main points from debriefing
• Waiting for data could be frustrating
• Not knowing what data to ask for could be frustrating; part of this was the nature of the research…sometimes it wasn’t until late that you realized what data you needed
• Provide review teams with a list of available reports, so that they know what’s available, without having to go fishing for it on the L drive
• Start the process in summer – this gives you time to start pulling together data so that you have information to go on in the fall
• Some data requests went unfulfilled, and team had to shift the way it responded to certain items
• Inconsistency in sources of data (what Susan provided sometimes did not agree with what Jennifer provided)
• Suggestion: stagger the start of data requests…a window of opportunity for each program
• There should be no exceptions for faculty and staff in fulfilling the responsibility of being on a team. I was excused last year, and was in the dark this year.
• The model/sample in the manual is not aligned with what is requested
• What to provide up front for illustration: Not necessarily a template, but perhaps samples that fit broad categories (e.g. sciences, humanities, etc…) [of course, there are now 16 samples in the repository to select from].
• A template for formatting…not fill in the blanks, but one that has the basic formatting (e.g. fonts, headings, etc…) in place [ask Mary to help with this?]
• [working with Patrick to have some crossover between Phase IV pioneers and review team members…goal would be to have one or two reviews (including process) that are completely online]
• Final presentations: to encourage more dialogue, make sure the curriculum committee understands their role for this process; Or, maybe presentations in a different venue
• CAC should be making the presentation – they can field questions a lot easier than the review chair
Roles & Responsibilities (the team)
• I was confused the whole time…CAC had been involved in a review the year before, “so she did all the writing and data collection and everything…I was just kind of doing a little editing…even though I was the chair of the committee, in name, I felt like I was just following.”
• It wasn’t until half-way through the process that our chair (i.e. curriculum committee liaison) realized what their role was to be.
• [as the curriculum committee liaison] “I have a real hard problem coming up with questions for a program that I know nothing about”
• I don’t think those who were not from the program area had a clear understanding of their role. I think it was confusing for them to be the chair of a committee that’s not their program…it was confusing to them as to what they should be doing…they wanted to help, but didn’t know how. And, in frustration, the program people end up taking the whole thing and writing it themselves.
• I had an SER person who provided data. Otherwise, I wrote the entire thing.
• The SER person and the library person probably knew what they were doing, but for the others, the contribution was, “Oh yeah, that sounds good.”
• A lot of times, they are not in a position to know what to write…how to approach it…what questions to ask.
• People are busy as it is…to ask them to come in from other disciplines was busy work for them, and not useful for those who had to sit down and write it anyway.
• Have a true “outside” reader…someone from the community and the same profession who can bring educated objectivity…they know the lingo, they understand the needs of the industry, they know what’s cutting edge
• Could the faculty from the program under review choose the team…the body of people who would be of greatest use to them?
• Have the program review sub-committee review the product – in part, and finally, in whole – throughout the year, using a rubric that is the same as, or similar to, the one the PRASC is already using. They could see trends and make suggestions (clarifications, etc.) to strengthen products throughout the year, as opposed to at the end.
• The first meeting could be between Don (or the person arranging teams), the review chair, and the program chair…the three would brainstorm who would be needed. What approach does that program need to get the information that is needed?
Working toward a useful product
• The three would also work on shaping the review (within the broad parameters).
• A discussion could take place about the program…strengths, weaknesses, questions….which would provide an understanding of the process and the bigger picture (the outcome).
• Someone who could help, at least in this initial meeting, would be someone who has just gone through the process…someone who’s already been down the path.
• The [orientation] was not effective…the focus was on who was on what team…”what’s our title?” etc….
• The best feedback was when we sat down with you [Don] and we had focused discussions
• The orientation, with the PowerPoint, doesn't make sense…there's no context for us yet.
• Feedback, other than from the group, was not as consistent as it could be. I submitted everything, as prescribed, but I was not always sure it was what was needed.
• The manual should suggest the flexibility inherent in the process
• More leading questions and examples built into the manual…guide me a little bit more…particularly for the analysis sections…there needs to be more clarity about the difference between the sections preceding the Analysis, and the analysis itself.