Carteret Community College Title III Grant

August 10, 2011

Outcomes Assessment at CCC – 2010-2012

Filed under: Outcomes Assessment, Presentations — Donald Staub @ 4:36 pm

Click on image to download ppt

This is the presentation I made on Thursday, August 11, at the all-faculty meeting to kick off the new academic year. In the presentation, I covered a lot of ground rather quickly. The main points are:

  • Assessment Fellows: These three individuals (Sherry Faithful – A&S, Philip Morris – A&S, and Shana Olmstead – Business Technologies) have received stipends for the fall semester and were given Title III funds to travel to the Institutional Effectiveness Institute in Baltimore in July. They will serve as faculty leads in expanding and deepening the outcomes assessment process at Carteret.
  • ILLOs – 2010-2011. This is a quick and dirty review of material that has already been presented three times in various venues across campus. To learn more about this exhilarating topic, click here.
  • ILLOs – 2011-2012.  What are we doing this year?  We’ll be assessing Computer Literacy and Information Literacy.  The fall will be spent identifying the program-level courses in which assessments will take place (the big question: will Computer Lit be assessed at the program level or across the board…stay tuned).  The spring will be dedicated to gathering assessments, scoring them where necessary, and identifying Uses of Results.
  • Program Outcomes – 2011-2012. This will be a quick update on where all programs stand in their analysis of Success, Persistence, and Retention data.  Currently, Jennifer (IE) is responding to each program’s data requests.  On October 10 (CCC in-service day), at the Assessment Workshop, programs will identify Uses of Results and Action Plans for their program outcomes.
  • CLLOs – 2011-2012.  We are moving toward full implementation of this assessment plan.  Last fall (2010), only fully online sections with a seated counterpart were asked to collect and analyze CLLO data.  In the spring (2011), it was any DL course (online or hybrid), plus any corresponding seated sections.  This fall, ALL courses taught must assess two CLLOs.
  • PLLOs – 2011-2012.  Keep on, keepin’ on.  We are asking faculty to continue assessing and using data to make instructional improvements.

June 27, 2011

General Education Assessment – 2011

Filed under: Outcomes Assessment — Donald Staub @ 8:47 am

click on image to download pdf of draft report

General Education Outcomes Assessment

Draft Report:

Written Communication

2010-2011

June, 2011

 

 

Introduction

The purpose of this report is twofold: 1) to provide an overview of the process undertaken to assess the Written Communication skills of students at Carteret Community College; and 2) to provide Results, Analysis, and Use of Results based on the data collected during this process.  Following a summary of the process implemented to carry out the assessment, aggregated (College-level) results will be presented, along with a Use of Results and Action Plan.  Program-level results will be provided in the Administrative Report (i.e. those results will not be viewed by anyone but relevant administrators and program CACs).

The Process

In the 2009-2010 academic year, it was determined that the College would alter its approach to assessing General Education Outcomes (Institutional Level Learning Outcomes – ILLOs).  Prior to that point, the seven ILLOs had been assessed in specific, relevant courses (e.g. Written Communication in ENG 111, Computer Literacy in CIS 110).  The revised process would assess ILLOs at the program level.  That is, each program would be required to identify an assessment that would be administered in a relevant course (i.e. late-stage or capstone; not introductory).  Because of the non-linear curricula of the AA and AS programs, their assessments would be administered in the courses with the highest enrollments (i.e. AA: ENG 111; AS: BIO 110).

In Spring 2010, a timeline was identified for assessing the ILLOs.  Because of the complexity of shifting this assessment process to the program level, it was decided that for 2010-2011, only one General Education Outcome (Written Communication) would be assessed.  Here is the 2010-2017 timeline:

click on image for larger view

In Spring 2010, the General Education Sub-Committee devised an implementation plan for the Written Communication assessment in the 2010-2011 academic year.  The plan included:

• A communications plan for all programs to understand the process

• Development of a common rubric

• A training plan for all readers to effectively use the rubric

• A timeline for identification of assessments (by CACs)

• A timeline for collection and scoring of assessments

• Identification and training of scoring teams – for use of rubric

In April 2011, written assessments were collected from all CACs (except Practical Nursing, whose assessment will be conducted in the summer term).  The initial plan was to collect a random sample of papers from each program. However, given the total number of papers collected (N=279), it was decided that all submitted papers would be read and scored.

Throughout May, final preparations were made:

• Identification of 10 two-person scoring teams. Each team had at least one member from the academic division represented by the papers (e.g. a Nursing faculty member was on the team that scored Allied Health papers);

• Scoring sets were grouped according to relative similarities between programs;

• Sets were organized so that readers had roughly equal numbers of pages (not papers) to read (a rough balancing sketch follows this list).
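For illustration only, here is a minimal sketch of one way that page balancing could be done (this is not necessarily the method the committee used): repeatedly hand the largest remaining set to whichever team currently has the fewest pages. The program names and page counts below are invented.

```python
# A rough, hypothetical sketch of balancing page counts across scoring teams:
# assign the largest remaining set to the team with the fewest pages so far.
import heapq

def balance_sets(page_counts, n_teams):
    """page_counts: {program set: total pages}. Returns {team: [programs]}."""
    heap = [(0, team) for team in range(n_teams)]   # (pages so far, team)
    heapq.heapify(heap)
    assignments = {team: [] for team in range(n_teams)}
    # place the largest sets first so running totals stay close to even
    for program, pages in sorted(page_counts.items(), key=lambda kv: -kv[1]):
        total, team = heapq.heappop(heap)
        assignments[team].append(program)
        heapq.heappush(heap, (total + pages, team))
    return assignments

# invented page counts per program set
example = {"Radiography": 62, "Cosmetology": 48, "Paralegal": 91, "BLET": 35}
print(balance_sets(example, n_teams=2))
```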

May 25, 2011 was designated as Scoring Day.  Scoring teams gathered in one location to receive their sets of papers.  CACs explained their assignments to the relevant teams and answered any questions. Readers were allowed to leave with their papers; 75% of the sets were completed by noon, and 100% within 24 hours.

Scores were entered in a master spreadsheet (see example, below).  If the scores given by the two readers on any of the four sub-components (Context, Content, Genre, Mechanics) differed by more than one point, then a third reader scored the paper.  Out of the 279 papers, 50 required a third reader (some papers had a greater-than-one-point differential on more than one sub-component).
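As an illustration of that rule, here is a minimal sketch, assuming the master spreadsheet were exported to a CSV with one row per paper and two reader columns per sub-component; the file name and column names are hypothetical, not the actual spreadsheet layout.

```python
# Flag papers where the two readers differ by more than one point on any
# sub-component (Context, Content, Genre, Mechanics) -- a hypothetical sketch.
import pandas as pd

SUBCOMPONENTS = ["Context", "Content", "Genre", "Mechanics"]

def needs_third_reader(df: pd.DataFrame) -> pd.Series:
    """True for any paper whose two reader scores differ by more than one
    point on at least one sub-component."""
    gaps = pd.DataFrame({
        sub: (df[f"{sub}_r1"] - df[f"{sub}_r2"]).abs() > 1
        for sub in SUBCOMPONENTS
    })
    return gaps.any(axis=1)

scores = pd.read_csv("written_comm_scores.csv")   # hypothetical file name
scores["third_reader"] = needs_third_reader(scores)
print(scores["third_reader"].sum(), "of", len(scores), "papers need a third reader")
```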

Once scores had been aggregated, averaged, and disaggregated by program, College-level results were shared with the General Education Committee on June 28, 2011.  The following pages include data on College-level results as well as a preliminary Use of Results.
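And a minimal sketch of that aggregation step, under the same assumed CSV layout as above plus a Program column (again, the names are hypothetical): average the two readers for each paper, then average by program and across the College.

```python
# Aggregate, average, and disaggregate scores by program -- a hypothetical sketch.
import pandas as pd

SUBCOMPONENTS = ["Context", "Content", "Genre", "Mechanics"]
scores = pd.read_csv("written_comm_scores.csv")   # hypothetical file name

# average the two readers for each sub-component, per paper
for sub in SUBCOMPONENTS:
    scores[sub] = scores[[f"{sub}_r1", f"{sub}_r2"]].mean(axis=1)

college_level = scores[SUBCOMPONENTS].mean()                   # aggregated (College-level)
by_program = scores.groupby("Program")[SUBCOMPONENTS].mean()   # disaggregated by program
print(college_level.round(2))
print(by_program.round(2))
```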

Preliminary Use of Results

Upon initial review of the data, here is a potential list of Uses of Results:

• Scoring:  Only one program per reader.  CACs become readers (of a different, but closely related, program?)

• Evaluate program-level writing.  Which programs emphasize writing already?  To what degree?  What writing instruction do their students receive?

• Offer Writing Workshops.  Via Academic Support.  Free to students.  Once per semester.

• Offer Writing Workshops.  To instructors.  How to effectively emphasize, e.g. Mechanics.

• Re-Assess Papers.  Once a focal point for improvement has been determined (e.g. Mechanics), re-evaluate (a random selection of?) papers for refined definition of Mechanics, and a more specific list of weaknesses (pick the top-10?).  Devise workshops for students and faculty based on this analysis.

• A more concerted effort is needed to generate broader representation of DL courses (HY & IN).

Questions/Comments

Any feedback on this report is greatly appreciated.

Feel free to send it to:

Don Staub, Director Title III

staubd@carteret.edu

252.222.6010

The Common Rubric (for Gen Ed outcomes assessment)

Filed under: Outcomes Assessment, Rubrics — Donald Staub @ 7:55 am

In 2010-2011, we shifted our Gen Ed Assessment program from one that gathered data in a few select courses to a broader, program-level approach.  In the past, Gen Ed (a.k.a. ILLO – institutional level learning outcomes) data had been gathered in the relevant courses with the greatest number of students passing through them.  For instance, we collected assessment data for Computer Literacy only in CIS 110; likewise, Written Communication in ENG 111, and so on.

Of course, this approach had flaws.  Quite often, these courses were taken by students who were early (not late) in their community college experience, which runs counter to the essence of ILLO assessment, i.e. measuring what students have learned over their time at the College.  In addition, it was not always a representative sample (e.g. not all programs required their students to take ACA 115 – the testing ground for Personal Growth and Responsibility).

So, in 2010-2011, we shifted our approach to the program level in order to address both of these shortcomings.  We began by determining that 10-11 would be a pilot year for this process, and asked programs to focus on assessing only Written Communication. One of the essential pieces of this process (or so we believe) was to develop a common rubric that would be used to score all written samples collected for this assessment.  Once the rubric was agreed upon, we spent the Fall semester (2010) conducting training sessions for readers in how to use it.  In the Spring (May, 2011), we held a scoring day, and readers used the rubric to score the sets of papers they had been assigned.  Click here to view a more detailed look at the process.

Our Gen Ed Outcomes Assessment Committee decided that we would begin with the VALUE (Valid Assessment of Learning in Undergraduate Education) rubrics developed under AAC&U’s LEAP project.  The Gen Ed committee felt that the Written Communication rubric needed tweaking for our own purposes, which resulted in this rendition that we have used locally:

click on image to download our rubric

Discussion: Did it work?  Did we improve on the VALUE written communication rubric?  Was it a flawless process?

Theoretically, the rubric should work for just about any written piece.  And, for the most part, it did.  Where it got a little tricky was in how we divvied up the readings.  We tried to ensure that readers were scoring similar sets of papers: this team read Radiography and Respiratory Therapy papers, while that team read Boat Manufacturing and Cosmetology papers.  However, this did not always work out swimmingly.  In some instances, readers were scoring papers from programs that require more academic, rigorous writing (e.g. Paralegal) alongside papers from a program such as Basic Law Enforcement Training (BLET).  A paper from Paralegal may warrant a 4 (out of 4) for Content, and so might a paper from BLET; however, in the reader’s mind, these are still two distinct displays of writing ability.  Our proposed solution, next time we use a common rubric, is to have readers score papers from only one program.

March 3, 2010

Creating Effective Rubrics

Filed under: Rubrics — Donald Staub @ 10:49 am

What is a rubric and why create one?

A rubric is “an authentic assessment tool used to measure students’ work.”   To put it another way, a rubric is an instrument that allows both the instructor and the student to close the gap of confusion between an assignment and the score assigned to that student’s work (the work could be a written assignment, a performance such as troubleshooting a diesel engine, or an activity such as a clinical experience).  Generally, rubrics are displayed in table form, with criteria on one axis and points/quality on the other.  Here’s a simple, tasteful example:

Source: Waubonese Community College (click on rubric to access full doc)

The rationale for creating a rubric is that (a) it saves time (there is an investment of time up front, but it pays off in the long run); (b) it clarifies expectations; and (c) it communicates those expectations – i.e. how an instructor will assign a score (“Informed judgement lies between objectivity and subjectivity”).
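To make the criteria-by-levels idea concrete, here is a minimal sketch of an analytic rubric as a plain data structure; the criteria, descriptors, and point values are invented for illustration and are not taken from any CCC rubric.

```python
# A hypothetical analytic rubric: criteria on one axis, performance levels
# (points) on the other, with a descriptor in each cell.
RUBRIC = {
    "Organization": {
        3: "Ideas follow a clear, logical sequence",
        2: "Sequence is mostly clear, with some gaps",
        1: "Little or no discernible organization",
    },
    "Mechanics": {
        3: "Few or no errors in grammar, spelling, or punctuation",
        2: "Errors are present but do not impede understanding",
        1: "Frequent errors interfere with meaning",
    },
}

def total_score(ratings):
    """ratings: {criterion: level chosen by the reader}."""
    return sum(ratings[criterion] for criterion in RUBRIC)

print(total_score({"Organization": 3, "Mechanics": 2}))   # -> 5
```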

To develop a rubric, I’m going to refer to the four simple steps outlined by Dannelle Stevens in her rubric-writing workshop at the recent Texas A&M Assessment Conference (click here for my notes on the whole conference…scroll down to Dannelle’s workshop).
  1. Reflecting: Start by brainstorming the descriptors of the concept/activity…what are your objectives for the assignment?
  2. Listing: Next, list all the characteristics/dimensions/criteria; brainstorm the list…culling comes in the next step.
  3. Grouping & Labeling: In this step, put descriptors/criteria into their relevant categories (e.g. see above – Texture, Color, Taste, etc.).  One suggestion that came from the audience: start with a pile of papers, group them into good, mediocre, and bad, then work backwards and build the rubric according to the qualities you do (or don’t) find in each group.
  4. Application: The final step – on a grid, fill in the highest level first, then the lowest, then the middle.
** Also, make sure that the assignment is articulated on the rubric.  This increases clarity and reduces confusion.
Just a few of the many great resources for rubrics out there…often you’ll find that they are organized by subject area (e.g. class participation, critical thinking, health sciences, etc.) —

click on image to download ppt

February 21, 2010

Developing a Personal Growth & Responsibility rubric

Filed under: Presentations, Rubrics — Donald Staub @ 6:11 pm

At the Texas A&M Assessment Conference, in the Sunday afternoon workshop on developing rubrics (facilitated by Dannelle Stevens), our group worked on a rubric for Personal Growth & Responsibility.  We didn’t get to the actual rubric…we made it through the first three steps of the process (grouping & labeling), but ran out of time.  This is rough, but it gives us a starting point before we delve into identifying assessments.

Personal Growth & Responsibility

Attendance-

  • Show up
  • Show up on time
  • Show up ready

Knowledge of policies & procedures-

Essential skills –

  • Financial literacy
  • Policies and procedures
  • Soft skills
  • Team work

Responsible behavior-

  • Owning your own behavior
  • Understanding the consequences
  • Being able to prioritize
  • Time management
  • Knowing your limits
  • Self-analysis – knowing how to know your limitations

Personal Growth

  • professional development for students (one day that is dedicated to a particular topic; bring in speakers … esp those who have graduated from the school and broken the mold – students could write a paper or make a presentation)
  • faculty nominations of students who have risen above
  • recognition for things other than simply academics (honor rolls, student gov’t, club participation)
  • other?

May 19, 2009

08-09 Student Survey

Filed under: Outcomes Assessment — Donald Staub @ 12:40 pm


Student Survey 2008-2009

We have recently tabulated the results of the 08-09 student survey.  Very special thanks go out to Marlene McGowan and Bessie Wells for their tremendous efforts on this task.

There were 463 respondents.
There were 52 items that covered (broadly speaking): demographic information, orientation, advising, SER services (including CAPS), technology resources (labs, website, etc.), Academic Support, the Library, TRiO, campus appearance, security, alumni programming, and CCC-student communications.

Methodology
The survey was administered in May and delivered in two formats: electronic and paper/pencil.  The electronic version was conducted through the college’s elisten survey software.  A general statement was distributed, via Blackboard and through instructors, requesting that students taking online courses take the survey.  In the end, there were 126 online respondents.  This was, in effect, a self-selected sample.

For the paper/pencil version, we randomly selected a group of daytime and evening courses (i.e. daytime – those that met on Tuesdays at 9:00 or 9:30 a.m.; evening – those that met on Tuesdays at 5:00 or 6:00 p.m.).  Of the 28 daytime courses in this time slot, 21 administered the survey.  Of the 19 evening courses, 9 administered the survey.  There were 337 respondents in this group.

Results
The goal was to survey a cross-section of curricular programs in order to receive broad input.  The programs with the most respondents were: AA, 72 (16%); Culinary, 37 (8%); Early Childhood, 29 (6.6%); and five programs with 17 respondents each (ADN, Business, CIT, Interior Design, Practical Nursing).

All results can be viewed in the accompanying documents.  We are reporting them in the aggregate (i.e. both online and paper responses collectively), as well as separately (online respondents and paper respondents).  To view the comparison of online vs. paper respondents, CLICK HERE.

Perhaps some of the more notable findings were (item statements paraphrased here):
1) I attended orientation: 28% Yes, 72% No
2) If NO, would orientation have helped me be successful (online respondents): 71% No…(paper respondents): 60% Disagree/Strongly Disagree
4)   I’m satisfied with time spent with advisor: 25% Disagree
5)   I’m satisfied with advising: 20% Disagree/Strongly Disagree
11) Semester I took ACA: 1st: 43%; Have not taken: 37.5%
15 a) Utilized Academic Support within last year: 36.5% reported “never”
15 b) Utilized Computer Lab within last year: 28% reported “never”
15 d) Utilized Financial Aid within last year: < 1% reported “not aware” of FA; 45% reported utilizing FA
16) I would recommend ACA to others: 39% Yes (26% No, 35% not taken) [this will be a good baseline to watch next year]
17) Used online tutoring: 7.6% Yes
18) Used Academic Support for tutoring: 34% Yes
19) In which area did you receive tutoring: #1 Math: 58%; #2 English: 26%
25) Received services from CAPS: 31% No; 37% Don’t know about CAPS
27) Have used student email: 80%
29 a) Used website to search for classes: 87.5%
29 c) Use website to check semester start dates: 86%
30) Website is useful: 47.2% Strongly Agree; 47.2% Agree
31) Satisfied with overall quality of website: 43.4% Strongly Agree; 46.7% Agree
34) Do you qualify for support from TRiO: 59% Not Aware
37) Campus is attractive and neat: 42.4% Strongly Agree; 55.4% Agree
38) Buildings and classrooms are clean: 37% Strongly Agree; 57% Agree
40) Satisfied with security: 36% Strongly Agree; 59% Agree
42) I am aware of alumni program: 67.5% No
43) I would be interested in remaining connected after I leave: 67% Yes.
44) Top three ways to receive registration information (out of 9 choices): website, student email, advisor…(Beacon – #6 out of 9)
48) Home internet connection: 56% Cable; 33% DSL; 2.3% dial-up; 8.4% none

February 23, 2009

9th Annual Texas A&M Assessment Conference

Aggie Bonfire Memorial

(Texas A&M…a marker for each student who lost her/his life, positioned so that it looks toward that student’s hometown)

9th Annual Assessment Conference
Texas A&M University
Feb 22-24, 2009
Conference Title:  Using Assessment to Demonstrate Improvement

I’m off to College Station, Texas to take part in this well-known, well-established conference on assessment in higher education.  I am really looking forward to it because of the array of sessions that have direct relevance to the assessment activities we are putting in place (through Title III) at Carteret Community College.  I’m going to use this space over the next couple of days to report – both during and after – on the sessions I attend.  Having looked at the agenda, here are the sessions I plan to attend.

Sunday February 22
Pre-Conference Workshop (2:00-5:00)
Program Assessment: What to Know, Do, and Avoid
Instructor: Dr. Susan Hatfield; Winona State University

Why I’ve signed up for this workshop –
The brief description in the conference program promotes the workshop as a review of “best practices and bad ideas in program level assessment,” with an opportunity for participants to “consider what to assess, how to do it, and what it all might mean.”  This description is attractive to me on a number of levels.

First, I have spent the last two years working on the Program Review process at CCC, and a lot of it has been learn-as-you-go.  This workshop will (I hope) give me an opportunity to explore that process in retrospect (most likely in part, not in whole), in a setting where ideas are shared by people from across the country who are in my shoes.

Second, if it only focuses on Program Outcomes, that’s OK as well.  This is still a critical piece of the Program Review process, and it will play a larger role as we focus more on creating a logical connection between program outcomes and funding.

Finally, I am looking forward to this workshop because, as I mentioned above, I am keen on hearing how other schools are approaching this task – how they are defining Program Outcomes, how they are assessing them, and how they are meeting the challenges inherent in establishing such a systematic program.

My post-session impressions –
This was not as much about Program Review as it was about program-level learning outcomes (PLLOs). Indeed, she spent a few minutes arguing that PLLOs are not program assessment [I can agree with the not-program-assessment part, but she did make it sound like never the twain shall meet…which I believe is inaccurate…PLLOs can very easily be a critical component of the review process…but that’s an argument for another day].

The workshop presenter laid out a coherent description of why PLLOs, how to write PLLOs, and how to assess them.  She talked primarily from the perspective of one who must inspire faculty to action regarding PLLOs, including making persuasive speeches to faculty about the connections between, for instance, an individual faculty member’s activities and the program’s need (or desire) to receive funding for a particular project.

We also talked a lot during this two-hour period about writing useful, articulate, concise outcomes.  What Susan spent a good deal of time on was curriculum mapping and rubrics.  She stressed the need to ensure that program-level student learning outcomes are taught and assessed across the program. In connection with this, she talked about orphan outcomes and orphan courses – in either case, a PLLO goes unassessed, either in a particular course (vertically, when looking at the completed map) or across the program (horizontally).  We are doing this to some extent vis-à-vis the Gen Ed outcomes (ILLOs), under Dr. Emory’s leadership and tutelage.
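To make the orphan idea concrete, here is a minimal sketch of checking a curriculum map for orphans; the courses, outcomes, and map entries are invented, and this is simply my reading of the definition, not Susan’s materials.

```python
# A hypothetical curriculum map: rows are courses, columns are program-level
# outcomes; True means the outcome is taught/assessed in that course.
OUTCOMES = ["PLLO1", "PLLO2", "PLLO3"]
CURRICULUM_MAP = {
    #            PLLO1   PLLO2   PLLO3
    "XYZ-101": (True,   False,  False),
    "XYZ-210": (True,   True,   False),
    "XYZ-250": (False,  True,   False),
}

# an orphan outcome is addressed in no course (an empty column)
orphan_outcomes = [o for i, o in enumerate(OUTCOMES)
                   if not any(row[i] for row in CURRICULUM_MAP.values())]

# an orphan course addresses no program outcome (an empty row)
orphan_courses = [c for c, row in CURRICULUM_MAP.items() if not any(row)]

print("Orphan outcomes:", orphan_outcomes)   # -> ['PLLO3']
print("Orphan courses:", orphan_courses)     # -> []
```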

For me, there were two important take-aways from this session: 1) the discussion on rubrics.  Susan talked about different types of rubrics (e.g. analytic & holistic…summative and developmental), but she also gave us a great resource: the link to her office’s website, which has a great repository of rubric samples:
http://www.winona.edu/air/rubrics.htm [While you’re at it, check out their Resources page…a little bit of everything IE-related, whether you are in IE or a part of the process:
http://www.winona.edu/air/resources.htm ].

2) The second take-away was, as it always is at conferences (which is why we go in the first place), the other people at the round table.  The 200(?) people in the room sat at round tables they had randomly chosen.  There were folks from Virginia, South Carolina, and Texas at our table, and rich discussions around all of the talking points took place. But the real gem for me was meeting the Assessment Specialist from the Office of Institutional Effectiveness at the University of Richmond (VA).  Ashley showed me a copy of their Assessment Workbook and told me how to download it from their website: http://oir.richmond.edu/Assessment.htm (it’s under “Assessment of Academic Programs”).   Their plan is similar to the one we’ve developed, but I like the added dimension of dissemination of findings.  In other words, let’s not let this sit on a shelf…let’s have a discussion.

Below is a list of the sessions I intend to attend for the rest of the conference.  I’ll be checking in periodically and filling in the spaces with information about the sessions, along with my thoughts and Take-Aways.

Monday February 23

Plenary Speaker (9:00-10:30)
Mary Allen,

Prof. Emeritus, California State U.
Former Director, CSU System Institute for Teaching & Learning

Topic:  Assessing General Education Programs

Description: designing and implementing effective General Education assessment programs.

Powerpoint and handout can be downloaded HERE.

Dr. Allen provided an engaging, 1.5-hour presentation on developing an effective Gen Ed assessment plan on campus.  She offered many definitions around assessment, talking about program assessment, learning outcomes, the curriculum map/matrix, assessment plans, and rubrics.

Much of what Dr. Allen talked about was familiar and, on the surface, not new.  However, for each of the areas she spoke on, she would add an interesting perspective or dimension.  When she talked about assessment (formative vs. summative) of learning outcomes, she framed it as a focus on “deep and lasting learning” versus “shallow and short-term.”  She stressed that assessment plans be: “Meaningful, Manageable, and Sustainable.”  Meaningful and manageable seemed to be the thread that ran through the conference, as most of the sessions addressed the need for buy-in if an assessment plan is to be successful.  Dr. Allen really emphasized these two points in order to make the process sustainable.

Other important points she made (for me, at least) were:
• Assessment plans should be multi-year and reasonable (again, the notion of creating buy-in).
• Assessment plans should be perceived as “a series of incremental improvements to the program.”  This is critical because people often want large gains, quickly.  Slow and steady wins the race was her message here.
• And the take-away message for me was: when asking faculty to help read and score papers (for ILLOs, for instance), keep the number of papers low, get them in a room for an afternoon, train them, have them score the papers, discuss the results (and use of results), and let them go.  Get it done in the shortest time possible, and make sure they feel that they have done something meaningful.
Session 1 (10:45-11:45)
Ending the Paper Chase: Moving from Paper to Electronic Assessment Reporting

SACS Roundtable
12:00-1:30
General Education Competencies: Reviewer Expectations

Session 2 (1:30-2:30)
Building a Technology-Supported Culture of Assessment: Software Readiness Considerations

Session 3 (2:45-3:45)
Vertical Assessment: From Assent/Ascent to Descent/Dissent

Session 4 (4:00 – 5:00)
Achieving Institutional Effectiveness with a Multi-Layer Strategic Planning Process

Alternative:  Assessment to Promote and Sustain Online Programs

Communities of Practice Dinner
Classroom & Program Assessment

Session 5 (8:00-9:00)
QEP

David Carter of SACS spoke on the QEP.
He talked primarily about the pitfalls that schools experience with their QEP plans.  I thought this one-hour presentation was both useful and interesting, particularly from the perspective of a school that is submitting its QEP within the next week (we hope!).

You can simply check out his PowerPoint slides at his website.  Or, if you want the full multimedia experience, I did record his talk.  You can listen to it at the CCC iTunes site (or download it to your iPod and listen to it during your next jog).

To get the audio, go to:  The CCC iTunes Site
Scroll down to the Title III repository (under CCC Community).  Click on the Title III Grant icon.  Then, click on the Title III Grant icon again and it will take you to a list of audio files.  The last one, the “QAP,” is the one you want.  Click on it to listen, or click on “get” to download it to your computer.

P.S. If you go to Dr. Carter’s website, you can download the lunchtime plenary on Gen Ed Competencies that he delivered the day before.  I don’t have the audio for this one.

Plenary (9:15-10:30)
Gloria Rogers
Using Assessment to Drive Improvement without Driving Faculty Crazy!

March 3, 2007

Institutional Learning Outcomes

We’ve spent the last two weeks focusing on Institutional Learning Outcomes, and I’m extremely proud to report that college-wide participation has been fantastic. We started by holding three “forums” to discuss the importance of ILOs, outline the process of choosing ILOs for CCC, and then explore examples of ILOs that other schools had implemented. All three forums were standing room only!

We then emailed out a list of 16 potential ILOs for CCC, and asked everyone to pick their top five. We had a tremendous response rate…over 80(!!) faculty and staff responded to our call to action by 5pm Friday. Click here for a complete list of potential ILOs, along with brief descriptors of each.

Here are the final numbers (with no hanging chads):

The Top-Five ILOs for CCC:
3. Communication (76) (many thought that #16-Writing should be included in this category)
4. Critical Thinking (70)
5. Computer Literacy (59)
11. Personal Growth & Responsibility (50)
10. Information Literacy (38)

The full run down:
1. Act (17) (one respondent wanted this to include “ethically”)
2. Apply (11)
3. Communication (76)
4. Critical Thinking (70)
5. Computer Literacy (59)
6. Cultural & Social Understanding (17)
7. Environmental Stewardship (17)
8. Explore the Fine Arts & Humanities (6)
9. Globalization (7)
10. Information Literacy (38)
11. Personal Growth & Responsibility (50)
12. Quantitative Reasoning (16)
13. Research (9)
14. Scientific Reasoning (5)
15. Value (27)
16. Writing (28)
17. Write-ins (2)
* CCC graduates will have basic 21st Century job skills including adaptability, flexibility, resiliency, and the ability to accept ambiguity;
* Compassion: students will leave this institution with compassion for their fellow man, which will motivate them to think of the needs of many before, and as often as, they think of their own needs.
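For what it’s worth, the tallying itself amounts to counting mentions across all ballots and keeping the five most frequent; here is a minimal sketch with invented ballots.

```python
# Tally top-five picks across respondents -- the ballots here are invented.
from collections import Counter

ballots = [
    ["Communication", "Critical Thinking", "Computer Literacy",
     "Information Literacy", "Writing"],
    ["Communication", "Critical Thinking", "Personal Growth & Responsibility",
     "Quantitative Reasoning", "Value"],
    # ...one list of five picks per respondent
]

tally = Counter(ilo for ballot in ballots for ilo in ballot)
print(tally.most_common(5))   # the five most-mentioned ILOs
```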

Next up? We will identify a committee that will work on defining and refining the top-five ILOs. From these definitions, rubrics and assessment plans will be developed to take this process to the next level. Stay tuned!
