Carteret Community College Title III Grant

August 17, 2009

Numbers Talk…No. 2: Finding meaning

Filed under: Cindy Schersching's white papers — cynthea1 @ 7:57 pm


A key challenge in understanding the metrics of retention, persistence and success (as well as other metrics) associated with a specific course, program, or segment of the student population is finding meaning in the numbers.  Commonly, meaning is found by comparing observed metrics against a standard, or benchmark.  The challenge is to determine which comparisons yield the most valuable and instructive information.

Benchmarks can be sourced in a variety of ways.  Let’s review possible benchmarks in the context of retention at educational institutions.

1.  A benchmark can reflect internal institutional standards. For example, some institutions may refuse to accept retention rates below 80%; others may find 60% acceptable.  These benchmarks reflect the organization’s heritage, the judgment of key decision makers, financial determinants, etc.

2.  A benchmark may be the best performing institution in the ‘universe’ of all U.S. 2-year public colleges. If, for example, a two-year public college with characteristics similar to Carteret’s had a retention rate of 85%, we may want to measure our progress against that rate.

3.  By definition, an expectation is an average.  Therefore, another benchmark can be the average across years for a specific program. Comparing each individual year to this cross-year average indicates how consistent the retention rate has been.  The challenge is to ‘beat the average.’

If a review of each individual year relative to the average highlights a significantly positive year, additional analyses can uncover the combination of characteristics that differentiates that year from the years that were at or below average.

The downside to this type of benchmark is that it is narrow and self-defined.

4.  Another benchmark can be created by averaging across all of the programs with the same base of students (first time, full and part time, degree seeking cohorts) within a year.  This benchmark can indicate whether or not the specific program is different from other programs taken by similar students.  If the program is a stand-out, efforts should focus primarily on analyses of institutional characteristics to identify what is contributing to its success.

Among these benchmarks, this approach is quite informative.

5.  An even better standard of comparison is one based on the entire college ‘universe.’ Retention rates based on the diversity of the student population most clearly highlight outstanding programs/majors as well as those that are underperforming.  These ‘deviations’ from the average based on the total student ‘universe’ can guide further investigation.  Learning what contributes to outstanding retention rates in one program can be leveraged across other programs to raise overall performance.

This benchmark provides very useful direction.  Comparing the same program across different sample definitions suggests there are student characteristics – either separately or interacting with institutional factors – that are contributing to the observed retention levels.

For the Title III investigation of retention, we are using options 3, 4 and 5 (where we have data) to identify which programs are performing to expectation and to highlight those that are more successful in keeping students.

When using any benchmarks, keep these considerations in mind:

  • Learning is strengthened when comparisons to different benchmarks suggest the same outcome.
  • Meaningful benchmarks are built on robust sample sizes.  Ideally, the base for a benchmark is 30+ students.
  • It is common to express the comparison of a specific program/major to a benchmark as an index, created by dividing one percentage by the other and multiplying by 100.  A rule of thumb is that indices of 80 or below, or 120 or above, suggest the specific program/major is in some way ‘significantly’ different from the ‘universe’ against which it is being measured.  These cutoffs align roughly with deviations of 2+ standard deviations.
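The index calculation and the 80/120 rule of thumb can be sketched as follows (a hypothetical helper for illustration; the function name and the rounding convention are assumptions, not part of this white paper):

```python
def index_to_benchmark(rate, benchmark):
    """Index a program's retention rate against a benchmark rate.

    Both arguments are percentages; dividing one by the other and
    multiplying by 100 yields a conventional index, where 100 means
    'exactly at benchmark'.
    """
    return round(rate / benchmark * 100)

def is_significant(index):
    """Apply the 80/120 rule of thumb for a 'significant' deviation."""
    return index <= 80 or index >= 120

# A program that retained 45.1% of a cohort against a 44.2% benchmark
# indexes at 102 -- inside the 80-120 band, so not 'significant.'
print(index_to_benchmark(45.1, 44.2))   # 102
print(is_significant(102))              # False
```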

To give clarity to the idea of benchmarking, let’s take a specific example from actual data associated with the first time, full and part time, degree seeking cohorts.  We will look at the data for the GOT program.  Retention rates are run for fall to fall and fall to spring by year: 2005-2006, 2006-2007 and 2007-2008.  (At the time of this initial analysis, fall to fall 2008-2009 data were not available.)

Retention Rates

Base:  first time, full and part time, degree seeking cohorts

Fall to Fall    2005-2006    2006-2007    2007-2008
                    %            %            %
GOT                48.2         45.1         39.3

To date, the College has not set benchmarks as defined in options 1 and 2.

However, we can calculate the average retention rate for the GOT program from 2005 to 2008 (benchmark option 3).

Retention Rates

Base:  first time, full and part time, degree seeking cohorts

Fall to Fall        2005-2006    2006-2007    2007-2008    Average across years
                        %            %            %            %
GOT                    48.2         45.1         39.3         44.2
Index to average        109          102           89

It is evident there is a good deal of consistency in the performance of students in this program.  Averaging retention rates across all programs that have this cohort base yields the following comparisons (benchmark option 4):

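The cross-year benchmark and per-year indices above can be reproduced directly from the GOT retention rates (a minimal sketch; the variable names are illustrative, not part of the original analysis):

```python
# Fall-to-fall retention rates for the GOT program, from the table above.
got = {"2005-2006": 48.2, "2006-2007": 45.1, "2007-2008": 39.3}

# Benchmark option 3: the program's own average across years.
benchmark = round(sum(got.values()) / len(got), 1)

# Index each year to that average; an index of 100 means exactly average.
indices = {year: round(rate / benchmark * 100) for year, rate in got.items()}

print(benchmark)  # 44.2
print(indices)    # {'2005-2006': 109, '2006-2007': 102, '2007-2008': 89}
```

All three indices sit well inside the 80–120 band, which is what the text means by a good deal of consistency.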
Retention Rates

Base:  first time, full and part time, degree seeking cohorts

Fall to Fall               2005-2006    2006-2007    2007-2008
                               %            %            %
GOT                           48.2         45.1         39.3
Average across programs       53.7         48.8         52.2
Index to average by year        90           92           75

What is clear from this comparison is that, relative to other programs, GOT retention rates are somewhat below expectation from 2005 to 2007 and ‘significantly’ below average in the most recent year, 2007-2008.  This data pattern suggests not only that something changed in 2007-2008, but also that institutional factors such as methodology, faculty (adjunct v. instructor), credit loads, etc. are likely contributing to the poor retention rates.
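Applying the 80/120 rule of thumb to the cross-program comparison above makes the 2007-2008 deviation explicit (a sketch with illustrative names; the significance labels follow the rule of thumb stated earlier, not a formal statistical test):

```python
# GOT vs. the average across all programs with the same cohort base
# (benchmark option 4), with the 80/120 rule of thumb applied.
got = {"2005-2006": 48.2, "2006-2007": 45.1, "2007-2008": 39.3}
all_programs = {"2005-2006": 53.7, "2006-2007": 48.8, "2007-2008": 52.2}

results = {}
for year, rate in got.items():
    idx = round(rate / all_programs[year] * 100)
    results[year] = idx
    status = "significantly different" if idx <= 80 or idx >= 120 else "within range"
    print(year, idx, status)
```

Only 2007-2008, at an index of 75, falls outside the 80–120 band.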

Lastly, we look to the universe of college students for a meaningful comparison (benchmark option 5).

Retention Rates

Fall to Fall               2005-2006    2006-2007    2007-2008
                               %            %            %
GOT                           48.2         45.1         39.3
  (Base:  first time, full and part time, degree seeking cohorts)
Average for program           31.4         34.6         25.6
  (Base:  all students)
Index to average by year       154          130          154

The above-expectation retention rates in this comparison suggest that cohort groups are ‘significantly’ more likely to stay in school.  We can hypothesize that the cohesiveness, peer pressure, and structure offered by cohorts are differences that make a difference.  To the extent possible, creating ‘bonded’ groups may prove particularly useful in keeping students in school.
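The point about sample definitions is easiest to see by indexing the same GOT rates against both universes side by side (a sketch; the list names are illustrative): against similar cohort-based programs GOT is at-to-below expectation, while against all students it is well above.

```python
# The same GOT rates indexed against two different 'universes':
# other cohort-based programs (option 4) and all students (option 5).
got = [48.2, 45.1, 39.3]                # 2005-2006, 2006-2007, 2007-2008
cohort_programs = [53.7, 48.8, 52.2]    # option 4 benchmark
all_students = [31.4, 34.6, 25.6]       # option 5 benchmark

vs_programs = [round(g / b * 100) for g, b in zip(got, cohort_programs)]
vs_all = [round(g / b * 100) for g, b in zip(got, all_students)]

print(vs_programs)  # [90, 92, 75]    -- at-to-below other cohort programs
print(vs_all)       # [154, 130, 154] -- well above the all-student universe
```

The opposite conclusions from the two benchmarks are what point to student characteristics – the cohort structure itself – rather than the program alone.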

Submitted by Cindy Schersching, PhD, Title III

