Carteret Community College Title III Grant

September 27, 2010

Retaining Students in Online Education

Filed under: Conferences,Faculty Professional Development,Retention Issues — Donald Staub @ 6:46 am

Mary Walton (Division Director of Business Technologies), Patrick Keough (Director of Distance Learning), and I are in Atlanta from Monday to Wednesday of this week, attending Academic Impressions’ workshop, “Retaining Students in Online Education.” We are here to spend the next couple of days learning about and planning “…methods to track students, document progress, and put specific practices in place to ensure success” (from the brochure).

We will be posting our impressions and takeaways throughout the workshop. As a quick overview, here’s what’s on the agenda:

  1. Rethinking Retention: “…with accountability and graduation rates becoming major issues, it is even more important to address retention in online education.”
  2. Identifying Needs: There are usually specific reasons why [online] students enroll; being able to identify such reasons and respond appropriately can make or break a program.
  3. Developing Dashboards for Data Management: “How can you monitor progress and performance within a student’s lifecycle at your institution?”
  4. Measuring Retention Success: “Identify the significant characteristics of your student population and clarify retention goals at each step in the process from application to the end of the first term.”
  5. Critical Support Services: “…Institutions are challenged to integrate a wide range of student services to promote academic achievement and retention”
  6. Early Engagement Through Online First-Year Experiences: “… methods to engage and connect online students from the first point of contact.”
  7. The Role of Faculty and Academic Advisors in Online Student Retention
  8. Delivering Support Services Online

Mark Parker and Kristen Betts led the first day’s sessions on:

  1. Rethinking Retention,
  2. Identifying Needs, and
  3. Dashboards (for data display).

My take-aways from these first three sessions:

  • First and foremost, when it comes to retention in DL, we may not be perfect, but we are doing a lot of good things.  We are providing boatloads of professional development to our faculty (thanks, Title III), we are providing more and more services to our students in a cost-effective manner, we are assessing what we are doing, we are training our students to be better online learners, and we are coming to conferences such as this to gather information.
  • For me, one of the more interesting topics of discussion on the day was “managing expectations of our students.”  The key point: we can’t do it all, all the time.  Therefore, we need to ensure that our students understand what to expect, and when they’ll get a response.  As one of our colleagues put it: “Is the service reliable?…‘Tell me what’s available and when it’s available.’”
  • One way to think about providing services to more students would be to collaborate with other colleges.  One suggestion was to form a consortium (as they have done in Mass.) to provide online tutoring.  Pooling of resources is a good thing.
  • A colleague here described an intriguing idea.  At their school, students send emoticons to the help desk; this triggers an automated email back to the student asking, “do you want me to intervene?”  This engages the student without requiring staff time and effort until it’s necessary.
  • I thought this was a good idea that one school has implemented: For all first-year courses that are taught online, phase in the use of technology. Don’t present all the bells and whistles from the outset.  Let them become comfortable with the technology in phases.
  • And, in the discussion on Dashboards, the notion that they are not just for the leadership is obvious, but often overlooked.  As Kristen Betts pointed out: “optimize your dashboards for your division directors and program chairs”…what she referred to as “micro dashboards.”

4. Measuring Retention Success

This data-rich session was facilitated by Bill Bloemer, Research Associate at the Center for Online Learning Research and Services at the University of Illinois, Springfield (UIS).  [The director of COLRS, Ray Schroeder, has a blog about online learning.]  Some of the interesting discussion points that came out of this session include:

  • Students at UIS are hoarding courses, then dropping them to fit their needs.  “Excessive churn” from hoarding at the beginning of the semester wreaks havoc with gathering true enrollment data.  There is also the issue of students not getting what they want because someone else has “gamed” the system and grabbed a section that another student may truly need.
  • Look at withdrawals by registration date.  Is LIFO (last in, first out) true?  One school at the conference says its data suggests it isn’t; instead, they found that those late-registering students arrive focused and intent on completing a course.
  • Is there a connection between age and withdrawals?  Data Bill showed from UIS suggests that there is.
  • Is it possible, using academic analytics (see Bill’s recommended reading), to predict who will get an F/W in an online course? Bill led a lengthy discussion on a binary logistic regression model he had been using to look at those students who had earned an F.   He worked backwards from this population to identify a common set of predictor variables.  What he found was that, at best, he could correctly predict slightly over 12% of the students in a course who would withdraw or fail.  Some of the “best” indicators to get him to this level of success are:
  1. the individual courses (those that traditionally have high rates of W/F; focus on the outliers and track only the problem courses)
  2. the student’s prior GPA
  3. prior hours resulting in an F/W (“Failure breeds failure.”  If you fail once, chances are, you’ll do it again.)
  4. student’s major
  • From our Australian colleagues (UNE-Au), Rhonda and Greg: “We take the student perspective vis-à-vis course enrollment vs. student goal success.  You may lose them in the short term, but let’s focus on keeping them for the long term.”  The interventions and practices they have designed work to this end.
  • Another insightful question worth posing (and whose answer is well worth promoting in order to get the attention of administration): What is the cost of increasing retention by 1%?
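
For those curious what a model like Bill’s looks like in practice, here is a minimal sketch of a binary logistic regression fit by plain gradient descent. To be clear, this is not Bill’s actual model, code, or data: the predictors shown (prior GPA and the like) and every number are hypothetical, and a real analysis would use a statistics package rather than hand-rolled code.

```python
import numpy as np

def fit_fw_model(X, y, lr=0.1, steps=2000):
    """Fit a binary logistic regression by gradient descent.
    X: (n, k) matrix of predictors (e.g., prior GPA, prior F/W hours,
    one-hot course and major dummies); y: (n,) outcomes, 1 = F or W."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted P(F/W) per student
        w -= lr * X.T @ (p - y) / len(y)        # average-gradient step
    return w

def flag_at_risk(w, X, threshold=0.5):
    """Flag enrollments whose predicted P(F/W) exceeds the threshold."""
    X = np.column_stack([np.ones(len(X)), X])
    return (1.0 / (1.0 + np.exp(-X @ w))) >= threshold
```

As with Bill’s model, the interesting question is not overall accuracy but how many of the students who actually earn an F/W the model manages to flag in advance.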

5.  Critical Support Services

  • Kathleen Polley, Director, Online RN-BS Program, University of Massachusetts Boston
  • There has been a shift from a campus-centered to a consumer-centered model, where control is shared with the student.
  • Critical Services – what are the “stressors” for your population?  What’s their skill set?  How do you support them?  Use this to identify and develop your “critical services.”
  • One (successful) way that was suggested to increase Engagement: the weekly online chat – not required, but it’s used to talk about issues that are on the minds of the students in the program.  Kathleen pointed out that while online is supposed to equal asynchronous, giving equity to all students, she still has very high rates of participation in this synchronous chat.
  • Here are some poignant thoughts on Expectations:
  1. Don’t tell students  you will do things that you can’t
  2. You have to tell students what to expect from tutoring
  3. Every interaction is a “trust building” opportunity

Kathleen also talked about a successful Virtual Learning Community within Blackboard: let the students use it themselves as a place to meet and discuss.  This has been a good way to build engagement among her students.

6. Early Engagement Through Online First-Year Experiences: “… methods to engage and connect online students from the first point of contact.”

  • Kristen Betts, Associate Clinical Professor, School of Education’s Higher Education Program, Drexel University
  • Online courses are predicted to make up 60% of the average student’s course load by 2020
  • She also suggested that we straight-out ask our students (in the student survey): Are you thinking about transferring/leaving?
  • She also argued that orientation is a process, not an event.  Their orientation is 75 minutes total: each presenter talks, then leaves.  But orientation continues throughout the year via their FYE.

Their FYE is event-focused…Key events:

  • Tea/wine & cheese party (they do this with a virtual component)
  • Invited speakers
  • Alumni speakers (work with John Smith/Wanda; offer courses online to alumni)

7. The Role of Faculty and Staff in Online Student Retention

  • Kathleen Polley, Director, Online RN-BS Program, University of Massachusetts Boston
  • “An assessment of student engagement must encompass the policies and practices that institutions use to induce students to take part in these activities.”
  • Not every student needs to be socially connected.
  • Faculty engagement is key for student engagement.  A key consideration for faculty is “satisfaction with transparency”: faculty need to know where senior management is going, and faculty satisfaction with policies matters.
  • Kathleen suggested that during Week 4 of course, have students provide a Formative Evaluation (e.g. What have you learned so far? What would you still like to learn?)
  • Does your school have an Online Readiness Assessment?  What does it measure?  And how reliable is the assessment?
  • Key indicators for student engagement: how frequently they log in, how often they read something before posting.
  • “How can we assess how often a student is engaging in the online material?”

8. Delivering Support Services Online

  • Kathleen Polley, Director, Online RN-BS Program, University of Massachusetts Boston
  • Admissions: do we really need everything we are asking for?
  • Have technology scaffolding throughout the semester in online courses [should we create technology CLLOs for each online course?]
  • U of New England – Australia: Check their library website for learning skills training (online).
  • Students look at the way you deliver your services and equate that with the way that you deliver instruction (i.e. is it quality?)

9. Benchmarking

  • Bill Bloemer, Research Associate & Dean Emeritus of Arts & Sciences, University of Illinois Springfield
  • Data point:  Terms since last registered.
  • Does your degree-audit system talk with your data warehouse?
  • Spring-to-fall (SP-FA) retention vs. fall-to-spring (FA-SP) retention
  • What are the completion/graduation rates of those who are online-heavy in course loads?
  • “Term-earned hours” is a better predictor than “attempted hours.”
  • Course evaluation question: What is your expected grade?
  • On-ground courses using online evaluations increased overall return rate.
  • Bb has anonymous evaluation feature
  • Use online evaluation results as a component of “evaluate instructional modalities” in program review
  • Are there online-specific questions on CCSSE?



January 19, 2010

Numbers Talk…No. 5: Class size, GPA and persistence – Is there a relationship?

Filed under: Cindy Schersching's white papers — cynthea1 @ 8:17 am

Click here to download a PDF of this report

It is commonly believed that smaller class sizes are positively related to stronger student performance.  While it is not clear what the optimal class size should be or if it varies by the nature of the course material, intuitively, there are several reasons that argue for the general truth of that statement, e.g.,

  • Each student gets more attention from the instructor.
  • The pace of the class can more easily be moderated to accommodate fewer students.
  • In a more intimate environment, students are likely to be less intimidated and more likely to ask questions/share perspectives.

We looked for evidence from our data to support – or not – this relationship.  Information from seven programs of study averaged across available data from Fall 2005 through Summer 2009 was reviewed.   Though they were not randomly selected, these programs do represent a diversified set of courses.

Across programs, no one program had more than 36 students in any one course.  For this analysis, the distribution of students was arbitrarily trisected: segments with up to 12 students are compared to segments of 13 to 24 and 25 to 36 students, as appropriate (i.e., not all programs had 36 students).

Additionally, independent study courses were excluded from the analyses.  Independent study courses by definition have a very small number of students with individualized study plans, which could bias the more general look at the effect of class size.

Four metrics were calculated for each class size segment within each program:

  • Average GPA,
  • The percentage withdrawing (student or instructor initiated),
  • Percentage success (the proportion of students who received a grade of A, B or C),
  • Persistence rate (proportion of students who completed the course regardless of grade).

The number of courses in each class size segment is also noted.  All details appear in the tables at the end of this analysis.

The specific hypotheses we are testing with this analysis are these:

  • Across programs, smaller classes (those with 12 or fewer students) are associated with higher average GPAs than classes with more than 12 students.
  • Across programs, smaller classes are also associated with higher success rates and higher rates of persistence than classes with more than 12 students.
  • Across programs, smaller classes are associated with lower rates of withdrawal than classes with more than 12 students.
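
The four metrics defined earlier can be computed per class-size segment in a few lines of pandas. This is only a sketch with hypothetical column names (‘class_size’, ‘grade’), not the actual analysis code; the real data covered seven programs from Fall 2005 through Summer 2009.

```python
import pandas as pd

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def size_segment(n):
    """Trisect class sizes as in this analysis: up to 12, 13-24, 25-36."""
    return "1-12" if n <= 12 else ("13-24" if n <= 24 else "25-36")

def segment_metrics(df):
    """df: one row per student-course, with 'class_size' and 'grade'
    ('A'..'F', or 'W' for any withdrawal, student- or instructor-initiated).
    Returns the four metrics for each class-size segment."""
    rows = {}
    for seg, g in df.groupby(df["class_size"].map(size_segment)):
        completed = g["grade"] != "W"
        rows[seg] = {
            # GPA over completers only (withdrawals carry no grade points)
            "avg_gpa": g.loc[completed, "grade"].map(GRADE_POINTS).mean(),
            "pct_withdraw": 100 * (~completed).mean(),
            # success = A, B, or C
            "pct_success": 100 * g["grade"].isin(["A", "B", "C"]).mean(),
            # persistence = completed the course, regardless of grade
            "pct_persist": 100 * completed.mean(),
        }
    return pd.DataFrame.from_dict(rows, orient="index")
```

Note that the metrics are intercorrelated by construction: pct_withdraw and pct_persist always sum to 100, and pct_success can never exceed pct_persist.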

The following tables are color coded so that the outcomes are apparent.   The blue shading indicates values decline as class size increases.   The yellow shading indicates values increase as class size increases.  (Note:  These metrics tend to be intercorrelated.)

Across the aggregate of courses within each of the 7 programs…

  • GPAs in smaller classes were highest in 5 of the 7 programs.
  • Success rates in smaller classes were highest in 6 of the 7 programs.
  • Persistence rates in smaller classes were highest in 7 of the 7 programs.
  • Rates of withdrawal are lowest in smaller classes in 7 of the 7 programs.

This is a compelling set of evidence in support of the hypotheses.   However, not all of these relationships are perfectly linear when all of the data are reviewed.

  • For example, within the Sciences program, persistence rates decline when the smallest class segment (12 students or fewer) is compared to the next segment (13 to 24), but increase when the 13-to-24 segment is compared to the largest segment of 25+.
  • All of the metrics calculated for the Business Administration program also show nonlinear patterns.

It is not clear why the patterns deviate for these two programs.  I welcome any discussion around these anomalies; insights are often hidden within them.

Earlier blogs and discussion confirm the belief that persistence and retention are positively related, but it is unclear how strong that relationship is.  We can only suggest (as we have before) that increases in persistence will lead to increases in retention.

In this data set, the average increase in persistence when class sizes of 12 or fewer are compared to those of 13 to 24 is 3.9 points.  While there is no guarantee that retention would increase 3.9 points with class sizes of 12 or fewer, a cost-benefit analysis would determine what it would cost to reduce average class size and what increase in retention rate would justify the expenditure.

Submitted by Cindy Schersching, PhD, January 16, 2010

Key metrics by class size across select programs (Fall 2005 – Summer 2009)

Legend:  Blue shading indicates values decline as class size increases.   Yellow shading indicates values increase as class size increases.

*This definition of OST includes BUS 151, CTS 125, CTS 130, CS 165 and excludes OST 080 and 131 per the program head.

December 2, 2009

Numbers Talk…No.4: How are we doing?

Filed under: Cindy Schersching's white papers — cynthea1 @ 2:44 pm

It is frequently useful to benchmark key metrics to a known universe of similar organizations.

Benchmarking can tell us if we are moving in the right direction, how much ‘head room’ we have to grow, and, alternatively, whether there are areas that need more resources to bring performance up to par.
A second benefit of using benchmarks is that – with some assumptions – we can infer performance numbers that we are unable to generate.
For this exercise, we will use summary numbers published by Noel-Levitz, a respected agency with known expertise in educational/institutional research, and compare them to similar numbers generated for Carteret Community College (CCC). We will begin with the comparison of retention and the percent of hours completed relative to the number attempted. CCC metrics are run on the same sample definition and time frame as the benchmark numbers.

Retention Rates and Percentage of Hours Completed
Base: Cohorts, first time, full time, degree seeking

2007-2008:

  Retention Rates: Noel-Levitz 2-year public institutions 55.8%; Carteret Community College 59.3%; index 106
  Percentage of Hours Completed / Total Hours Attempted: Noel-Levitz 2-year public institutions 75.0%; Carteret Community College 77.6%; index 103

CCC compares quite favorably with these key national numbers; we are in the top half of the sample distribution of 2-year public institutions.

Noel-Levitz also generates a persistence rate for 2-year public institutions. For this time period and sample definition, it is 71.3%. For a variety of reasons, we are unable to get a comparable number for CCC.

The response to the last blog regarding persistence and retention rates suggests the relationship between these two metrics is likely a strong – though not perfect – positive one. Noel-Levitz also concludes that ‘term-to-term persistence benchmarks within a given academic year are natural predictors of year-to-year retention rates for a cohort of students. As such, these benchmarks serve as early indicators of attrition, facilitating intervention’ (2009 Noel-Levitz, Inc., Benchmark Research Study Conducted Fall 2008, Mid-Year Retention Indicators Report).

Using these assumptions, a conservative estimate of persistence suggests CCC is also in the top half of the distribution.

When 2008-2009 estimates of these same metrics become available, we will be able to re-apply/re-test our assumptions and update these comparisons.

Submitted by Cindy Schersching, PhD, December 2, 2009

August 20, 2009

Numbers Talk…No. 3: Retention and Persistence – Is there a connection?

Filed under: Cindy Schersching's white papers — cynthea1 @ 3:37 pm

Retention rates are the proportion of students who graduate or return to their program the following semester.  Usually, retention is calculated from Fall to Fall, but Fall to Spring metrics can also be generated.  Retention is a key metric used to compare college performance throughout the US.

Persistence refers to the proportion of students who complete a course.

Typically, retention rates are reported for programs; persistence is reported for courses.
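
To make the distinction concrete, here is how the two rates might be computed. The record formats are hypothetical, chosen only to show that retention is a program-level question about students while persistence is a course-level question about enrollments.

```python
def retention_rate(cohort):
    """Program level: share of a cohort that graduated or returned the
    following semester. Each record is a dict with a boolean
    'returned_or_graduated' flag."""
    return sum(s["returned_or_graduated"] for s in cohort) / len(cohort)

def persistence_rate(enrollments):
    """Course level: share of enrollments that completed the course,
    regardless of grade (only withdrawals, 'W', count against it)."""
    completed = [e for e in enrollments if e["grade"] != "W"]
    return len(completed) / len(enrollments)
```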

While it makes intuitive sense that persistence would be related to retention, this is challenging to prove.

Therefore, in the true spirit of blogging, I’d like to entertain all ideas regarding the hypothesis that these measures are related.

  • If you don’t think they are related, let me hear why.
  • If you think retention and persistence are related, let me know that and how strong you think the relationship is (e.g., if you know one measure, you know the other; or persistence explains only part of retention and other factors are more important).
  • And if you think there is a relationship, how can we show it?

Why would a connection between retention and persistence be useful to demonstrate?

When we know which relationships have the most influence on retention, we are more likely to be able to influence them and, therefore, improve retention rates.  Conceptually, persistence is similar to retention in that the student completes something.

Further, persistence may be easier to influence than retention since it is measured at the course level.

Both are important metrics.   If retention and persistence are positively related (i.e., as one improves, so does the other), efforts to influence one metric will positively impact the other.  This will enhance our understanding of both program and course level performance using the tools already in place and give us confidence that we are using our resources in the best way.

Submitted by Cindy Schersching, PhD, August 20, 2009

August 17, 2009

Numbers Talk…No. 2: Finding meaning

Filed under: Cindy Schersching's white papers — cynthea1 @ 7:57 pm

Download a pdf of this white paper by CLICKING HERE.

A key challenge to understanding the metrics of retention, persistence and success (as well as other metrics) associated with a specific course, a program, and/or segment of the student population is to find meaning in the numbers.  Commonly, meaning is found by comparing observed metrics with a standard or benchmark.   The challenge is to determine which comparisons yield the most valuable and instructive information.

Benchmarks can be sourced in a variety of ways.    Let’s review appropriate possible benchmarks in the context of retention and educational institutions.

1.   A benchmark can reflect internal institutional standards. For example, some institutions may refuse to accept retention rates less than 80%; others may find 60% acceptable.  These benchmarks reflect the organizations’ heritage, judgment of key decision makers, financial determinants, etc.

2.  A benchmark may be the best performing institution in the ‘universe’ of all U.S. 2-year public colleges. If, for example, a two-year public college with characteristics similar to Carteret had a retention rate of 85%, we might measure our progress against this metric.

3.  By definition, an expectation is an average.   Therefore, another benchmark can be an average across years for a specific program. A comparison of each individual year to this average across years will indicate how consistent the retention rate has been.  The challenge is to ‘beat the average.’

If a review of each individual year relative to the average highlights a significantly positive year, additional analyses will uncover the combination of characteristics that differentiates that year from the years that were at-to-below average.

The downside to this type of benchmark is that it is narrow and self-defined.

4.  Another benchmark can be created by averaging across all of the programs with the same base of students (first time, full and part time, degree seeking cohorts) within a year.  This benchmark can indicate whether or not the specific program is different from other programs taken by similar students.   If the program is a stand-out, efforts should primarily focus on analyses of institutional characteristics to identify what is contributing to this program’s success.

Among these benchmarks, this approach is quite informative.

5.   An even better standard of comparison is one based on the entire college ‘universe.’ Retention rates based on the diversity of the student population most clearly highlight outstanding programs/majors as well as those underperforming.  These ‘deviations’ from the average based on the total student ‘universe’ can guide further investigations.   Learning what contributes to outstanding retention rates in one program can be leveraged across other programs to raise overall performance.

This benchmark provides very useful direction.   Comparing the same program but across different sample definitions suggests there are student characteristics – either separately or interacting with institutional factors – that are contributing to the observed retention levels.

For the Title III investigation of retention, we are using options 3, 4 and 5 (where we have data) to identify what programs are performing to expectation and to highlight those who are more successful in keeping students.

When using any benchmarks, keep these considerations in mind:

  • Learning is strengthened when comparisons to different benchmarks suggest the same outcome.
  • Meaningful benchmarks are created on robust sample sizes.  Ideally, the base for these benchmarks is 30+ students.
  • It is common to show comparisons for a specific program/major to a benchmark in terms of an index.  The index is created by dividing one percentage by the other and multiplying by 100.  A rule of thumb is that indices of 80 and below or 120 and higher suggest the specific program/major is in some way ‘significantly’ different from the ‘universe’ against which it is being measured.   These indices align roughly with 2+ standard deviations (see below).
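
The index arithmetic is simple enough to capture in a pair of helper functions. The example values in the check are taken from the GOT tables in this post; the 80/120 cutoffs are the rule of thumb quoted above.

```python
def benchmark_index(observed_pct, benchmark_pct):
    """Index a program's rate against a benchmark rate, rounded to a
    whole number as in the tables in this post."""
    return round(100 * observed_pct / benchmark_pct)

def is_notable(index):
    """Rule of thumb: indices of 80 and below or 120 and higher suggest
    a 'significant' difference from the comparison universe."""
    return index <= 80 or index >= 120

# GOT's 2007-2008 fall-to-fall retention (39.3%) against the
# cross-program average (52.2%) indexes at 75 -- flagged as notable.
```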

To give clarity to the idea of benchmarking, let’s take a specific example from actual data associated with the first time, full and part time, degree seeking cohorts.   We will look at the data for the GOT program.   Retention rates are run for fall to fall and fall to spring by year: 2005-2006, 2006-2007, and 2007-2008.  (At the time of this initial analysis, fall to fall 2008-2009 data were not available.)

Retention Rates

Base:  first time, full and part time, degree seeking cohorts

Fall to Fall   2005-2006   2006-2007   2007-2008
GOT              48.2%       45.1%       39.3%

To date, the College has not set benchmarks as defined in options one (1) and two (2).

However, we can calculate the average retention rate for the GOT program from 2005 to 2008 (benchmark option 3).

Retention Rates

Base:  first time, full and part time, degree seeking cohorts

Fall to Fall       2005-2006   2006-2007   2007-2008   Average across years
GOT                  48.2%       45.1%       39.3%        44.2%
Index to average      109         102          89

It is evident there is a good deal of consistency in the performance of students in this program.   Averaging retention rates across all programs that have this cohort base yields the following comparisons (benchmark option 4):
Retention Rates

Base:  first time, full and part time, degree seeking cohorts

Fall to Fall               2005-2006   2006-2007   2007-2008
GOT                          48.2%       45.1%       39.3%
Average across programs      53.7%       48.8%       52.2%
Index to average by year       90          92          75

What is clear from this comparison is that, relative to other programs, GOT retention rates were somewhat below expectation from 2005 to 2007 and ‘significantly’ below average in the most recent year, 2007-2008.   This pattern suggests that something changed in 2007-2008, and that institutional factors such as methodology, faculty (adjunct vs. instructor), credit loads, etc. are likely contributing to the poor retention rates.

Lastly, we look to the universe of college students for a meaningful comparison (benchmark option 5).

Retention Rates

Fall to Fall                  2005-2006   2006-2007   2007-2008
GOT (cohort base)               48.2%       45.1%       39.3%
Average for program
(base: all students)            31.4%       34.6%       25.6%
Index to average by year         154         130         154

Cohort base:  first time, full and part time, degree seeking cohorts.

The differences driving the above-expectation retention rates in this comparison suggest that cohort groups are ‘significantly’ more likely to stay in school.  We can hypothesize that the cohesiveness, peer pressure, and structure offered by cohorts are differences that make a difference.  To the extent possible, creating ‘bonded’ groups may prove particularly useful in keeping students in school.

Submitted by Cindy Schersching, PhD.  Title III

July 19, 2009

Numbers Talk

Filed under: Cindy Schersching's white papers — cynthea1 @ 10:25 pm

Download a pdf of this white paper by CLICKING HERE.

This blog serves multiple purposes:

  • It provides a forum for sharing the learning from the analyses of retention data.
  • It allows each of you time to review these analyses and pose any questions/thoughts you might have
  • It documents the work being done under the Title III grant.

The key objective of the data analyses is to provide an understanding of the factors that impact retention rates.   The retention issue is multi-faceted; data will be analyzed from a variety of perspectives.  It is expected that many of the analyses will show similar results (convergence); the more analyses that support the same result, the stronger that result and the more confidence we have in the learning.     Ultimately, we hope to weave a comprehensive story around the factors related to retention and design interventions that keep students in school.

At this date, retention analyses are underway for:

  • Cohorts, first time, degree seeking, full and part time students
  • Students taking Gateway classes

The factors that may influence retention amongst these student groups (and the focus of the analyses) include:

  • Gender
  • Age
  • Academic Year
  • GPA
  • Number of credit hours
  • Veteran Status
  • Income level
  • First in family to go to college
  • Disability Status
  • Declared Major

There are challenges to the analyses; every effort will be made to find acceptable solutions:

  • There are multiple databases, and it is difficult to merge information across them.
  • Some information has never been collected.
  • The level of detail associated with specific variables may be missing.
  • Some information may be available on a group level only and not on an individual student level.
Retention Percentages by Major
Base:  Fall to Fall semester cohorts of first time, degree-seeking students, 2005-06 to 2007-08

major                return   total   retention rate
Boat Mfg 0 0 n/a
Biotechnology 1 1 100.0%
Radiography 8 8 100.0%
Esthetics 14 16 87.5%
Pract Nursing 6 7 85.7%
Med Asst 3 4 75.0%
BLET 20 27 74.1%
Ther Massage 14 19 73.7%
HRM 11 15 73.3%
Aquaculture 8 11 72.7%
Paralegal 12 17 70.6%
Respiratory 9 13 69.2%
ADN 7 11 63.6%
OST 4 7 57.1%
Marine Propulsion 8 14 57.1%
Criminal Justice 15 28 53.6%
Assoc. in Arts 118 225 52.4%
Horticulture 4 8 50.0%
Medical Office 1 2 50.0%
Interior Design 7 14 50.0%
Photography 14 28 50.0%
Culinary 17 34 50.0%
Cosmetology 13 27 48.1%
CIT 11 23 47.8%
AFA 8 17 47.1%
Assoc. in Science 39 86 45.3%
GOT 72 163 44.2%
Bus Adm 23 57 40.4%
EMS 5 14 35.7%
Early Childhood 13 38 34.2%
Web Tech 3 10 30.0%
Sum/Average 488 944 59.6%

Retention rates are typically displayed by major, i.e., the total number of returning students in a program is divided by the total number of students enrolled in that program.

This is an alternative way of displaying the same data.  Instead of calculating retention based on the number enrolled in a major, the percentages are based on the sum of those returning and the sum of the total number enrolled across majors.

There are two advantages to reviewing the data in this format:

  • It provides a real perspective on the data in the context of the total student body
  • It allows the creation of an index by major.  An index near 100 is expected when a major retains students in proportion to its enrollment; indices well above or below 100 flag majors whose retention is much greater, or much less, than expected.
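
The share-based index described above boils down to a single ratio; here is a small sketch. The GOT figures used in the comment are from the table in this post, with a small difference versus the table’s index because the table rounds its displayed percentages first.

```python
def share_index(returned, total, all_returned, all_total):
    """A major's share of all returners divided by its share of all
    enrollment, times 100.  An index of 100 means the major retains
    students in exact proportion to its size."""
    return 100 * (returned / all_returned) / (total / all_total)

# GOT: 72 of 488 returners, 163 of 944 enrolled.
# share_index(72, 163, 488, 944) is about 85; the table shows 86
# because it works from the rounded shares (14.7% and 17.0%).
```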

The PowerPoint presentation that follows this section goes into greater detail on the value of the index.

Retention Percentages by Major
Base:  Fall to Fall semester cohorts of first time, degree-seeking students, 2005-06 to 2007-08

major                return (n)   return (%)   total (n)   total (%)   index
Boat Mfg 0 0.0% 0 0.0%
Biotechnology 1 0.2% 1 0.1% 195
Radiography 8 1.6% 8 0.8% 195
Esthetics 14 2.9% 16 1.7% 171
Pract Nursing 6 1.2% 7 0.7% 167
Med Asst 3 0.6% 4 0.4% 146
BLET 20 4.1% 27 2.8% 145
Ther Massage 14 2.9% 19 2.0% 144
HRM 11 2.2% 15 1.6% 143
Aquaculture 8 1.6% 11 1.1% 142
Paralegal 12 2.4% 17 1.8% 138
Respiratory 9 1.8% 13 1.4% 135
ADN 7 1.4% 11 1.1% 124
OST 4 0.8% 7 0.7% 112
Marine Propulsion 8 1.6% 14 1.5% 112
Criminal Justice 15 3.1% 28 2.9% 105
Assoc. in Arts 118 24.0% 225 23.5% 102
Interior Design 7 1.4% 14 1.5% 98
Photography 14 2.9% 28 2.9% 98
Culinary 17 3.5% 34 3.5% 98
Horticulture 4 0.8% 8 0.8% 98
Medical Office 1 0.2% 2 0.2% 98
Cosmetology 13 2.6% 27 2.8% 94
CIT 11 2.2% 23 2.4% 93
AFA 8 1.6% 17 1.8% 92
Assoc. in Science 39 7.9% 86 9.0% 89
GOT 72 14.7% 163 17.0% 86
Bus Adm 23 4.7% 57 5.9% 79
EMS 5 1.0% 14 1.5% 70
Early Childhood 13 2.6% 38 4.0% 67
Web Tech 3 0.6% 10 1.0% 59
Sum/Average 488 59.6% 944 3.2%

Posted by Cindy Schersching, PhD  July 19, 2009

February 1, 2007

Retention Issues Addressed

Filed under: Retention Issues — Don Staub @ 3:30 pm

Click this link to listen to some interesting and innovative retention strategies being explored by some NC colleges.

NPR Radio Link
