"

10 Student Learning Outcomes Assessment

Key Topics

  • Student learning outcomes assessment as a process
  • History of student learning outcomes assessment
  • Unique characteristics of student learning outcomes assessment
  • Organizing student learning outcomes assessment in non-academic programs
  • The student learning outcomes plan
  • Methods for data collection
  • Observations from the field

Introduction

Aside from program review and accreditation, much of the ongoing assessment activity in colleges and universities falls under the umbrella of assessment of student learning outcomes (SLO). SLO assessment follows many of the same principles and methods discussed in the previous chapter, with two major differences: it is intended to be routine and ongoing, and it is concerned exclusively with student learning from academic, student affairs, student success, and co-curricular programs. Guided by the expectations of regional accreditors, SLO assessment has, over the last 30 years, developed its own language, expectations, and methods, promoted and advanced by those accreditors and dozens of dedicated “how to” books.

As an accreditor-required organizational activity, SLO assessment is a sufficiently unique and important form of evaluation to warrant a chapter of its own. There are numerous books written about conducting student learning outcomes assessment for academic programs (e.g., Banta et al., 1996; Suskie, 2018; Walvoord, 2010), some of which give a nod to SLO in student affairs and co-curricular programs. A few authors address assessment in student affairs more broadly, with a heavy focus on assessment of student learning outcomes (e.g., Bresciani et al., 2004; Henning and Roberts, 2024; Schuh et al., 2016). This chapter does not attempt to replicate those books. Rather, it seeks to provide an introduction to, and overview of, some of the unique features of SLO assessment and the particular organizational considerations pertaining to student learning outcomes assessment, especially in student affairs, student success, and co-curricular units. Readers are directed to Linda Suskie’s (2018) Assessing Student Learning: A Common Sense Guide, 3rd edition, a very practical approach to assessing learning outcomes, and to Henning and Roberts’s (2024) Student Affairs Assessment or Schuh et al.’s (2016) Assessment in Student Affairs for more in-depth guides for how to do assessment in student affairs and related units.

This chapter focuses on SLO assessment in student affairs, co-curricular, and student success programs because assessment is harder to do in these settings and because there are fewer guides for how to do it. Although there is one right way to compute a particular statistic, there is, unfortunately, no one right way to do student learning outcomes assessment. Each text and author has a slightly different approach to the process. Be aware that assessment experts on your campus may talk about assessment differently than I do in this text or than Suskie (2018) or Henning and Roberts (2024) do in their books.

The purpose of this chapter is to provide an introduction to SLO assessment as an organizational process, especially in student affairs, student success, and co-curricular programs. Although I agree with Suskie (2018) that SLO assessment has become more institutionalized over time, this chapter is written from the perspective that institutional student learning outcomes assessment efforts are shaped and driven by the expectations of regional and specialized accrediting bodies. As a result, SLO assessment is not simply an internally directed activity but rather one that conforms to accreditor expectations.

Brief History

Since the origins of higher education in the Middle Ages, college professors have used various tools to assess individual student learning in college, particularly in courses (Lucas, 2006; Thelin, 2004). Course grades have been used as an indicator of individual learning for years. However, beginning in the mid-1980s, grades (and grade point averages) were called into question as measures of whether academic programs are meeting their intended learning outcomes. SLO assessment emerged as a specific form of evaluation in the mid-1980s in response to national study group reports such as “A Nation at Risk” (K-12) and “Involvement in Learning” (Study Group, 1984). These and subsequent reports were critical of education and urged reform. One of the recommended reforms was increased use of assessment of student learning as a basis for improvement (Banta et al., 1996).

The press for assessment turned the conversation about educational quality on its head, away from a focus on input variables, such as faculty credentials and teaching loads, and toward a focus on what students collectively learn at the program level (Bresciani et al., 2009). This shift in focus was given added life when Robert Barr and John Tagg (1995) published what became a seminal article entitled “From Teaching to Learning: A New Paradigm for Undergraduate Education.” Barr and Tagg (1995) argued that if learning is the end goal of higher education, administrators and faculty members must have better ways of knowing what outcomes are being achieved than grades. Input measures, such as quality of faculty, number of books in the library, or amount of research dollars secured, and status measures, such as institutional selectivity and prestige, should not suffice as the main measures of learning and of college effectiveness. Colleges and universities have since been expected to demonstrate what students learn in order to inform improvements in teaching and learning at the program level; student learning outcomes assessment is the means by which this is achieved. These efforts have been led by the regional accrediting associations.

Widespread calls to make assessment evidence public in the name of accountability and to aid student choice emerged at the turn of the 21st century. In the early 2000s, fearing that the federal government might mandate specific forms of assessment as states were doing in K-12 schools, the Association of Public and Land-grant Universities implemented what it called the Voluntary System of Accountability, with the goal of standardizing assessment practice and making results public. It has since transformed into a data system that members can use to compare themselves to others on key performance metrics. In another attempt to make learning outcomes public, the Obama administration proposed a federal college rating system based on broad outcome measures such as employment rates. The College Scorecard, launched by the federal government in 2015, publishes data on various college outcomes and allows parents, students, and citizens to compare colleges to a national average and to each other. These efforts have been only moderately successful.

Henning and Roberts (2024) trace the origins of outcomes assessment in student affairs back to the foundational documents of the profession from the 1930s and thus see outcomes assessment as fundamental to the profession. Documents such as ACPA’s 1994 The Student Learning Imperative reinforced the role of student affairs as partners in the educational mission. Regional accreditation bodies have been key players in the SLO assessment movement by requiring that both academic and student support areas engage in ongoing assessment of student learning outcomes. In fact, accrediting bodies have largely defined, shaped, and promoted SLO assessment as it is implemented today.

Despite its pervasiveness, student learning outcomes assessment has only slowly become institutionalized. As a process, SLO assessment has no shortage of critics, some of whom are discussed in Chapter 11. Suffice it to say, colleges and universities have not thoroughly or easily embraced the outcomes assessment movement. SLO assessment remains one of the areas on which institutions are likely to get dinged in reaccreditation reviews (Higher Learning Commission, 2024; Suskie, 2015).

From my experience as a peer evaluator for the Higher Learning Commission, student affairs units have generally been slower to develop and implement meaningful learning outcomes assessment that goes beyond measures of participation, satisfaction, and other measures that Suskie (2018) refers to as measures of student success. At the same time, because the models for doing SLO assessment come from academic programs, student affairs and student success programs often try very hard to fit what they do into an academic SLO assessment framework when that model might not be best suited to the mission of all student affairs, student success, and co-curricular programs. There are many legitimate reasons, discussed below, why doing effective learning outcomes assessment in student affairs, co-curricular, and student success programs is particularly challenging.

Critiques and challenges aside, student learning outcomes assessment is here to stay. College accreditors, administrators, governing boards, parents, and students want to know whether students are learning what they should be learning from academic as well as co-curricular and extra-curricular programs.

Student Learning Outcomes Assessment: Definition

Huba and Freed (1999) provide one of the most complete definitions of student learning outcomes assessment, one that remains appropriate today. They define SLO assessment as:

The process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experience; the process culminates when assessment results are used to improve student learning. (p. 684)

Although assessment of learning occurs at the individual course or activity level, as conceived by most accreditors, SLO assessment involves aggregating data on learning from many students—across multiple sections of a general education program, across courses in a major, or across multiple activities in a student affairs or co-curricular unit—to draw conclusions about the extent to which students who use a unit’s programs or complete a degree program are achieving the program’s stated learning outcomes. Student learning outcomes assessment, as conceptualized in this chapter and in most student learning outcomes books, is concerned less with individual performance in one course, activity, event, or smaller program than with the collective performance of cohorts of students in an academic major, student affairs, student success, or co-curricular unit. That said, SLO assessment frequently builds on and uses assessment data from individual courses, activities, or smaller programs to make claims about student learning for a larger program, academic major, or functional area. The results of SLO assessment are then used to inform program changes that lead to improved learning. For this reason, SLO assessment is decidedly formative in purpose.

The Process

Whether for academic, co-curricular, student affairs, or student success programs, assessing SLOs follows several commonly agreed-upon steps: 1) identifying learning goals or outcomes, 2) identifying or creating learning opportunities where outcomes are introduced and practiced, 3) assessing outcomes, and 4) using the results to inform changes. You will encounter some authors and colleagues who use the terms goals and outcomes interchangeably (e.g., Suskie, 2018) while others make distinctions between the two. For those who take the latter approach, goals are usually broader statements of intentions—to graduate history majors who are knowledgeable and critical thinkers—whereas outcomes are much more specific statements of what history majors should know, think, and be able to do.

Unique Aspects of Assessing SLOs

SLO assessment and more general outcomes assessment as described in Chapter 9 have many things in common. They employ the same logic and can use many of the same research methods as traditional outcomes assessment not involving student learning specifically, but there are some subtle but important differences.

  • SLO assessment refers exclusively to student learning in academic, co-curricular, student success, and student affairs programs. SLO assessment is not concerned with programs and outcomes involving faculty and staff or with larger social conditions such as campus climate or campus safety. (The principles and methods may be the same, however.) Technically, SLO assessment does not include assessment of many of what Suskie labels student success measures. I will talk more about why this distinction is important later in this chapter.
  • Assessment of student learning outcomes assumes that an academic or student support program is ongoing and that its assessment is ongoing as well; the same is not necessarily true for other types of assessment discussed in the previous chapter.
  • The purpose of SLO assessment is almost always formative and ongoing whereas outcomes assessment described earlier can be summative and include “one-off” evaluations.
  • Assessment of student learning outcomes for accreditation purposes is typically not applied to a single course, program activity, or one-time event. Rather, SLO assessment typically refers to assessment that occurs at the unit or major program level (history majors but not a history course, study abroad but not a specific study abroad program to a specific country, for example, even though program administrators will want to collect outcome data for each specific program). Although the director of student activities may choose to assess the outcomes of individual activities such as movie night or a speaker series, the assessment plan for the office of student activities would likely have more general learning outcomes that involve collecting data from across the department’s many and varied programs.

SLO Assessment in Student Affairs, Student Success and Co-curricular Programs

Early SLO assessment efforts focused primarily on academic and general education programs. As SLO assessment has matured and changed over the last thirty-plus years, student affairs, success, and co-curricular programs have found themselves front and center in trying to establish effective SLO efforts. These efforts face a number of challenges posed by trying to adapt an evaluation process designed for academic programs to the wide variety of programs and services provided by student affairs, success, and co-curricular programs.

Key Considerations

SLO assessment in student affairs, student success, and co-curricular units involves at least two critical considerations. One of those is the type of outcomes student affairs, student success, and co-curricular programs seek to produce and assess. The second involves the generally short-term and voluntary nature of participation.

Outcomes

In the third edition of her popular book, Assessing Student Learning, Linda Suskie (2018) makes a couple of useful distinctions that frame the ensuing discussion. First, she distinguishes between two types of outcomes characteristic of student affairs, student success, and co-curricular programs: student learning outcomes and student success outcomes. Student learning outcomes encompass broad categories of learning: knowledge and understanding, thinking skills, habits of mind, attitudes and values, and professional skills (Suskie, 2018). These are the more traditional learning outcomes one might expect of an academic program, and they are also the object of many co-curricular, student affairs, and student success programs. However, where they focus on learning outcomes, these programs are well suited to developing certain types of learning outcomes: interpersonal skills (teamwork, collaboration, leadership), analysis and problem-solving skills, and application of formal learning to situations outside of the classroom (Suskie, 2018, p. 57).

The second type of outcome from student affairs, student success, and co-curricular programs is what Suskie calls simply “student success.” Student success outcomes are indicators of student success in reaching their goals—learning where to get help and support, progressing to a degree, earning good grades, choosing the right courses, and getting a job (Suskie, 2018, p. 110). Colleges and universities have many units and activities whose main goal is to help students reach their educational goals and succeed in college. As Suskie (2018) argues, these other “co-curricular experiences, including programs such as orientation and first-year experiences, are explicitly intended to help students succeed in college: to earn passing grades, to progress on schedule, and to graduate” (p. 110). Although some units focus primarily on learning as defined above and others primarily on student success, some student affairs, student success, and co-curricular programs are geared to both student learning and success (e.g., how to write a personal financial plan).

Outcomes: A Caveat

Colleges often gauge their success by measuring outcomes such as retention and graduation rates. These two outcomes are typically used at the institutional level as measures of student success as defined by Suskie (2018) and as measures of effectiveness. It is, however, difficult, if not impossible, to make a case that any single program drives retention or graduation rates. There are simply too many other factors that contribute to institutional outcomes such as these. They are the kinds of global measures of institutional effectiveness upper-level administrators worry about, but they are not typically the focus of unit-level outcomes assessment for improvement purposes.

With broader, institutional-level data, it is possible to make such claims about the outcomes of groups of activities. For example, using data from NSSE (National Survey of Student Engagement) or CCSSE (Community College Survey of Student Engagement) over time and linking those data to retention rates, it is possible for student affairs or undergraduate studies units to make arguments about the relationship between a collection of activities (known as high-impact practices) and important outcomes. Carefully designed, statistically sophisticated studies can link living on campus or taking an orientation course with retention rates, but these are not typically the focus of unit assessment plans.

Programs Are Short and Voluntary

A second important distinction is that student affairs, student success, and co-curricular programs tend to be short-term and voluntary. Obviously, pursuing a major or degree is a long-term, intensive experience for which requirements are typically spelled out. Many student affairs, student success, and co-curricular programs are shorter term, and participation in them may be voluntary. For programs and learning experiences fitting this description, the number of outcomes one can reasonably expect to achieve should be few, and assessments should be brief and fun (Suskie, 2018). As an example, the Director of Study Abroad at my university and I just completed an evaluation of a 10-day study abroad program for which the sponsors had far too many ambitious outcomes for what such a short program can reasonably achieve. Setting many high expectations dooms sponsors of such a program to be disappointed. There is no way one can see much change in 10 days, and when there is little or no change, program sponsors are tempted to say the program didn’t have any effects, which is likely not an accurate conclusion.

Why These Distinctions Matter

The above distinctions, and the decisions they provoke in establishing an assessment program, matter. They matter because too many student affairs and student success professionals twist themselves into knots trying to fit what they do into a traditional student learning outcomes assessment framework—that is, SLO assessment as designed for academic programs—even though what they do best may not fit it. As a consequence, units identify learning outcomes that may not be meaningful or significant or that are not based on a carefully planned curriculum, or they focus SLO assessments on the small groups of students who work in their offices (for example, peer tutors) and downplay the important success outcomes their offices reasonably achieve.

A recent personal experience illustrates the challenge. My department has a new fully online EdD program. The partners we work with planned an asynchronous online orientation about all things Jayhawk (mostly geared toward undergraduates). The orientation focuses on things like familiarizing students with the learning management system, how to enroll in courses, where to get help, and Jayhawk traditions—classic success goals in Suskie’s framework. And yet, the orientation program is built around learning outcomes and tests to assess learning. When I questioned the appropriateness of this for doctoral students, the response was, “We were told we had to have student learning outcomes that were tied to institutional learning goals and that we had to assess them to get the program approved.”

When thinking about which non-academic programs should engage in student learning outcomes assessment, Suskie (2018) argues that SLO assessment efforts should be focused on “co-curricular experiences where significant, meaningful learning is expected” (p. 110) and for which there are intentionally and systematically planned learning experiences. The orientation program described above better fits her definition of a program concerned with student success. “Significant, meaningful learning” may be a small part of what some student affairs and success units do, as in the case of the orientation described above, and such units may be better served by defining and assessing student success metrics.

This does not mean that student success outcomes are unimportant or that they should not be assessed; the services behind them are obviously necessary to help students succeed, and such services have increased dramatically in recent years. It simply means recognizing what kinds of outcomes and assessment techniques are best suited to the mission and goals of a program or service. The implication for practice is that student affairs, student success, and co-curricular programs may concern themselves with producing and assessing two types of outcomes—student learning outcomes, student success measures, or both. The outcomes will take somewhat different forms and the data gathered may differ. I argue that making the distinction will ultimately lead to better and more useful data.

Organizational Decisions for Structuring Assessment

Outcomes can be written, and assessment conducted, at the course or event, program, and functional area levels. For academic programs leading to certificates or degrees, the focus of most accreditors is on the academic major, or what assessment experts call program-level assessment. “Program-level” assessment is not so straightforward in student affairs and student success units. Although it is clearer in traditional co-curricular units such as study abroad, student affairs and student success units must determine how assessment will be organized for institutional reporting purposes and whether the focus is at the individual event/program, major program, or functional area level.

A related decision involves determining where “significant, meaningful learning is expected” that is supported by thoughtfully designed curricula (Suskie, 2018, p. 111) and where the focus is instead on providing services that help students succeed. The former involves traditional expectations for student learning outcomes assessment, while success-focused programs rely on other types of measures that capture indicators of student success. In other words, different kinds of outcomes and assessments are appropriate where significant, meaningful learning based on well-designed curricular experiences is expected, taught, and can be documented than where student success outcomes are the goal. Making the distinction will enhance the usefulness of assessment results.

These issues need to be resolved while considering a college or university’s expectation for “doing student learning outcomes assessment.” Typically for student affairs, co-curricular, and student success units, program-level assessment efforts are focused at the functional area level—for example, residential life, advising, undergraduate research, or the office of minority affairs—or at the co-curricular program level, such as study abroad (Schuh et al., 2016). A college or university may have large signature programs that might also be the focus of student learning outcomes assessment apart from the functional area to which they belong.

Assessment in Academic Units

Student learning outcomes assessment in most colleges and universities is organized around certificates, majors or degree programs, or specific requirements such as an institution’s general education requirement. As a result, SLO assessment in academic programs typically occurs at the level of the academic major. Each academic department is expected to have written learning outcomes for each major or degree program (B.A. in biology, B.A. in art history, etc.) and a plan for how and when assessment will occur. The question asked is to what extent history majors, for example, are meeting the learning outcomes expected of history majors. The exception to this is institution-wide programs such as general education, or a college-wide academic major such as teacher education.

Although data may be collected from courses, the focus of assessment—the unit of analysis—for SLO assessment is straightforward: it is the academic certificate/major/degree or a general education program. It should go without saying that there needs to be a link between course goals and outcomes and the goals and outcomes of the academic major or general education program. Data are collected from the program’s, college’s, or institution’s courses or other data collection means as appropriate, and aggregated across multiple courses and semesters to make statements about what students in a major or general education program know and can do.

Student Affairs Assessment: Activity, Program, Unit, or Functional Area?

Determining the level at which assessment occurs is more complicated in student affairs, student success, and co-curricular units. Depending on the campus, a student affairs or similar division likely has many functional areas and smaller departments or units under its purview. Likewise, each of these departments may house many smaller programs, each with specific goals and outcomes and a set of planned activities, some of which may be one-time offerings while others are ongoing.

 Learning and/or success outcomes should be identified and assessed for all levels of the student experience—one-time events, programs, and at the functional area level (Schuh et al., 2016). The questions and purpose differ by level. The question at the event level is “What do I want them [students] to know” (Schuh et al., 2016, p. 83) or be able to do as a result of participating in an event, activity or course. This is a question for the instructor or the event organizer and focuses on immediate outcomes from a single event or program.

For the purposes of SLO assessment, the focus is typically at the unit level, which Schuh et al. equate with a functional area: residential life, advising, undergraduate research, office of minority affairs, etc. The question here is what students know and what they can do as a result of the unit’s activities. Operationally, this means focusing on outcomes from living on campus rather than simply on outcomes from an alcohol education program conducted in the residence halls. The alcohol education program may be one site of data collection. Individuals working in student affairs and other success programs may want outcome data at the activity or individual program level for their own purposes and thus may have assessment plans for, and do evaluations of, individual events and activities. As indicated earlier, accreditors typically expect a functional area/unit assessment plan and not one focused on many small individual programs, activities, or events. However, it is up to the institution to define an assessment organizational plan that makes sense, can be implemented, and can be used to improve student learning.

A functional area’s assessment plan typically builds on assessment of single activities and programs where significant and meaningful learning experiences are in place, and uses data from them. Outcomes from major programs are often cited as examples in external reports. The Higher Learning Commission peer reviewers with whom I have worked use the term “roll up” to describe the process of combining assessment data from lower-level activities to make claims about the outcomes for the larger functional area or academic major. The Office of Study Abroad may assess outcomes for each program it offers, but it will likely use data drawn from those individual programs, and “roll up” data from them, to make claims about outcomes for the study abroad office/unit. Likewise, the Office of First Year Experience may have separate assessment outcomes and data collection for the major programs in the unit—common book, new student orientation, and first-year seminars, to name a few. Each program will collect data that will be “rolled up” or used to make the case for how the Office of First Year Experience is meeting its outcomes.
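To make the “roll up” idea concrete, the sketch below (in Python, with entirely hypothetical programs, outcomes, and rubric scores) shows the basic logic of combining program-level results into a functional-area claim: each record is one student’s score on one outcome collected in one of the unit’s programs, and the unit reports the share of students meeting a target level across all of its programs. It illustrates the arithmetic only; any assessment management system or spreadsheet could do the same.

```python
# A minimal sketch of "rolling up" program-level assessment results to a
# functional-area claim. Programs, outcomes, and scores are hypothetical.
from collections import defaultdict

records = [
    # (program, outcome, rubric_score on a 1-4 scale)
    ("Common Book", "critical_reflection", 3),
    ("Common Book", "critical_reflection", 2),
    ("New Student Orientation", "campus_resources", 4),
    ("New Student Orientation", "campus_resources", 3),
    ("First-Year Seminar", "critical_reflection", 4),
    ("First-Year Seminar", "campus_resources", 2),
]

TARGET = 3  # minimum rubric score counted as "meeting" the outcome

# Aggregate across programs, by outcome, for the functional area as a whole.
totals = defaultdict(int)
met = defaultdict(int)
for program, outcome, score in records:
    totals[outcome] += 1
    if score >= TARGET:
        met[outcome] += 1

for outcome in totals:
    pct = 100 * met[outcome] / totals[outcome]
    print(f"{outcome}: {met[outcome]}/{totals[outcome]} ({pct:.0f}%) met the target across all programs")
```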

There is no hard and fast rule for what colleges and universities determine to be the appropriate level for their outcome assessment plans in non-academic units. Bresciani et al. (2009) argue that no division should have too many assessment plans, and neither should the plans be overly complicated. Having too many assessment plans becomes confusing to manage and may result in a lot of meaningless, redundant data collection. Or, it could result in assessment paralysis with nothing being assessed. The plan needs to make sense for a particular institution, division, or office and, I would argue, it needs to be manageable and produce useful information.

Assessment in Co-curricular Programs

Although there is no formal definition of, or approval process to determine, what is a co-curricular program and what is not, it is generally agreed that co-curricular programs are closely tied to academic goals and to the formal academic mission of a college or university: study abroad, honors programs, undergraduate research, and service learning are four such programs. (See Mintz and Rutter, 2016, for a more in-depth discussion of the co-curriculum.) Such experiences are often tied to formal academic courses. As such, it is easy to imagine assessment for co-curricular programs being organized around the office sponsoring the programs (study abroad, undergraduate research) and focusing on significant student learning outcomes. Whether traditional student affairs programs and units are considered co-curricular likely depends on the purpose of the program, but also on the institutional definition of co-curricular. For SLO assessment, the definition used by the institution’s accreditor may matter. The Higher Learning Commission, for example, expects SLO assessment plans and reports for co-curricular programs but not necessarily from all student affairs programs. Other accrediting bodies may have different expectations.

Student Learning Outcomes or Student Success Measures?

As indicated above, one of the organizational decisions facing student affairs, student success, and co-curricular SLO assessment is whether a program focuses on learning or on student success measures as described above. Institutional accrediting bodies do not expect every student affairs or success program to engage in formal student learning outcomes assessment. (They do expect assessment efforts.) Suskie (2018) is quite blunt: She says, “Some programs under a student affairs, student development, or student services umbrella are not co-curricular learning experiences” (italics in original, p. 111). Administrators are busy, and effective assessment takes time and should produce useful information. That makes it all the more important for unit assessment efforts to focus on the most important and meaningful outcomes.

Having taught a course on assessment and evaluation for a long time, I can well imagine that you are in a state of panic wondering how to make this determination. You are asking yourself: How do I know which outcomes my program should reasonably assess? Which are student learning outcomes? Which are student success measures?

  • Ask yourself: Does my unit/program expect and provide a meaningful, significant, thoughtfully designed learning experience geared to producing significant learning? Or does it primarily seek to help students succeed in reaching their goals? If the latter, you are not off the assessment hook. You still need to engage in assessment; the assessment will just ask somewhat different questions and use different measures.
  • Ask yourself where your unit/program delivers a planned, meaningful curricular experience for which student learning outcomes can be expected, and where student success measures are more appropriate.
  • Recognize that some programs focus on student success, but not student learning. They should be proud of the work they do to help students achieve success.
  • What does your institution expect of assessment in student affairs, student success, and co-curricular units?

Neither Linda Suskie nor I can tell you with 100% certainty whether your program has significant, meaningful learning outcomes and should use good student learning outcomes assessment practices, or whether the unit is primarily concerned with student success outcomes—or does both. This is where some of the other distinctions come into play. Does the program have a significant, carefully planned curriculum? Is it short or long term? Is participation voluntary? Asking these questions is preferable to beginning with the assumption that every unit must do traditional student learning outcomes assessment and then trying to fit what it does into a SLO assessment framework.

SLO Assessment Plans

Once decisions are made for how student learning outcomes assessment will be organized in student affairs, student success, and co-curricular programs, assessment must be planned and then carried out. The following discussion of the steps in conducting SLO assessment is structured around the components of an assessment plan. Accrediting bodies typically look for assessment plans for academic as well as student affairs and co-curricular units. The focus of this section is directed at student affairs, student success, and co-curricular units.

Most of the regional and specialized accrediting bodies require academic programs and co-curricular units to have written assessment plans that represent the critical steps in student learning outcomes assessment. I would argue that a unit’s plan should also identify student success outcomes and how they will be assessed. A unit assessment plan is essential because SLO assessment is designed to be a continuing activity, encompassing entire functional areas and often extending over multi-year cycles, for which the plan is a guide. Having decided the level at which assessment will occur, and where SLO assessment will occur and where student success measures are the better metrics, the plan shows what will be assessed when and how, and how and where data will be collected and used. An additional benefit of having an assessment plan is that it can help the unit focus its limited time and resources on its most significant learning outcomes and student success metrics. I should add that the division will also likely have a master assessment plan that represents each functional area’s assessment plan and shows how the results from functional areas will be “rolled up” to make claims about divisional outcomes.

To illustrate, I have chosen a student affairs program—a fictional case of assessment in the Office of Money Management.

An Example: Office of Money Management

A division of student affairs has broad student learning goals or outcomes, and the Office of Money Management (OMM) is developing its own learning outcomes. The main focus of its work is to provide a three-part workshop with the outcomes identified below. The workshop has been offered for multiple years and has a fairly well-designed, substantial curriculum that has been modified as a result of feedback.

OMM should begin by identifying what knowledge and skills students who participate in OMM programs should gain from attending its workshops. The outcomes should follow from the office’s description and mission, which presumably align with the mission of the division of student affairs or the unit of which it is a part. The outcomes should be focused on what the program does best. To use an extreme example, it is unreasonable to think that OMM’s mission would include helping students make good course choices or lose weight. Rather, OMM should identify student learning outcomes related to what the office can reasonably influence and be held responsible for. The first step, then, is to articulate learning outcomes. What is it that students should know and be able to do as a result of participating in OMM programs?

Taking this into consideration, let’s imagine that OMM identifies the following broad goals:

  • To prepare students to analyze their finances
  • To prepare students to identify the keys to making sound financial decisions

Specifically, as a result of participating in OMM programs (using the CampusLabs ABCD model):

  1. Eighty-five percent (degree) of the students (audience) who attend a three-part money management workshop (condition) will identify principles of sound financial management (behavior) as indicated on a post-meeting questionnaire (how demonstrated).
  2. Ninety percent of student participants will identify and describe the components of a plan for managing their money as evidenced in their financial plan.
  3. Seventy-five percent of participants will be able to apply these principles to analyze their own financial status at an emerging level.

    These learning outcome statements tell the reader what students should gain or be able to do (behavior): identify, apply, demonstrate. Outcomes #1 and #2 also indicate how many should achieve these outcomes (degree), and how students will demonstrate this knowledge or behavior (how demonstrated).

As an additional part of its assessment plan, OMM would likely want to assess student success measures: Number of participants, perhaps demographics of participants. Other possibilities include default rates or amount of debt.

Note: In the OMM example, “principles of good financial management” would likely have to be operationalized and defined more specifically in the data collection process.  As discussed in Chapter 5, the OMM would have to identify specific indicators of “principles” so that they can measure them.
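As a hypothetical sketch of how Outcome #1 might be checked once “principles of sound financial management” has been operationalized, the following Python fragment assumes an invented checklist of indicators and a simple scoring rule for the post-workshop questionnaire, then compares the percentage of participants meeting the rule to the 85% criterion in the outcome statement. The indicators, scoring rule, and data are illustrative, not part of any actual OMM curriculum.

```python
# A minimal sketch (hypothetical indicators, scoring rule, and responses) of checking
# Outcome #1: did at least 85% of participants identify principles of sound financial
# management on the post-workshop questionnaire?

INDICATORS = {"budgeting", "saving", "credit management", "debt avoidance", "tracking spending"}
CRITERION_PCT = 85          # the "degree" from the outcome statement
MIN_INDICATORS_NAMED = 3    # hypothetical scoring rule: name at least three indicators

# Each entry is the set of indicators one participant named on the questionnaire.
responses = [
    {"budgeting", "saving", "credit management"},
    {"budgeting", "tracking spending"},
    {"saving", "debt avoidance", "budgeting", "tracking spending"},
    {"budgeting", "saving", "credit management", "debt avoidance"},
]

meeting = sum(1 for r in responses if len(r & INDICATORS) >= MIN_INDICATORS_NAMED)
pct = 100 * meeting / len(responses)
print(f"{meeting}/{len(responses)} participants ({pct:.0f}%) met the outcome; criterion is {CRITERION_PCT}%")
print("Outcome met" if pct >= CRITERION_PCT else "Outcome not met")
```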

Components of the Plan

Assessment plans typically include:

  • A unit’s mission and goals (usually at the academic major or specific office level) and sometimes a statement of how the unit’s goals are tied to those of the umbrella division of which the unit is a part.
  • A unit’s specific learning outcomes (knowledge, skills, attitudes, values, behaviors). If the plan is for an individual event or activity, its outcomes should be linked to the functional area’s outcomes.
    • If a unit also includes student success metrics in addition to, or instead of, traditional student learning outcomes, these should be stated.
  • A table or map showing where these outcomes are introduced, developed, and/or practiced in a unit’s programs.
  • Sources and methods of data collection as well as a timeline for data collection for each outcome.
  • A timeline of which outcomes will be reviewed when and how often. Not all units collect data on every outcome each year.
  • Discussion of how results will be or are being used to inform changes, or to close the loop.

     Unit assessment plans should be written to identify and assess outcomes for a student affairs functional area, co-curricular program, or for an academic major or degree—whatever level an institution has determined to be important.

The following discussion focuses in more depth on each of the plan components for SLO assessment and assumes student learning outcome and student success metric assessment is done at the level of the functional area (e.g., advising, student housing, student wellness services, study abroad). Please note that this sort of plan is also required for student learning outcomes in academic programs.

Division Goals and Mission

An individual department’s student learning outcomes should be tied to the goals of the division or larger unit of which it is a part, and also to those of the broader college or university goals. Residential life outcomes are specifically derived from what students should learn from their housing experience. However, the residential learning objectives should be connected to the more global goals of the larger unit of which the student housing department is a part and to those of the larger institution. Some accrediting bodies expect that colleges and universities have university-wide learning outcomes or goals to which academic, student affairs, success, and co-curricular learning goals and outcomes are linked.

Learning outcomes need to be appropriate for the organizational level to which they pertain. They are typically broader goal statements at the institution level; a bit more focused at the division or functional area level, but still broad; and more specific at the individual program level. A review of student affairs assessment websites suggests that units identify divisional outcomes around broad domains of learning for the various programs their respective divisions provide. These domains include areas such as personal development, peer engagement, lifelong learning, and social responsibility—all very broad and hard-to-assess goals. Divisional goals and outcomes often draw on the Council for Advancement of Standards (CAS) learning outcomes (https://www.cas.edu/student-learning–development-outcomes.html). At the divisional level, reports from the division’s various units will likely be used to determine whether the division is meeting its outcomes. Theoretically, these goals will drive the mission, goals, learning outcomes, and, importantly, curricula or activities of the functional units and programs that make up the division.

Program Learning Outcomes: Functional Area

Following a statement of the unit’s mission or broad goals, the next component of an assessment plan is a description of the unit/program and a statement of the unit’s learning outcomes as well as its student success outcomes. (This discussion is focused at the functional area level; it could apply at the major program level as well.) What is it that student participants are expected to know or be able to do as a result of participating in your unit’s program(s)? Learning outcomes may exist or they may have to be developed or refined. For units whose purpose is to provide services that support students in achieving their goals but whose work does not meet the definition of significant, meaningful learning, the plan should instead articulate student success outcomes.

As noted in the OMM example above, several specific outcomes have been identified. You are in luck: you can jump to the mapping step. However, if outcomes do not exist, writing unit-level student learning outcomes is the first order of business. As Bresciani et al. (2009) note, learning outcomes must be “measurable, meaningful and manageable” (p. 38). The CampusLabs ABCD model proposes that outcome statements should make clear the outcome expected from participation in the program or learning situation. Good outcome statements make clear how the outcome will be demonstrated (CampusLabs, n.d.). Active verbs such as articulate, describe, analyze, synthesize, and critique are typical of good outcome statements, and Bloom’s and other taxonomies of educational outcomes provide a good source of such verbs (see Henning & Roberts, 2024; Suskie, 2018). The verbs used can suggest ways that outcomes are demonstrated. Although Bloom’s, or similar taxonomies, are most often applied in classroom contexts, it is not too hard to imagine, with a little creativity, how some of these verbs can be applied to student affairs and co-curricular programs, especially those that have a planned, sequential curriculum.

Because student affairs, success, and co-curricular programs are often short-term, and participation is voluntary, as compared to an academic major, they should have a limited number of learning outcomes—two or three at most (Suskie, 2018). It is difficult to achieve many learning outcomes in a short time frame, especially when participation is voluntary. This is simple common sense. Doing student learning outcomes assessment well requires significant time, expertise, and resources. As for the outcomes themselves, Suskie argues that student affairs and co-curricular programs typically focus on general outcomes such as interpersonal skills (leadership, collaboration), analysis and problem solving, and applying concepts learned in experiences to “the real world” (Suskie, 2018, p. 57). In contrast, student success measures could include metrics such as knowledge of which courses to take, where to find support, and how to navigate the library.

Mapping Outcomes onto Activities

The outcome-activity map (or curriculum map for academic programs) is a critical step in planning a unit’s assessment efforts. This step calls on individuals doing assessment to map the outcomes onto activities or services provided by the office, or onto specific courses or activities that exist or need to be created. The purpose of this activity is to ensure that students are introduced to, learn, and have a chance to practice the outcomes in question. Maps often also indicate the level of outcome attainment expected. To use the OMM example, the OMM staff must identify where and how in their services students learn the principles of good financial management. Where do they apply the principles to analyze their own financial status? Where do they learn how to develop a plan, and where are they asked to do so? A review of the logic model for OMM (if one existed) would presumably identify where or how students are supposed to acquire this knowledge or learn the desired skills. If a logic model does not exist, the mapping exercise helps to develop one. In the OMM example, students will be introduced to the expected knowledge and skills in one or more of the workshops, and the map may identify the level of outcome attainment expected (e.g., beginning, intermediate, advanced).

The mapping exercise is one of the simplest, and yet potentially most beneficial, activities related to assessment of student learning outcomes for academic as well as student affairs, success, and co-curricular programs. The activity forces program administrators to ask themselves where and how they expect students to learn or practice the specific attitudes, skills, or knowledge they claim for their students, and at what level. Not surprisingly, the mapping activity often reveals that programs have outcomes that are not specifically introduced, developed, and practiced in any course or activity, forcing adjustments. In addition to ensuring that desired outcomes are introduced and practiced in the program, a mapping exercise also helps an office determine where it might collect data on the outcomes.

In this case, I have assumed that mapping indicated that outcomes for OMM are introduced and practiced in various workshops offered by OMM.
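For readers who like to see the mechanics, the sketch below represents a hypothetical OMM outcome-activity map as a simple data structure and flags outcomes that are never introduced or never practiced. The outcomes, workshops, and I/D/P codes are invented for illustration; in practice the same map is usually just a table in the assessment plan.

```python
# A minimal sketch (hypothetical map) of an OMM outcome-activity map:
# which workshop Introduces (I), Develops (D), or gives Practice (P) for each outcome.
# A quick check flags outcomes that are never introduced or never practiced.

outcome_map = {
    "identify principles of sound financial management": {"Workshop 1": "I", "Workshop 2": "D"},
    "describe components of a personal financial plan":  {"Workshop 2": "I", "Workshop 3": "P"},
    "apply principles to own financial status":          {"Workshop 3": "D"},  # gap: never introduced or practiced
}

for outcome, coverage in outcome_map.items():
    levels = set(coverage.values())
    gaps = []
    if "I" not in levels:
        gaps.append("not introduced")
    if "P" not in levels:
        gaps.append("not practiced")
    status = "; ".join(gaps) if gaps else "covered"
    print(f"{outcome}: {status} ({coverage})")
```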

What Information to Collect, Where, and How

Once outcomes are determined and mapping is complete, the next step is to identify what information will be collected, where, when, and how. The mapping exercise helps you to identify places where collection of assessment data can occur. One of the keys to collecting useful assessment data is to be intentional and systematic in the processes used. Assuming students are supposed to develop skills for managing finances through workshops, the challenge then becomes one of figuring out in which workshops data will be collected and how. Methods of data collection use common social science research methods and some additional ones discussed in more depth below and in Chapters 15-18.

 In the OMM example, a variety of methods could be used: Students could complete a questionnaire that asks them to assess their own learning on some sort of inventory of what the office hopes they will learn through interaction with the office (an indirect measure). Alternatively, students could be asked to submit a financial plan or given a test of some sort (direct measures) that are assessed using a rubric. Because participation in OMM workshops is likely voluntary, assessments should be fun and engaging. Tests of knowledge are neither fun nor engaging!

Instead of offering distinct programs, the Office of Money Management might also integrate its program into a UNIV101 orientation course to reach many students. In this scenario, let’s assume OMM has a two-week unit in UNIV101 dedicated to providing financial management content related to the outcomes. Instructors require assignments asking students to demonstrate these outcomes in writing. A rubric is developed to assess degree of attainment of the outcomes on the writing or discussion assignment. Data are recorded and analyzed for SLO assessment purposes.

  1. As a result of participating in the UNIV101 unit, students can identify principles of good financial management.
  2. Students can apply these principles to analyze their own financial status.
  3. Students develop a plan for managing their money.

    Notice that the words change, increase, and decrease are nowhere to be found in the outcome statements. To answer “increase” or “decrease” questions, one would have to collect data on variables of interest at the beginning and end of the semester. It is always nice to have data from multiple points in time, when possible, but be aware that incorporating the words increase, decrease, and growth into outcome statements requires such a method. Likewise, the above outcome statements do not claim behavioral outcomes beyond applying what students learn to an analysis of their financial status through a class assignment. They do not say, for example, that participating students will not go into debt or that they will manage money responsibly. As much as one might want programming to affect behavior, behavioral change is often beyond the control of faculty and staff members. Even if it were within their control, it would be very difficult to collect such data. Specific methodological approaches common to SLO assessment are discussed below. It is quite reasonable for “hard to assess” outcomes to invite use of indirect measures such as student satisfaction or self-reported learning.
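To illustrate how the UNIV101 rubric might work, here is a minimal sketch with an invented analytic rubric: each criterion is scored 1–4 and a student’s attainment level is derived from the average criterion score. The criteria, level labels, and scoring rule are assumptions for illustration, not a prescribed rubric.

```python
# A minimal sketch (hypothetical rubric) for the UNIV101 financial-management assignment.
# Each criterion is scored 1-4; the attainment level comes from the average criterion score.

RUBRIC_CRITERIA = ["identifies principles", "applies principles to own finances", "plan is complete"]
LEVELS = {1: "beginning", 2: "developing", 3: "proficient", 4: "advanced"}

def attainment(scores):
    """Return the attainment level for one student's rubric scores."""
    avg = sum(scores[c] for c in RUBRIC_CRITERIA) / len(RUBRIC_CRITERIA)
    return LEVELS[round(avg)]

one_student = {"identifies principles": 3, "applies principles to own finances": 2, "plan is complete": 3}
print(attainment(one_student))  # -> "proficient"
```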

You would also likely want to collect evidence of student success metrics: how many students participated and who participated, for example, and how satisfied they were with the workshops.

Another option with respect to the OMM example is to position OMM as one program within a larger student affairs unit. Let’s assume that one of the SLOs for the unit is that students learn to develop and apply essential life skills such as financial management and health and well-being. In this case, the larger unit’s assessment plan would include OMM as one place from which to collect data to determine the extent to which student participants are learning the intended knowledge and skills.

A Slightly Modified Approach

The approach described above assumes the unit identifies student learning or student success outcomes along with methods and a timeline for gathering data on each outcome on some sort of schedule. I have recently seen some units take a slightly different approach. This approach involves identifying a particular question a unit has about its programs that may or may not be directly linked to its student learning or student success outcomes. Data are gathered, analyzed, and used to inform changes as appropriate. Using the OMM example, the office might have decided that for academic year 2021-22 it wanted to explore how Covid-19 affected students who participated in its workshops and courses, and specifically how Covid-19 affected students’ financial plans. Presumably the office can link this to its overall goals and to ongoing SLOs, but it is a specific project for a specific year.

This inquiry approach has some advantages, especially in student affairs, student success, and co-curricular units. It allows units to explore questions about student learning or success that are of particular interest and importance to them, thus increasing the likelihood the findings will be useful. This approach makes a good deal of sense for programs in which participation may be variable—that is, students may participate in some activities but not the full range or over a sustained period of time. It also makes sense for units, like advising, where service is tailored to individual needs. Even if units take this approach, it is important to ensure that each SLO is covered by an assessment project at some point in an assessment cycle, whether that is every year, every three years, or every five years.

Create a Timeline

Because SLO and student success assessment are meant to contribute to an ongoing, continuous improvement effort, it is useful to create a timeline for what will be assessed and when. It is not necessary that each outcome be assessed every year.

Plan for Implementing the Plan and Using the Data

An assessment plan typically includes a section detailing which outcomes (or what question) will be assessed in any single year and a section on how data will be reported and used to make changes to a program. Accreditors do not require that a unit gather data on every single outcome every year, although a college’s student affairs unit might do so. For example, a department may choose to focus on one learning outcome each year, collect data relative to that outcome, analyze it, and make changes to its program or not. Or, as described above, units may take an annual research project approach related to one or more outcomes.

The expectation is that units are continually involved in collecting and analyzing data and making changes to improve the teaching and learning process based on what is learned. That said, because the goal of outcomes assessment is to make programs more effective in promoting student learning, growth, and development, one needs to be confident that change is warranted. It is important to avoid changing too much, too quickly, or too frequently; programs need time to become established and to have an effect, and changes made too soon and too often leave programs with little stability. You want to be as sure as you can be that the issues that arise are “real” before you change. Collecting data over some extended period is one way to do this. The one exception is that if you see something problematic emerging from your data, you may want to make program modifications immediately.

Storing Data

An overlooked aspect of planning for assessment of student learning outcomes is how the data will be stored from semester to semester and year to year. This is not a trivial issue. Many institutions either develop in-house assessment management systems or buy into one of the existing systems such as TracDat or Campus Labs/Baseline. An institution needs to be careful not to let the system determine how it does assessment; however, some sort of electronic management system is nearly essential. An example from my own academic department illustrates the challenge. The graduate teaching assistants (GTAs) in my department have taught a required undergraduate course in the teacher preparation program. Until recently, the course could also be used to satisfy a couple of the university’s general education goals. Each GTA was supposed to use a specific assignment that they collected and scored for assessment purposes. There was no central site for GTAs to deposit these assignments. When the GTAs graduated or stopped teaching, unless the supervisor was downloading these assignments every semester and putting them in a central place, or someone had access to the Canvas course sites, it was difficult to store the data in usable form. Student affairs and other co-curricular programs likely do not use the campus learning management system, making it particularly important for them to think about creating a place to maintain data.
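As one illustration of what a central store might look like, the sketch below uses a small SQLite table to hold assessment records so that scored artifacts survive GTA and instructor turnover. The table design and field names are hypothetical; a commercial assessment management system would fill the same role.

```python
# A minimal sketch (hypothetical schema) of a central store for assessment evidence.
# SQLite is used only for illustration; any assessment management system serves the same purpose.
import sqlite3

conn = sqlite3.connect("assessment.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS assessment_records (
        id        INTEGER PRIMARY KEY,
        term      TEXT NOT NULL,   -- e.g., 'Fall 2025'
        unit      TEXT NOT NULL,   -- functional area or program
        activity  TEXT NOT NULL,   -- course section, workshop, or event
        outcome   TEXT NOT NULL,   -- the SLO or success metric assessed
        measure   TEXT NOT NULL,   -- e.g., 'rubric on assignment 3'
        score     REAL,            -- rubric score or metric value
        artifact  TEXT             -- path or URL to the stored student work
    )
""")
conn.execute(
    "INSERT INTO assessment_records (term, unit, activity, outcome, measure, score, artifact) "
    "VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("Fall 2025", "Office of Money Management", "Workshop 3",
     "apply principles to own financial status", "financial plan rubric", 3.0, "plans/stu001.pdf"),
)
conn.commit()
conn.close()
```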

Methods for Data Collection

Methods for collecting outcome data often rely on traditional social science methods described in Chapters 15-18, but they may need to take into consideration the specific methodological issues described below. Some of the methods and tools used in SLO assessment include tests, surveys, focus groups, institutional data, observation, and national surveys. In academic programs, faculty members may apply rubrics to analyze student performance on designated assignments. There are several important matters to consider when choosing data collection methods and measures for SLO and student success assessment. Regardless of the data collection methods used, they should be developed or chosen intentionally and employed systematically. There is no one-size-fits-all approach. I have seen units at some universities attempt to employ quasi-experimental designs while others collect data by applying a rubric to capstone projects, theses, and dissertations. The rule of thumb is to employ methods generally accepted as good practice in the assessment community and to apply them competently and systematically. See Suskie (2018) for extensive recommendations for data collection. Finally, data collection should not be overly complicated and time consuming. The goal is to collect good, useful data; complicated and difficult data collection works against such utility.

Direct and Indirect Measures of SLO

SLO assessment experts use the terms direct and indirect to categorize the types of measures and data collection instruments used to assess outcomes.

Direct Measures

Direct measures are typically naturally occurring opportunities that call on students to demonstrate the knowledge and skills they have learned or are supposed to have learned. These activities are naturally occurring because instructors or program leaders assign them to meet course or program requirements. For example, a written financial plan or a literature review for which students receive a grade as part of the normal set of course expectations is a direct measure when used as evidence for SLO assessment. Comprehensive exams, theses, dissertations, undergraduate capstone projects, and products produced for course assignments are examples of direct measures. Written roommate agreements, if used for outcomes assessment, would also be a direct measure. Likewise, a checklist of leadership behaviors created and used in observations and ratings of students leading a meeting is a direct measure. For outcomes assessment purposes, direct measures are usually assessed twice: once by a professor or program leader for an assignment grade, and again when the individual products are aggregated across other sections of the same course, or over several semesters or years, and reexamined to determine whether, and to what extent, all who took the course or completed the workshops over a specified period demonstrate the learning outcomes.

Indirect Measures

Indirect measures “require students to reflect on learning as opposed to actually displaying or demonstrating that learning” (Bresciani et al., 2009, p. 65); they capture students’ perceptions of their learning. Interviews and surveys that ask students to estimate what they think they have learned, or alumni surveys asking about student experiences, are examples of indirect measures. When administrators ask users of OMM to self-report how much they learned, they are using an indirect measure of learning.

In student affairs and student success units, and for many co-curricular programs, it is much easier to use indirect methods, specifically self-assessed learning and satisfaction, than it is to use direct methods of data collection. There are a variety of reasons for this, but the most significant is the lack of naturally occurring opportunities for students to produce meaningful products with incentives to do well (no required essays, tests, etc., for which grades are assigned). In addition, participation may be variable, individually focused, and voluntary (e.g., health services, advising). In these cases, students’ estimates of their learning (indirect measures) or satisfaction may be the best (or only) option. Some co-curricular programs, especially ones tied to some form of academic credit and featuring graded assignments, can more easily use direct measures.

National survey instruments such as NSSE provide indirect measures of student learning and engagement when they ask students to self-report how much they learned or how often they engaged in high-impact educational practices. There are drawbacks, however: self-report measures do not capture actual learning. (See Porter, 2013, for an extensive argument against NSSE and self-reported learning.) That said, self-report measures may be better than none, and their effectiveness may be enhanced by using supplemental indicators such as “would you recommend” items (Fraser & Wu, 2016). When using self-report estimates of learning, it is important to recognize what you can and cannot say about assessment data based on such self-reported outcomes.

To sum up the distinction between direct and indirect methods of outcomes assessment: the director of the Office of Money Management (OMM) program is using direct methods when staff assess student financial plans and the results are aggregated to determine how students who participated in the programs over a year, for example, are doing on stated outcomes. Direct methods ask students to demonstrate learning, as in a test or, in this case, a financial plan. OMM instructors may also use the financial plan to assign individual student grades, if grades are given. The OMM is using indirect measures when it asks students to fill out a survey indicating how much they think they learned or how satisfied they were with the workshops. The distinction can get confusing when self-reflection is itself the assignment for a course. Student and alumni surveys that ask students about their experiences, or results from the NSSE, are other good examples of typical indirect measures. It is also confusing because some well-developed tests and behavioral or attitudinal scales, even if embedded in a survey, are considered direct measures of achievement.

Typically, accrediting agencies and SLO assessment texts prefer direct measures. I have seen some institutions require units to use one direct measure of SLOs and one indirect measure. Although I understand the point of this for academic units (to prevent an overuse of satisfaction and self-reported learning as proxies for learning), it is much more difficult for student affairs and related units to use reliable and valid direct measures. Faculty members have built-in and readily accessible direct measures of learning that can be used and for which there is an incentive for students to do well. Many student affairs and student success programs do not have this same luxury.

Assignment Grades and Assessment

Individual assignment or course grades are typically not used as SLO data in and of themselves, and many specialized accrediting bodies will not accept them as measures of student learning outcomes. Without further analysis, grades provide little feedback to program administrators on which to base improvements, and they may say little about program-level outcomes. Few student affairs and student success programs are credit-bearing experiences for which students earn grades, so this may not be an issue for them. See Suskie (2018) for a discussion of grades and outcomes assessment.

Unique Tools for Assessment of Student Learning Outcomes

There are two direct assessment data collection tools specific to SLO assessment that merit some consideration: portfolios and rubrics. Each is introduced below.

Portfolios

Portfolios are an option for assessment data collection. According to Driscoll and Wood (as cited in Henning & Roberts, 2016), “Portfolios are collections of student evidence accompanied by rationale for the contents and student reflections on the learning illustrated by the evidence” (p. 187). In some academic fields, such as graphic design, art, and architecture, portfolios are collections of the student’s best work. Because portfolios typically include samples of work collected over time, they can also be used to assess development and learning in non-academic programs. For example, a student leadership program could have students construct portfolios of various leadership activities, with student reflections, and use these as an assessment tool. Portfolios are then assessed by program faculty or administrators using a rubric. One advantage of portfolios is that they promote complex learning through reflection (Suskie, 2009); in addition, they give students some power to select their best work. One disadvantage is that portfolio systems are time- and labor-intensive to develop, and they typically require some sort of technology-based mechanism for storage and display. See Suskie (2009) for an in-depth discussion of portfolios as an assessment tool.

Rubrics

In recent years, rubrics have become a tool of choice for assessing direct evidence of SLOs. According to Stevens and Levi (2005):

At its most basic level, a rubric is a scoring tool that lays out the specific expectations for an assignment. Rubrics divide an assignment into its component parts and provide a detailed description of what constitutes acceptable or unacceptable levels of performance for each of those parts. Rubrics can be used for grading a large variety of assignments and tasks: research papers, book critiques, discussion participation, laboratory reports, portfolios, group work, oral presentations, and more. (p. 3)

The best known and most widely used assessment rubrics have been developed through the American Association of Colleges and Universities (AAC&U) VALUE project. The AAC&U rubrics cover an extensive list of academic skills and are used widely at all kinds of colleges and universities across the U.S. to assess general education learning (see https://www.aacu.org/value/rubrics).

If you are going to develop your own rubric, Suskie (2009, pp. 138-148) identifies several types:

  • Checklist rubrics are the simplest and consist of a list of indicators of what you are looking for in an assignment.
  • Rating scale rubrics combine a checklist with a rating scale, allowing the assessor to indicate the degree to which each item is present and its quality.
  • Descriptive rubrics are the most detailed and the most difficult to create. They include descriptors of what each performance level looks like for each key component of the assignment.
  • Holistic scoring guides do not identify specific elements of the assignment as checklists do; instead, they provide short narrative descriptions of each level of achievement, for example, what constitutes excellent, very good, good, below average, and unacceptable work.
  • Structured observation guides may be useful for assessing certain kinds of behavior, such as leadership skills. Such guides identify specific behaviors, and the observer then records whether the person observed performs them, how frequently, and perhaps how well.

Descriptive rubrics are particularly helpful because they describe characteristics or markers of each level of performance on each assignment component or outcome (criterion). For this reason, they also serve as a guide for students as to what components of an assignment will be assessed and what constitutes good work on each component. Unfortunately, this form of rubric is very hard to create; it may be best to begin with simpler forms, such as checklists or lists with performance levels generally described.

Rubrics can be used to grade individual assignments and to communicate those results to students. When used for outcomes assessment, rubric scores on individual assignments are aggregated across students, from various sections of a course, across several semesters, or across other activities, and then analyzed for trends to determine whether students are meeting the desired outcomes of a particular learning experience. Rubrics are also used to evaluate comprehensive products, such as portfolios or comprehensive examinations, that assess levels of attainment on program-level outcomes or dimensions.

When used for SLO assessment, student performance in a course or activity is aggregated across many students to identify general patterns of student achievement as a way of assessing how well a program is meeting its objectives. The beauty of rubrics is that a score is easily associated with a verbal description of accomplishment.
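
To make the aggregation step concrete, here is a minimal sketch (a hypothetical illustration in Python, not a prescribed method) that pools rubric scores collected across sections and semesters and reports, for each rubric criterion, the share of students scoring at or above an assumed “proficient” level of 3 on a 4-point scale.

    from collections import defaultdict

    # Hypothetical rubric scores pooled from several course sections and semesters.
    # Each record is (criterion, score) on a 1-4 scale where 3 is "proficient."
    scores = [
        ("organization", 4), ("organization", 2), ("organization", 3),
        ("evidence", 3), ("evidence", 4), ("evidence", 2),
        ("mechanics", 1), ("mechanics", 3), ("mechanics", 4),
    ]
    PROFICIENT = 3  # assumed cut score

    by_criterion = defaultdict(list)
    for criterion, score in scores:
        by_criterion[criterion].append(score)

    # Report the percentage of students at or above "proficient" for each criterion.
    for criterion, values in by_criterion.items():
        pct = 100 * sum(v >= PROFICIENT for v in values) / len(values)
        print(f"{criterion}: {pct:.0f}% at or above proficient (n={len(values)})")

Whatever tool is used, the output of this step is the same: a program-level summary, by criterion, of how many students met the expected level, which is what gets discussed when deciding whether program changes are warranted.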

A rubric may be more difficult to use when assessing outcomes of co-curricular programs because such programs often do not require naturally occurring “assignments” that lend themselves to scoring with rubrics. Places where rubrics might be used to assess outcomes in co-curricular programs include reflections written at the end of a study abroad, service learning, or undergraduate research experience; involvement portfolios; evaluations of students leading a meeting; and evaluations of actual written assignments, such as the financial plans in the OMM workshop example above. Rubrics can be effective assessment tools anywhere direct measures of assessment are appropriate. A checklist rubric is often used in observations of interpersonal skills, such as the ability to lead a meeting.

Creativity is often needed to identify appropriate outcome measures and to assess them in student affairs programs.

Data Collection for OMM: An Example

Methods must be appropriate to the outcome and can be quite creative. Sometimes learning outcome data (what students have learned and are able to do) can be gathered from surveys (indirect measures). This is especially true for capturing self-reported learning gains (e.g., by participating in Office of Money Management programs, I learned a lot, a little, or nothing about financial planning). Sometimes outcome data are derived from actual assignments and tests. In academic programs, the use of rubrics to assess student learning in written assignments (direct measures) is common.

 

Two cautions:

First, the OMM program has the same problem as many student affairs and co-curricular programs: participation is more than likely voluntary. OMM probably cannot compel participation unless its workshops are credit bearing, so participation in, and exposure to, workshops and other activities may be quite variable. A student might not participate at all or might participate in some activities but not others. And, unless there is a grade attached, the incentive to perform one’s best may be lacking. All of this complicates assessing outcomes; any assessment tasks should therefore be short, fun, engaging, and useful. If OMM workshops were offered as part of an orientation course, using direct assessment would be easier. Second, the outcomes should focus on what the program does best. To use an extreme example, it is unreasonable to think that OMM’s mission would include helping students make good course choices or lose weight. Rather, OMM should identify and assess student learning outcomes or student success metrics related to what the office can reasonably influence and be held responsible for.

 

In the case of the Office of Money Management example, outcomes data might be collected as follows:

Outcome: Students can identify principles of good financial management.
How assessed/data collected: Questionnaire administered at exit asking students to indicate their level of competence (indirect).

Outcome: Students can apply these principles to analyze their own financial status.
How assessed/data collected: Rubric used to score an analysis of financial status (direct), or a questionnaire asking for a self-assessment (indirect).

Outcome: Students develop a plan for managing their money.
How assessed/data collected: Rubric used to evaluate the plans (direct).
Rubrics could easily be developed and used to assess assignments attached to the OMM learning outcomes identified above, and for many other outcomes from co-curricular and extracurricular programs.
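
As one hedged illustration of what such a rubric might look like in practice, the sketch below (in Python, with hypothetical criteria and performance levels that OMM staff would need to write themselves) represents a simple rating scale rubric for the financial plan and pairs each numeric rating with its verbal descriptor for a single student's plan.

    # Hypothetical rating-scale rubric for the OMM financial plan outcome.
    CRITERIA = [
        "identifies income and expenses",
        "sets realistic savings goals",
        "applies sound budgeting principles",
        "addresses debt management",
    ]
    LEVELS = {1: "beginning", 2: "developing", 3: "proficient", 4: "exemplary"}

    def score_plan(ratings: dict) -> dict:
        """Pair each numeric rating with its verbal descriptor for one student's plan."""
        return {c: (ratings[c], LEVELS[ratings[c]]) for c in CRITERIA}

    # Example: ratings a workshop leader might assign to one submitted plan.
    print(score_plan({
        "identifies income and expenses": 4,
        "sets realistic savings goals": 3,
        "applies sound budgeting principles": 2,
        "addresses debt management": 3,
    }))

Individual ratings like these would then be aggregated across participants, as described earlier, to judge how well the program as a whole is meeting the outcome.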

Effective SLO Assessment

Effective SLO assessment focuses on a unit’s most important learning goals, produces accurate and useful evidence, and does so in an unbiased and ethically sound way (Suskie, 2018, pp. 24-34).

Personal Observations from the Field

For all of the institutional investment in SLO assessment, the results should be useful. After teaching an evaluation course for years and doing numerous peer accreditation reviews for the Higher Learning Commission over a 20+ year period, I have come to the conclusion that many student affairs, success, and co-curricular units should think carefully about which of their efforts should focus specifically on student learning outcomes and which might benefit from other forms of assessment, namely regular monitoring of Suskie’s (2018) student success measures. Her distinction between student learning and student success goals put into words what I have thought for years as I have watched student affairs, success, and co-curricular units struggle to develop their student learning outcomes assessment plans and report their findings.

It is very difficult for units that offer highly individualized services, such as advising, tutoring, recreation services, or writing centers, to identify a single set of meaningful student learning outcomes that fits all students, much less to provide a significant, meaningful, and planned curriculum that can reasonably produce those outcomes. Student participation in these services is variable, and an intentionally designed curriculum may or may not exist. Moreover, individual student needs and expectations, as well as the services and amount of exposure provided, may vary widely; the “program” may be unique for each person. And yet these units often feel under pressure, for a variety of reasons, to create traditional student learning outcomes when student success measures may be more appropriate. SLO assessment makes sense for units that provide thoughtful, carefully planned, meaningful learning experiences leading to significant learning. For many programs, however, well-designed, systematically collected measures of student success may be a better use of their time.

It might be more useful for offices providing individually tailored experiences to take a two-pronged approach: 1) adopt the yearly inquiry project approach, identifying and collecting data on a question of interest related to their core mission and its effects on students, and/or 2) identify a meaningful set of success indicators on which to routinely monitor and assess their effectiveness in helping students succeed. Such a set of indicators might include participation, routine satisfaction surveys, and self-reported outcomes (e.g., “I feel confident choosing courses”) collected and examined regularly, along with results from periodic targeted studies. In fact, annual student affairs assessment reports at many institutions include not only results of the yearly assessment project but also other types of data demonstrating office productivity.

When I do peer accreditation reviews for the Higher Learning Commission and am assigned to the assessment criterion, I typically look for several kinds of assessment evidence from co-curricular, student affairs, and student success programs: Are programs reaching intended audiences? Do units systematically collect data (any kind of data, including satisfaction) on their outcomes beyond participation? What do those data show about the extent to which programs are producing the intended results? And, finally, I look for evidence that data are being used to inform program improvements (closing the loop). In other words, do student affairs, success, and co-curricular units demonstrate a culture of evidence and continuous improvement?

All this said, administrators in student affairs and other related areas should pay attention to what their chief administrators want student learning outcomes to accomplish internally and the purposes for which they use the results.

Summary

Assessment of student learning outcomes is a major type of assessment and evaluation. It relies on the principles of outcomes assessment more generally and involves specifying outcomes that are meaningful and observable, identifying methods for collecting data on those outcomes, and analyzing the data to inform program improvements intended to lead to better outcomes in the future.