1 Premises, Assumptions, and Approaches
Introduction
This book is about assessment and program evaluation in colleges and universities. More specifically, it is about how assessment and evaluation tools can be used to plan effective programs, to know whether programs work as intended, and to know whether and to what extent they achieve their goals. Assessment and evaluation skills have become critically important for college administrators regardless of the functional area or position in which they find themselves. Administrators not currently involved in assessment and evaluation activities will need to become familiar with them as they move up the administrative ranks. They will participate in or conduct assessment and evaluation themselves, or oversee the assessment efforts of others, including for an institution as a whole (e.g., accreditation reviews).
The premise of the book is that for higher education administrators, assessment and evaluation are first and foremost formal exercises in asking and answering questions about the problems for which they seek to develop effective programmatic responses and about the effectiveness of the programs and services they offer. Engaging in regular assessment and evaluation is necessary to ensure that colleges and universities fulfill their missions and do so effectively. As noted in subsequent chapters, much assessment and evaluation work in higher education is motivated by pressures to be accountable: to colleagues, to those who fund our work, and to students and their families, taxpayers, state legislators, and donors. Although accountability and funding justification get the most attention as rationales for engaging in assessment and evaluation, administrators and faculty actually spend most of their time using assessment and evaluation to plan, implement, and improve programs so that these efforts achieve their goals. It is this second role that takes pride of place in this text.
There is an odd gap in the assessment and evaluation literature. Most program evaluation texts do not address the unique types of assessment and evaluation used in higher education or the challenges of conducting them in college and university settings. Most books on assessment in higher education deal primarily or exclusively with student learning outcomes assessment; they tend not to discuss other types of evaluation in any depth. This book seeks to combine the best of both on the assumption that most college administrators should have evaluation skills that extend beyond assessing student learning outcomes.
Approach to Evaluation
There are many approaches to assessment and evaluation, and the differences among them are significant. A search for books on program evaluation yields a vast number of titles, reflecting the fact that authors and fields conceptualize program evaluation somewhat differently. A brief review of the texts you are likely to find reveals much about the different approaches to conceptualizing assessment and evaluation.
Many assessment and evaluation texts focus on a particular approach to program evaluation or on evaluation in a particular context such as social work, community development, education, or health care. These texts may use language and evaluation techniques specific to the field. Then there are more general evaluation texts that center on specific approaches or methods for conducting program evaluation. Fitzpatrick et al. (2004) organize various approaches to evaluation under the following headings that reflect the central method or perspective: (1) objectives-oriented approaches, (2) management-oriented approaches, (3) consumer-oriented approaches, (4) expertise-oriented approaches, and (5) participant-oriented approaches. Each of these five categories includes multiple evaluation models that exemplify the type. For example, the pioneering work of Ralph Tyler is considered an example of objectives-oriented evaluation. Stufflebeam’s Context, Input, Process, and Product (CIPP) model is an example of a management-oriented approach. Consumer-oriented approaches provide models for evaluating with consumers in mind. Accreditation is an example of expertise-oriented evaluation, and Stake’s “responsive evaluation” is an example of participant-oriented, formative evaluation. Some authors consider needs assessment to be an evaluation activity, while others do not. Some evaluation authors advocate for one particular method.
Other authors are more eclectic and pragmatic in their approaches, borrowing pieces from the well-defined models. Rossi et al. (2004), Owen (2007), the Kellogg Foundation, and the Workgroup for Community Health and Development (n.d.) fall into this category. Less concerned with following one specific model, these authors focus on the purposes of the evaluation and the types of questions one might ask about a program, and they tailor their methods to answer those questions.
Based on my experience working in higher education and teaching a program evaluation course for many years, the more eclectic and pragmatic approaches of authors such as Rossi et al. (2004) seem most appropriate. The diversity of activities on any college campus calls for such an approach; rarely is one model perfectly suited to a particular evaluation. The eclectic approach also fits with books about assessment in higher education and student affairs that take a similar stance (e.g., Bresciani et al., 2004; Henning & Roberts, 2016, 2024; Schuh et al., 2016; Suskie, 2009; Walvoord, 2010).
Asking and Answering Questions
I take the position that program evaluation is first and foremost an exercise in asking questions about the problems colleges and universities face, the needs of their clients, and the programs they plan and implement to respond to those problems and meet those needs. Consequently, understanding what questions to ask, why you are asking them, and what kinds of data are available or needed to answer them is far more important to carrying out useful evaluations than strictly adopting and following a particular evaluation model or being a whiz at statistics. As one becomes more experienced with evaluation, mastering specific evaluation models may be a useful undertaking.
This book addresses questions and evaluation approaches around five types or purposes that evaluation activities typically seek to serve (Rossi et al., 2004) and one additional type. These purposes are:
- Assessing need as the basis for planning sound programs.
- Creating or identifying a program’s logic model as the basis for designing sound programs.
- Assessing program implementation and operations to ensure a program is implemented well and as intended.
- Assessing outcomes, including student learning outcomes, and determining program impact.
- Assessing program efficiency and cost-effectiveness.
- Accreditation and program review (not part of Rossi et al.’s types) to ensure colleges and universities meet standards of quality.
These purposes correspond to different families of evaluative questions you might ask about a program from development to maturity. Needs assessment and the logic model are activities involved in program planning and development, whereas the remaining types deal with assessing existing programs.
Assumptions
I make several assumptions about evaluation and the users of this book.
Readers Occupy a Range of Administrative Positions
I assume that readers are preparing for or hold a broad range of administrative positions in higher education or related not-for-profit organizations. Although they likely do not intend to become professional evaluators, all administrators find themselves in a position to develop new programs or evaluate existing ones, to participate in evaluations of other programs with which they work closely, or even to oversee their unit’s assessment program. As a result, administrators need to know how to use needs assessment to plan effective programs, how to determine whether programs have been implemented well and how they can be improved, and how to assess the outcomes of the programs with which they are involved. Knowing what questions need to be answered by each type of assessment and evaluation, and how to gather data to answer them, are the central foci of this book.
ACPA (College Student Educators International) and NASPA (Student Affairs Administrators in Higher Education) (ACPA/NASPA, n.d.) have identified an ambitious set of competencies in assessment, evaluation, and research which, although targeted to student affairs professionals, is sufficiently general to apply to administrators across a wide range of functional areas in higher education. This book seeks to provide a base for acquiring and practicing some of these competencies. With this information, you will be able to plan interventions based on a good understanding of a problem, improve the delivery of your programs, identify the outcomes of your programs, and know whether the program itself is responsible for those outcomes.
Limited Background with Research Methods
The book assumes that readers will have taken an introductory research methods course of some sort but will otherwise have limited background in research methods and statistics. This book will not fill that gap; however, it does introduce important research design considerations and data collection and analysis tools. Most general graduate research methods and statistics textbooks contain more in-depth discussions of the methods and statistics topics introduced here. This text seeks to introduce program evaluation as a process that is first a conceptual and logical task. The work begins with formulating appropriate questions about the programs with which you are involved and then identifying appropriate data that can be systematically collected and analyzed. It extends to understanding what kinds of conclusions can be drawn from the data collected. Colleges and universities are rich sources of individuals with expertise in quantitative or qualitative research methods who can help you design and carry out sophisticated evaluation studies if necessary. That said, administrators will be well served by having solid knowledge of assessment and evaluation, specifically how to construct good questionnaires and interviews and how to interpret and communicate results.
Varying Roles Related to Assessment and Evaluation
Administrators’ relationship to, and role in, assessment and evaluation will vary somewhat depending on their location in the administrative hierarchy. Senior administrators and those who work for state higher education agency offices focus on more global indicators of institutional quality and effectiveness such as retention rates, faculty productivity measures, accreditation, and program review. In addition, they must make sure that assessment activity meets the expectations of governing boards and accreditors and that results are reported to these groups. Program directors are in the position of having to develop and conduct assessments of individual programs in their areas of responsibility. Functional area/division directors, department chairs, and deans find themselves in the middle, having to look both ways: they need to understand and provide the data upper-level administrators require while interpreting institutional data to and from their staffs. Additionally, they must be able to guide their own staffs in creating and implementing much more focused, effective evaluation activities and to obtain and provide the resources to do assessment. Even if functional area/division directors hire assessment specialists, they must know enough to develop assessment plans for their unit, college, or other workplace; to educate staff about assessment and evaluation; and to understand what assessment data are telling them.
Assessment: A Value-Laden Process
Despite their frequent association with carefully constructed, presumably neutral, quantitative research methods, assessment and evaluation are not value-neutral processes. Values come into play in at least three ways. First, when program leaders identify outcomes, they are making value statements about what is important. When an accreditation association specifies, for example, that institutions or programs should provide evidence of graduates’ gainful post-graduation employment, it inherently sends the message that a primary goal of higher education is post-graduation employment. To be sure, gainful employment for graduates is an important goal; however, it is not the only goal of postsecondary education. The same is true when faculty members or student affairs administrators identify learning outcomes for a course, academic program, or student affairs program.
Second, the very premises of student learning outcomes assessment view colleges and universities as rational, bureaucratic organizations. This perspective assumes that program or curriculum development, for example, is and should be a rational process: one identifies purposes and goals, creates educational tasks to achieve those goals, and then determines the extent to which the purposes have been met. By using tried-and-true scientific methods, the outcomes of these endeavors can be discovered and attributed to the program. There are a number of problems with this view. Anyone who has ever been involved in curriculum or other program development knows that these can be anything but logical and rational processes.
Third, the methods for conducting assessment and evaluation associated with this rational worldview come with significant baggage. As will be discussed later, the rationalist worldview on which much assessment and evaluation is based has come under significant criticism for its role in fostering social, educational, and economic inequities. I will say more about the implications of this context throughout the text. That said, with awareness, these tools can be used to foster equity rather than to perpetuate inequity. It is incumbent on the evaluator to understand these embedded assumptions and to use the tools to serve equitable purposes. For a discussion of ways of integrating equity into student affairs assessment in particular, see Henning and Roberts (2024).
Overview of the Book
My approach to assessment and evaluation, and thus this text, has been shaped by my experiences as a faculty member, as a participant in dozens of university committees tasked in one way or another with assessment or evaluation activities, as an instructor of an evaluation course for many years, and as a reviewer conducting accreditation reviews for the Higher Learning Commission for more than 20 years. The latter in particular has shaped my views about assessing student learning outcomes.
This text sets itself apart from student affairs assessment books, such as those by Henning and Roberts (2016, 2024) and Schuh et al. (2016), and from the many books on student learning outcomes assessment (e.g., Suskie, 2009; Walvoord, 2010), with its broader, more in-depth focus on other assessment types, especially needs assessment and operation and implementation assessment. Although student learning outcomes assessment will be an important part of many college administrators’ roles, today’s administrators need a broader set of evaluation skills than those associated with student learning outcomes assessment alone.
Because this book serves both entry-level and mid-level administrators, it attempts to hit a middle ground between new master’s students at the beginning of their careers and seasoned professionals, both in student affairs and in related units in other areas of college and university activity. Schuh et al. (2016) and Henning and Roberts (2016, 2024) offer more detailed methodological guidance specifically related to assessment in student affairs, and there are dozens of books written about assessment of student learning outcomes in academic programs. Similarly, there are dozens of introductory educational research texts.
Finally, the book is practical in that it includes many examples drawn from higher education. It attempts to avoid the highly specific jargon and methodological detail of many evaluation texts (and where common terms are used, I try to explain them). The book’s main purpose is to introduce you to the families of questions you can and should be asking about your programs and to methods for gathering data to answer them. You will note that I often present different ways of talking about the same evaluation concept or offer alternative terms. I do this because you may work with people who learned different language for some of these terms and processes. The field of assessment in higher education has advanced and changed significantly over time, and with it terms and methods have evolved as well.
The book is organized in five parts. Part I (Chapter 1) introduces the book, laying out the approach to assessment and evaluation and the assumptions on which the book is based. Part II includes four chapters on topics foundational to program assessment and evaluation. Chapter 2 introduces the special case of evaluation in higher education. It begins with some historical background, then discusses the overarching purposes for assessment and evaluation in the higher education context and some of the challenges of doing assessment and evaluation in higher education. Chapter 3 defines important terms, introduces types of evaluation and ways of thinking about the relationships among them, and explains the differences between research and evaluation. Chapter 4 covers the general logic of evaluation activities and the importance and components of an evaluation plan. Chapter 5 introduces different types of outcomes, the characteristics of outcome statements, and a model for stating outcomes.
Part III is the heart of the book. This section features a chapter for each of the evaluation types covered in the book as well as a chapter on accreditation and program review and one covering critiques of student learning outcomes assessment. Chapters 6 and 7 focus on needs assessment and logic models, respectively; these are two important evaluation tasks associated with program planning and development. Chapter 8 introduces operation and implementation assessment, with its focus on the enacted program. Chapters 9 and 10 tackle outcomes assessment: Chapter 9 introduces the general purpose and plan for conducting an outcomes assessment, whereas Chapter 10 deals with the specific case of assessing student learning outcomes. Chapter 11 introduces and asks you to consider some critiques of student learning outcomes assessment as a specific type of outcomes assessment. The final two chapters in Part III briefly cover cost-benefit/cost-effectiveness studies and accreditation and program review.
Part IV consists of one chapter (Chapter 14) about communicating the results of assessment and evaluation work. Part V includes four chapters on research designs. Chapter 15 provides an overview of different types of designs and explains why research design is important to evaluation studies. Chapter 16 covers descriptive designs appropriate for studies involving only program participants. Chapter 17 introduces designs involving comparison groups, including one-shot pre- and posttest designs, causal-comparative, experimental, and quasi-experimental designs. Finally, Chapter 18 briefly introduces qualitative designs, including goal-free evaluation, and mixed methods. It concludes with some thoughts about method choice.
NOTE: You will likely find errors in APA citation formatting as well as typos. APA has changed its citation “rules” at least twice since I first wrote this book. I apologize in advance for not being an expert in working with Pressbooks and its formatting conventions. Most notably, the book uses Pressbooks heading-level formatting, which does not follow APA guidelines.