15 Research Designs
Key Topics
- Paradigms
- What is research design
- Why is research design important
Introduction
Students and practitioners tend to think of research as a mysterious, complicated technical task requiring very advanced skills in research design, data collection, and data analysis involving math and statistics. To a certain extent, this is true. The tasks involved in carrying out evaluations require basic knowledge of research design and methods. However, research is first an exercise in logical thinking: How do you design a study and collect data to answer questions about a problem, program, or intervention so that you can be as certain as possible, with the data you have, that the findings, and the conclusions you draw from them, are accurate and, in the case of programs and interventions, point to program contributions to the stated outcomes? The answers begin with two critical questions: From whom do you have, or can you collect, data? And do you have control over who gets to participate in the program and when? Because evaluators typically want to make claims about the effectiveness of programs in producing outcomes, the issue of design is critically important in determining the kinds of claims that can be made.
This chapter briefly reviews common research designs that social scientists and evaluators have developed to address these questions about the programs with which they work. This section draws heavily on Mertler’s (2016) Introduction to Educational Research, 2nd edition, and has an admittedly quantitative slant to it. Even though I am personally a qualitative researcher, I have to admit that quantitative methods are more common, especially in outcomes assessment work. Qualitative methods do have a place in evaluation, though, and are briefly covered in Chapter 18.
At about this point you are likely groaning and asking why you need to learn this stuff, especially with its emphasis on quantitative studies. The answer goes something like this: all administrators need to know when and how to create and administer good surveys, how to analyze and interpret basic survey results, and what kinds of conclusions can reasonably be drawn based on a study’s design and findings.
Beyond doing your own data collection, you will read reports and sit in meetings listening to results of quantitative analyses others have done. Colleges and universities are very metrics-driven these days. You need to know what questions to ask about data presented to you, whether what you are seeing and hearing makes sense, and whether the conclusions drawn are reasonable. My dean is always finding problems with the data he is given by our office of institutional research. As you move higher in administrative rank and responsibility, the expectation that you can use and interpret data will only grow. Moreover, you will likely be in a position to collect your own data to communicate with and convince others about what your programs do and how they are effective. Accept the challenge!
Qualitative data can play an important role in program evaluations when used for specific purposes, such as understanding need; however, qualitative methods are simply too time- and labor-intensive for colleges and universities to routinely use in assessment or to inform decision making. So, learning a little more about quantitative research is important! And believe me, “getting it” at some level is empowering!
Two Fundamental Issues
There are two overriding decisions that affect a research project of any sort, and they overlay all other considerations about evaluation research. The first is the research paradigm and the implications it has for the kinds of questions asked and the information one collects; the second is how a study is designed to collect data and from whom. Both of these issues are well covered in most introductory research texts and are merely introduced here.
Research Paradigm
Research paradigm refers to beliefs about what constitutes knowledge and how knowledge is gathered, constructed, and validated. Most quantitative research today falls into the category of a postpositivist paradigm. Among the assumptions of this view is that there is a social reality, existing independent of evaluators and individual participants, that can be captured through objective observation and measurement of phenomena. Because social science research deals with humans, however, postpositivism acknowledges that social science research can never fully reveal the “truth.” Still, social scientists operating from this paradigm use quantitative methods to try to come as close to identifying “the truth” as they can.
Qualitative research methods are rooted in an interpretive, constructivist paradigm that views knowledge as emergent and constructed by individuals, who, in interaction with their surroundings, give meaning to their life experiences. These meanings are interpreted in a research study through the investigator, who brings their own positionality to the study. In other words, there is no reality “out there” waiting to be discovered. Rather, reality is constructed by participants and the evaluator. Creswell and Creswell (2018), Henning and Roberts (2024), and almost any qualitative or general research methods text describe different philosophies about knowledge and knowledge construction (i.e., worldviews) and their implications for research in more depth.
In scholarly research, researchers typically do research guided by their worldview; they focus on questions and methods suitable for that paradigm. Evaluators do not have this luxury. Research paradigm has particular implications for research design in evaluation studies because it has consequences for the types of evaluation questions asked, the methods used to collect data, and the kinds of findings generated, and vice versa: the kinds of questions you want answered will determine the appropriate paradigm and thus the type of methods used. Sometimes the data you have at hand will drive the kinds of questions you can ask and answer. This is especially true for assessment and evaluation projects. Most researchers these days are familiar with research traditions fitting both quantitative and qualitative paradigms. Some questions about the social world and programs lend themselves to a quantitative approach (e.g., is there a difference in performance, does the program cause the outcome) and some lend themselves to qualitative approaches (e.g., how do people understand and experience a problem). Comprehensive evaluations often employ mixed methods, using both quantitative and qualitative methods to answer questions about program effectiveness.
Recent books about assessment (e.g., Henning & Roberts, 2024) identify other worldviews that might be appropriate for use in assessment work on college campuses, such as those emanating from Critical Race Theory or an Indigenous worldview. It is important to consider these frameworks. Just be aware that traditional social science research methods are based on a Western conception of knowledge and how it is constructed and valued, and may not be consistent with the assumptions of other worldviews. New and different ways of knowing also require new and different ways of thinking about what constitutes data and how to collect and interpret it. Most notably, outcomes-oriented assessment and evaluation are based on certain fundamental assumptions, beginning with the notion that what participants should know or be able to do as a result of participation in a program can be identified, made explicit (made real), taught, and measured. These assumptions have implications about appropriate methods for assessing outcome attainment.
One alternative to outcomes-focused assessment, goal-free assessment, focuses on what program participants can do regardless of the program’s intended outcomes. Although a valuable approach to assessment and evaluation, it is an alternative to outcomes-based assessment, and its assumptions are at odds with an outcomes-oriented approach.
Research Design
The second issue of critical importance is research design. This is true for qualitative research as well as quantitative, although the focus of this and the next two chapters is mostly on designs for quantitative studies. Rovai, Baker, and Ponton (2014) define research design as “a logical blueprint for research that focuses on the logical structure of the research and identifies how research participants are grouped and when data are to be collected” (p. 49). Creswell and Creswell (2018) explain that research designs provide specific direction for the procedures in a study or evaluation, as well as determine how data are analyzed and what conclusions can be drawn.
The most rigorous quantitative research designs seek to rule out as many alternative explanations, or threats to the validity of findings (how might you be wrong?), as possible in order to determine whether a program causes the desired outcomes. Other quantitative designs allow the evaluator to describe findings and to establish relationships between and among program participation and outcomes, but not to determine that the intervention caused the outcome.
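Because control over who participates and when is what separates the most rigorous designs from the rest, a minimal sketch may help make the idea concrete. The Python example below uses hypothetical student names to show random assignment, the mechanism that experimental designs (covered in Chapter 17) rely on to rule out alternative explanations; it is an illustration, not a complete evaluation procedure.

```python
import random

# Minimal sketch of random assignment with hypothetical students.
# Randomly splitting participants into treatment and control groups makes the
# groups equivalent on average, which is what rules out rival explanations.

students = ["Ana", "Ben", "Chen", "Dana", "Eli", "Fatima"]

random.shuffle(students)            # randomize the order of students
midpoint = len(students) // 2
treatment = students[:midpoint]     # e.g., take the program this term
control = students[midpoint:]       # e.g., wait-listed until next term

print("Treatment group:", treatment)
print("Control group:", control)
```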
Most qualitative designs seek to provide rich understandings of participant experiences with problems and programs; however, given their small, purposive sampling strategies (typically talking with people who participated in a program), qualitative methods do not easily allow evaluators to establish correlation or causation or to examine differences. One could also argue that the interpretive nature of qualitative data poses challenges to ascribing participant knowledge, skill, behavior, and attitudinal outcomes to program participation, as qualitative findings are often co-constructed by interviewer and interviewee.
These two issues, knowledge paradigm and design, are crucial in social science research and especially so in evaluation studies in which the goal is to understand the effect of a program on participant outcomes. Some designs allow one to describe outcomes, while others are necessary to conclude that outcomes are caused by the program. Once an appropriate design is chosen, suitable methods of data collection and analysis need to be identified. Qualitative and quantitative methods and data can be used to assess needs and program operation and implementation; however, quantitative data are typical for outcomes assessment and are often preferred by decision makers as being easier to collect and analyze and as providing more convincing and generalizable findings about outcomes. Standard methods for collecting and analyzing qualitative and quantitative data, such as interviews, surveys, tests, and inventories, can be used in both descriptive and mixed methods evaluations. Indeed, even when seemingly qualitative data are collected in evaluations, they are often converted into numbers for analysis purposes! The one case of outcomes assessment well suited to qualitative methods is goal-free evaluation, which examines outcomes without regard to the stated purposes of a program. See Chapter 18.
An Example Case
To illustrate the designs described in the next several chapters, I use the example of a hypothetical first-year orientation course, which I, not surprisingly, call UNIV 101. For this example, UNIV 101 is a one-credit-hour course available to all new students. Although the course is not required, all first-year students are strongly encouraged to take it, and students with weak high school records are specifically encouraged to do so. Desired hypothetical outcomes include the following:
- Students will be able to identify a variety of support services available to them and the purpose of the services as indicated by scoring above 70% on a quiz of campus support services.
- Eighty percent of students who take the course will be able to locate ten academic sources through the library and will be able to identify their strengths and limitations.
- Students who take the course will be engaged academically and socially as measured by scores on a student engagement scale patterned after the National Survey of Student Engagement.
- Eighty-five percent of students who take UNIV 101 will be retained to the second semester. (An important outcome but technically not a student learning outcome.)
University administrators likely have other operational goals for UNIV 101 that they would assess using measures of student success. These include the number and percentage of first-year students taking the course, satisfaction with the course, and retention to the next semester, among others.
Outcomes will be assessed using the following methods:
| Outcome | Way Outcome is Assessed | Type |
| --- | --- | --- |
| Knowledge of support services | Results of quiz about campus services | Direct |
| Ability to locate and critique sources | Rubric applied to library assignment | Direct |
| Engaged academically and socially | Survey assessing degree of engagement | Indirect |
| Retained to second semester | Student is enrolled in 2nd semester | Direct |
Instructors in all sections spend at least two class sessions specifically on campus services and give a quiz on the services offered. Additionally, all instructors assign the same library assignment, which requires students to locate and critique sources on a specific topic. Program coordinators have engaged in assessment of the program’s operational features and are confident that the UNIV 101 syllabus covers the topics being assessed and, even more importantly, that instructors are covering the material in their sections. Data from the student course evaluation instrument suggest that students are satisfied that the course meets the stated course goals and that the instructors are competent.
Note: Suskie (2018) would likely consider these to be student success outcomes, with perhaps some of them also falling into the student learning category.
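To make the assessment plan concrete, here is a minimal sketch, in Python, of how the outcome criteria above might be checked. The student records, field names, and values are hypothetical assumptions invented for illustration; they are not data from any actual UNIV 101 course.

```python
# Hypothetical UNIV 101 records: quiz score (%), number of academic sources
# located, engagement scale score (1-5), and enrollment in the 2nd semester.
# All values below are invented for illustration.
records = [
    {"quiz": 85, "sources": 10, "engagement": 3.8, "retained": True},
    {"quiz": 65, "sources": 8,  "engagement": 2.9, "retained": True},
    {"quiz": 90, "sources": 12, "engagement": 4.2, "retained": False},
    {"quiz": 75, "sources": 10, "engagement": 3.5, "retained": True},
]
n = len(records)

# Outcome 1: students score above 70% on the campus-services quiz.
pct_quiz = 100 * sum(r["quiz"] > 70 for r in records) / n

# Outcome 2: 80% of students locate at least ten academic sources.
pct_sources = 100 * sum(r["sources"] >= 10 for r in records) / n

# Outcome 3: engagement, summarized here by the mean scale score.
mean_engagement = sum(r["engagement"] for r in records) / n

# Outcome 4: 85% of students are retained to the second semester.
pct_retained = 100 * sum(r["retained"] for r in records) / n

print(f"Scored above 70% on quiz: {pct_quiz:.0f}% of students")
print(f"Located 10+ sources: {pct_sources:.0f}% (criterion: 80%)")
print(f"Mean engagement score: {mean_engagement:.1f} on a 1-5 scale")
print(f"Retained to 2nd semester: {pct_retained:.0f}% (criterion: 85%)")
```

Note that computations like these only describe whether criteria were met; as the rest of this chapter emphasizes, they say nothing by themselves about whether UNIV 101 caused the results.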
Research Designs: A Brief Overview
Program evaluation is a particular type of social science research whose general goal is to determine the effectiveness of a program or intervention. Because of this specific purpose, the discussion of research designs in this and the next chapter is centered around two questions about the program or intervention: 1) From whom do you have, or can you collect, data? That is, do you have data from both participants and a similar group of nonparticipants, or only from participants? And 2) Can you control who gets the treatment and when? In quantitative studies, particularly evaluations that want to know whether a program or intervention caused the outcome, the structure or design of the study is critical. Typical research methods texts group quantitative research designs for assessing outcomes into three categories: descriptive designs, causal comparative designs, and experimental designs. Mertler (2016) identifies correlational studies as a separate category, but in higher education many descriptive studies also include correlational questions, so I include them as one type of descriptive analysis. Research design is important because it structures not only how a study is conducted but also how and when participants experience the intervention, when data are gathered, the types of conclusions that can be drawn from the results, and the confidence with which the findings can be attributed to the program. Three essential questions are fundamental in determining which design can and should be used (a simple sketch following this list illustrates how the answers point to a design):
- What questions do you want to answer? What questions will your data allow you to answer?
- From whom do you have, or can you collect, data? Specifically, do you have (or can you get) data only from individuals who participated in the program, or also from a group of equivalent individuals who did not?
- Can you manipulate who gets the intervention (participates in the program) and when they get it?
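As a rough illustration of how the answers to the second and third questions point toward a design family, consider the following sketch. The function and its labels are my own simplification of the categories discussed in this and the next two chapters, not a formal taxonomy.

```python
# Heuristic mapping from the two data/control questions to a design family.
# This is a simplification for illustration, not a formal decision rule.

def design_family(has_comparison_group: bool, controls_assignment: bool) -> str:
    if not has_comparison_group:
        # Data from participants only: describe outcomes and relationships,
        # but causal claims about the program are off the table.
        return "descriptive/correlational (Chapter 16)"
    if not controls_assignment:
        # Participants and nonparticipants, but no control over who
        # participates: compare groups, with weaker causal claims.
        return "causal comparative (Chapter 17)"
    # A comparison group plus control over who gets the intervention and
    # when: the strongest basis for attributing outcomes to the program.
    return "experimental (Chapter 17)"

print(design_family(has_comparison_group=False, controls_assignment=False))
print(design_family(has_comparison_group=True, controls_assignment=True))
```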
Specific research designs are discussed in the next three chapters. Chapter 16 covers descriptive and correlational studies, which seek to determine the effects of a program on participants when you have data only from those who participated and no control group is present. Chapter 17 introduces designs used for comparing outcomes for participants and nonparticipants, and Chapter 18 provides a brief overview of qualitative and mixed methods designs. Because this is a text on program evaluation, the effect of interventions (programs) is the central focus. Thus, whether participants in evaluations have experienced the intervention or not is key to the types of studies that can be done and, significantly, to the conclusions that can be drawn.
There are dozens of qualitative and quantitative methods books, each of which discusses specific research designs and methods in far more detail than I do here. For in-depth information on specific methods, such as how to design a good questionnaire or develop a valid and reliable pre- and posttest measure, you would do well to find sources specifically about developing measures and surveys. Mertler (2016) offers a particularly good overview of basic research designs. See Creswell and Creswell (2018), Henning and Roberts (2024), or Schuh et al. (2016) as alternatives. The latter two are focused more specifically on student learning outcomes in student affairs programs.