18 Qualitative, Mixed Methods Designs, and Concluding Thoughts

Introduction

In this final chapter I briefly discuss the use of qualitative methods in assessment and evaluation, goal-free evaluation, mixed methods, routine monitoring of outcomes, and choosing a research design, and I close with some comments about using qualitative and quantitative methods to achieve equity in higher education.

Qualitative Methods

Qualitative methods are well-suited for assessments that seek to understand problems and experiences (as in a needs assessment), to determine whether programs are implemented well, and, in some cases, to describe outcomes, either specific outcomes or, in a goal-free way, whatever participants learned regardless of the intended outcomes. Traditional qualitative methods (interviews, focus groups) have limitations for assessing outcomes and as a basis for judgments about a program’s role in causing those outcomes.

There are different forms of qualitative research, all of which seek in some way to understand and interpret participants’ experiences and perspectives but which have different points of emphasis. For example, conducting an ethnography presumes embedding oneself in a setting for a long period of time to understand the culture of a place and its inhabitants. Narrative inquiry focuses on stories, grounded theory attempts to develop theory, and phenomenology zeroes in on the essences of experiences. Case study lends itself to use in evaluation with its focus on a bounded system and its collection and use of multiple sorts of data to understand a phenomenon and outcomes in context. Basic interpretive interview studies borrow aspects of other qualitative traditions. Each qualitative tradition has a specific focus that suggests specific approaches to design, data collection, and analysis.

When qualitative methods are used in evaluation studies, they most often follow basic interpretive or case study approaches using interviews and focus groups. These techniques are quite common in needs assessment, in assessing implementation or operational aspects of a program, and as complements to data collected in comprehensive evaluations. They are less frequently employed in SLO assessment because of the time involved and the difficulty of interviewing many students or other participants. Administrators may also question the validity of qualitative data with their small samples. Additionally, the precise, operational definitions of outcomes so valued in outcome assessment and evaluation are antithetical to many of the philosophical assumptions undergirding qualitative research.

An example illustrates some of the challenges. For years, the University of Kansas assessed general education outcomes using interviews. Students were randomly invited to participate in (and be paid for) an interview with a panel of three faculty members. A committee of faculty participants came up with several question options for each outcome, and faculty interviewers participated in “training.” Following a 30- to 45-minute interview, each faculty member rated the student on the outcomes using a scale from 1 to 5. The process was quite interesting, not to mention labor intensive. The participating faculty members learned a lot, and many said they changed the way they taught as a result of having done the interviews. The outcome data, however, were viewed as suspect by academic deans and the faculty at large. Even when the data analysts established inter-rater reliability and conducted fairly sophisticated statistical analyses, administrators had a hard time finding the results credible because they were based on small sample sizes. This case illustrates at least three challenges for individuals wanting to use qualitative methods in assessment: 1) the process was time and labor intensive; 2) even though data were collected through interviews, they were converted to rating-scale scores (1-5) for quantitative analysis purposes; and 3) it was hard to convince decision makers that the data were as good as those obtained from “tests” or other presumably objective measures.
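To make the reliability step concrete, the sketch below shows one way such agreement might be checked. The ratings, the three-rater panel, and the choice of a quadratically weighted kappa averaged over rater pairs are illustrative assumptions, not the statistics actually used in the Kansas study.

```python
# A minimal sketch of checking inter-rater agreement on 1-5 interview ratings.
# The ratings below are invented for illustration; quadratically weighted
# Cohen's kappa, averaged over rater pairs, is one reasonable agreement index.
from itertools import combinations

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Rows = students, columns = the three faculty raters on one outcome (1-5 scale).
ratings = np.array([
    [4, 4, 3],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [1, 2, 2],
])

pairwise_kappas = [
    cohen_kappa_score(ratings[:, i], ratings[:, j], weights="quadratic")
    for i, j in combinations(range(ratings.shape[1]), 2)
]
print(f"Mean pairwise weighted kappa: {np.mean(pairwise_kappas):.2f}")
```

If the pairwise kappas are reasonably high, the panel’s ratings can at least be defended as consistent, even if decision makers still worry about sample size.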

Other potentially qualitative methods include applying rubrics to assignments or portfolios. However, while this seems like a qualitative approach on the surface, rubric data are usually reduced to numbers (for example, novice = 1, intermediate = 2, and expert = 3). Assignments are coded, ratings are assigned, and descriptive statistics are reported.
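A minimal sketch of that reduction is shown below; the rubric levels match the example above, but the ratings themselves are invented for illustration.

```python
# Hypothetical rubric ratings for one assignment criterion, reduced to numbers
# (novice=1, intermediate=2, expert=3) so descriptive statistics can be reported.
from collections import Counter
from statistics import mean, median

levels = {"novice": 1, "intermediate": 2, "expert": 3}
ratings = ["novice", "intermediate", "intermediate", "expert", "novice", "expert", "expert"]

scores = [levels[r] for r in ratings]
print("Counts by level:", Counter(ratings))
print(f"Mean: {mean(scores):.2f}  Median: {median(scores)}")
```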

Goal-Free Evaluation

In addition to their use in needs assessment, operation/implementation assessment, and assessing satisfaction, qualitative methods are also particularly appropriate for goal-free evaluation. Most evaluation strategies are either implicitly or explicitly focused on the extent to which a program achieves its intended goals or outcomes. Goal-free evaluation rejects a goal-oriented, outcome-focused approach altogether (Fitzpatrick et al., 2004). Rather than beginning with the program’s stated goals and outcomes as the target of evaluation, the goal-free approach seeks to learn inductively what participants gain from a program regardless of its stated goals. It typically relies on qualitative methods, particularly open-ended interviews in which evaluators set aside the program’s goals, listen to what participants have to say about the program, and make judgments inductively based on what they hear. Goal-free evaluation is also a good source of information about unintended outcomes, which occasionally turn out to be the most interesting findings of all.

Mixed Methods Designs

Comprehensive evaluations, a form of case study design, often employ both qualitative and quantitative methods: interviews and document analysis alongside surveys or large-scale quantitative data sets, for example. Creswell and Plano Clark (2007) define mixed methods this way:

As a method, it [mixed methods] focuses on collecting, analyzing, and mixing both quantitative and qualitative data in a single study or series of studies. Its central premise is that the use of quantitative and qualitative approaches in combination provides a better understanding of research problems than either approach alone. (p. 5)

As such, mixed methods are ideally suited to comprehensive program evaluations. Proponents of mixed methods research stress that mixed methods involve more than simply collecting interview and survey data. To better understand this, it is helpful to briefly review specific mixed methods designs and data collection strategies. These are discussed in more detail in Creswell and Plano Clark (2007) and Creswell and Creswell (2018).

  1. Triangulation: This design involves “different but complementary data on the same topic” (Creswell & Plano Clark, 2007, p. 62). Triangulation involves both quantitative and qualitative data collected during the same time frame, with each given equal weight. Note: triangulation usually involves data from different types of sources (observations and interviews, for example, not merely interviews of several interviewees of the same type).
  2. Embedded design: In embedded designs, data from one method serve a secondary role while data from the other provide the primary evidence.
  3. Explanatory designs are described as “two-phase” studies in which qualitative data are used to explain quantitative findings. They are called “two-phase” because the quantitative data are collected first and the qualitative data are collected in a second phase to help explain them.
  4. Exploratory designs reverse the order of explanatory designs: qualitative data are collected first and inform the second phase. An example would be when a concept is not sufficiently clear to serve as the basis for developing a good survey. Interviews are conducted and serve as the grounding for a subsequent survey that tests the concept in a larger population.

As Creswell and Plano Clark (2007) explain, several decisions are involved in choosing the appropriate mixed methods design. First is the timing or sequence decision: does the qualitative phase come before the quantitative, or vice versa? Second is a decision about which data will be given priority, or will “count” more, toward answering the research questions. Then there is the “mixing” decision: how will the two datasets be mixed in answering the evaluation questions? Will they be merged, embedded, or connected? In general, for comprehensive evaluations, mixed methods seem to offer an appealing option.

Routine Monitoring of Outcomes

There are many types of outcomes or measures of performance that can be routinely monitored: attendance, participation, trends in ACT/GRE scores, retention, graduation rates, faculty and staff satisfaction, and campus climate, to name a few. Suskie’s student success measures lend themselves to monitoring. Routine monitoring studies are usually descriptive. To monitor outcomes, it is necessary to identify the outcomes and simple indicators of them so that data can easily and routinely be collected, recorded, and tracked, typically just descriptively. Modern software allows storage of enormous amounts of data and the creation of “dashboards” to use for monitoring. Sometimes the indicators are obvious. If the goal of a program is to reduce the number of reported incidents of sexual misconduct, program administrators would track the number of reported incidents on a semester or yearly basis. However, to monitor student, faculty, and staff knowledge of how and where to report incidents of sexual misconduct, you would have to identify an indicator of “knowledge of how and where” that you would then track by collecting measures of that knowledge systematically at repeated times. Perhaps the indicator is the level of knowledge reported on an annual survey administered to a random sample of students. Many universities routinely collect data about the college experience from seniors that can then be used to monitor changes in a variety of experiences. National surveys such as the NSSE or the CCSSE (Community College Survey of Student Engagement) provide engagement benchmark indicators that can be used for outcome monitoring when the survey is administered regularly and data on the benchmarks are tracked and compared from year to year.
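As a hedged illustration of what such tracking might look like, the sketch below follows the sexual misconduct example; the yearly counts are invented, and a real dashboard would pull from institutional data systems rather than a hard-coded dictionary.

```python
# A minimal sketch of routine outcome monitoring: track a simple indicator
# (reported incidents per academic year, values invented) and report the
# year-over-year change alongside the raw counts.
reported_incidents = {
    "2019-20": 41,
    "2020-21": 37,
    "2021-22": 33,
    "2022-23": 35,
}

previous = None
for year, count in reported_incidents.items():
    change = "" if previous is None else f" (change from prior year: {count - previous:+d})"
    print(f"{year}: {count} reported incidents{change}")
    previous = count
```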

Obviously, in order to be useful, indicators must be closely related to the outcome of interest, must be something the program of interest can plausibly influence, and must be measured consistently over time. Outcomes that are not easily quantifiable do not lend themselves to monitoring. One further caution about routine monitoring of outcomes: the fact that universities can store huge amounts of data is not helpful unless those data are meaningful to users and reported in ways that are easily interpretable (Middaugh, 2006; Suskie, 2009). Data dumps help no one make better decisions!

To monitor outcomes, data must be routinely collected, stored, and analyzed. For this reason, the outcomes that are monitored are usually fairly straightforward: participation, retention, graduation, and perhaps more specific program-level outcomes.

Building an Argument through Triangulation

Although one cannot overlook the importance of research design in determining the degree to which a program impacts or causes outcomes, it is possible to make arguments approaching causation absent sophisticated research designs. One way to do this is to have multiple sources of data pointing in the same direction about a program and its effects. Multiple years of systematically collected data that say the same thing also provide additional support for drawing a connection between program and outcome (or lack thereof). For example, as limited as one-shot or pretest/posttest designs are, if you can demonstrate positive growth over several years with different groups of participants in the same program, using the same pre- and posttests, you can likely make a good case that the program works. There is no magical statistical tool for this; it is a matter of marshaling evidence from multiple sources to build a good, solid case.
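The sketch below illustrates this kind of evidence marshaling under assumed data: the cohort pre/posttest means are invented, and the point is simply that a consistent pattern of gains across cohorts is itself the argument.

```python
# Marshaling simple evidence across cohorts: if every cohort shows a positive
# pre-to-post gain on the same measure, the pattern itself supports the case.
# The cohort means below are invented for illustration.
cohorts = {
    "2020": {"pre": 61.2, "post": 70.4},
    "2021": {"pre": 59.8, "post": 69.1},
    "2022": {"pre": 62.5, "post": 71.9},
    "2023": {"pre": 60.7, "post": 68.3},
}

gains = {year: scores["post"] - scores["pre"] for year, scores in cohorts.items()}
for year, gain in gains.items():
    print(f"Cohort {year}: mean gain {gain:+.1f} points")

print("Positive gain in every cohort:", all(g > 0 for g in gains.values()))
```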

Methods Choices: A Summary

With all of the options presented in this and the previous three chapters, how do you decide which design to use? Answering the following questions can help you decide.

  1. What is the purpose of your evaluation? What do you want to learn?
  2. From whom do you have data or from whom can you collect it?
  3. Has the intervention already occurred, or can you administer the intervention after assigning participants to groups? That is, what degree of control do you have over assigning participants to actual treatment and control groups?
  4. Who is the audience, and for what purpose will the data be used?
  5. How comfortable are you with the various research methods and accompanying statistical analysis methods? If you don’t have the necessary skills, do you have access to people who do?
  6. What time and resources are available to conduct the evaluation/assessment project?

The method you choose for a particular evaluation study will depend on many factors, beginning with what it is you most want to know. The various choices are summarized below.

Method Comparison
Descriptive
  Groups: One group (participants only)
  Methods: Surveys (one-shot or pretest/posttest), direct methods, qualitative interviews or focus groups, observations, goal-free approaches
  Characteristics: No manipulation by researchers; individuals are not assigned to treatment or control groups
  Conclusions: What participants learned or gained; whether some groups of participants achieve better outcomes than others

Correlational
  Groups: One group (participants only)
  Methods: Surveys
  Characteristics: No manipulation by researchers; individuals are not assigned to treatment or control groups
  Conclusions: Whether there are relationships between participant characteristics and outcomes; whether one or more variables predict the outcome

Pretest/posttest
  Groups: One group (participants only)
  Methods: The same measure administered before and after the program
  Characteristics: Participants only; no comparison group
  Conclusions: Whether posttest scores show an increase or decrease

Causal comparative (usually involves descriptive and correlational analyses as well)
  Groups: Two or more groups: participants and non-participants, or participants in different versions of a program
  Methods: Surveys, existing data; groups identified either by selecting for them in the data or by intentionally creating matched groups
  Characteristics: Studies conditions that have already occurred; the researcher has no control over the intervention
  Conclusions: Whether participation is related to the outcome; whether participants and non-participants differ on outcomes; whether participation explains the outcome

Quasi-experimental
  Groups: Two or more groups; can be intact, existing groups
  Methods: Intact groups, one receiving the treatment and the other receiving no treatment or a different treatment; the same outcome measure for both
  Characteristics: Individuals are assigned to groups nonrandomly (e.g., intact classrooms), but the researcher can manipulate the treatment
  Conclusions: Whether the treatment group does better than the no-treatment group; whether the treatment causes the outcomes

Experimental
  Groups: Two or more groups created for the experiment
  Methods: Random assignment of individuals to treatment and control groups; the same outcome measures for both
  Characteristics: Researchers randomly assign participants to groups and administer the treatment and control conditions
  Conclusions: Whether the treatment group does better than the control group; causation

Using Assessment and Evaluation for Equity

If you take nothing else away from this book, it should be this: assessment and evaluation are about asking questions about the programs and policies colleges and universities put in place, using sound methods to gather useful data to answer those questions, and using that information to improve how those programs and policies serve their constituents and society. It is incumbent on those in leadership positions to be conversant in the questions, methods, and appropriate use of assessment and evaluation results. This includes a responsibility, as doers and users of assessment and evaluation, to adopt an equity-minded approach and to ask questions such as: What assumptions underlie the questions an evaluation seeks to answer, the methods used to collect data, the data collected, and the findings? These assumptions form the lens through which leaders interpret and use the methods of assessment and evaluation (quantitative and qualitative), and they can potentially obscure structural inequities and even unintentionally perpetuate them. Done well and with equity in mind, these very same processes are essential to illuminating—and to dismantling—inequities that prevent individuals, and the colleges and universities that educate them, from achieving their full potential.
