16 Non-experimental Descriptive Designs

Key Topics

  • Describing participants and outcomes
  • Relating participant characteristics and outcomes
  • Surveys, tests, rubrics, and databases

Introduction

This chapter is focused on descriptive designs used when you have data from program participants only. The chapter includes appropriate methods of data collection and analysis, and the strengths and limitations of descriptive studies. Descriptive studies allow you to describe the level of outcome attainment and differences in attainment for different sub-groups of participants.

Key Terms: A Refresher

Before I go on to discuss various research designs, it is useful to step back and review some commonly used terms. Quantitative studies involve variables: characteristics that participants bring with them into a program, characteristics of the program itself, and those that capture what participants take away from the program (e.g., knowledge, skills, and attitudes). There are two particularly useful attributes of variables to consider: variable types and levels of measurement. To be a variable, there must be variation. For example, if the variable of interest is place of residence, then you have to have participants who live on and off campus in order for place of residence to be a variable. If all of your participants live on campus, then whatever analysis you do is about on-campus residents.

Types of Variables: Role

There are several important ways of classifying variables. The first involves the role they play in research and the second concerns how they are measured.

  • Independent variables are those factors that one expects to act on or to influence the outcome. In evaluation research, the intervention itself is considered an independent variable in causal-comparative, experimental, and quasi-experimental studies because it is supposed to influence the outcome. But other types of variables might be considered independent variables, depending on the analysis (e.g., place of residence, race, gender, socioeconomic status, Pell eligibility). In the studies we are discussing in this chapter, program involvement is a constant: everyone is a participant. The next chapter will introduce designs that involve comparing results of different versions of the same program, for example, outcomes from an online section of a course vs. an in-person section.
  • Dependent variables are outcomes that are influenced by the independent variables (e.g., grade point average, retention, sense of belonging, workplace satisfaction).
  • Control variables, or covariates, are additional variables (e.g., gender, race, first-generation status) that may interact with the independent and dependent variables and be associated with the outcome. (Note: some variables can be an independent variable in one analysis and a dependent variable in a different analysis. For example, an intervention can contribute to workplace satisfaction; in other situations, workplace satisfaction can be a factor that influences decisions to stay or leave the workplace.)

Types of Variables: Measurement

  • Categorical or nominal variables: responses are nouns (e.g., gender, religion, on or off campus, visa status).
  • Ordinal variables classify responses and rank them according to how much of the characteristic being measured they possess (e.g., strongly agree to strongly disagree).
  • Interval variables: values are numbers with equal intervals between points on the scale (e.g., dollars, age, grade point average).
  • Ratio variables: like interval variables, but with a true zero point (e.g., height and weight; 90 pounds is twice 45 pounds).
  • Dichotomous: two responses—retained: yes or no.

    A computer program cannot determine whether a variable is an independent, dependent, or control variable. This is a decision you, the evaluator, must make based on the study, theory, other research, or common sense. Neither can the computer tell that a dichotomous categorical variable you have coded using numbers such as 1=resident and 2=non-resident is not an interval level measure or that, in this case, the numbers represent two groups. This is something you must keep in mind when you give a computer instructions to generate descriptive data. A computer can compute a mean for place of residence, but the mean would make no sense.
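To illustrate, here is a minimal sketch in Python (using the pandas library and made-up data with hypothetical variable names) of why counts and percentages, not a mean, are the right summary for a numerically coded categorical variable:

```python
import pandas as pd

# Hypothetical data: place of residence coded 1 = on campus, 2 = off campus.
students = pd.DataFrame({"residence": [1, 2, 1, 1, 2, 1]})

# The software will happily compute a mean (about 1.33 here),
# but that number is meaningless for a coded categorical variable.
print(students["residence"].mean())

# Counts and percentages are the appropriate summary.
print(students["residence"].value_counts())
print(students["residence"].value_counts(normalize=True) * 100)
```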

Population vs. Sample

Descriptive studies, particularly those involving survey research, are concerned with the difference between a population and a sample. A population is the entire group of individuals who meet certain characteristics of interest. In our example of UNIV101, all first-year students would be the population, whereas a sample is a subset of that group selected for a particular study. There are a variety of sampling strategies, but generally speaking, a random sample of individuals from the population of interest is assumed to be representative of the entire population.
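As a simple illustration (a made-up roster and hypothetical variable names), a random sample can be drawn from a population list with a few lines of code:

```python
import pandas as pd

# Hypothetical population roster: every first-year student (the population).
population = pd.DataFrame({"student_id": range(1, 2001)})

# A simple random sample of 200 students (random_state makes the draw repeatable).
sample = population.sample(n=200, random_state=42)

print(len(population), len(sample))  # 2000 200
```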

Descriptive and Correlational Designs

As Mertler (2019) notes, descriptive designs allow one to describe the status of “individuals, settings, conditions, or events” (p. 95). As relevant for program evaluation, descriptive designs allow you to focus on how well the individuals who participate in your programs perform on outcomes of interest. These designs are appropriate for describing current settings and conditions to understand needs. They allow you to describe the characteristics, conditions, and outcomes of a group of people who participate in a program and to see if these factors are related to each other and/or to the outcomes. These characteristics make descriptive designs good for conducting needs assessments and for operation and implementation assessments.

 With descriptive and correlational designs, you have no control group of non-participants and make no attempt to manipulate “individual, conditions, or events” (Mertler, 2019, p. 95). That is, no group is assigned to a treatment or control group or to a different version of a program. In assessment and evaluation, descriptive studies do not allow you to conclude that the program caused the outcomes.

Use of descriptive research is very common in needs assessments, operation and implementation assessments, and outcomes assessment. For outcomes assessment, the questions you can answer are who your participants are and how they performed on outcomes of interest. In a needs assessment, descriptive designs allow you to describe the nature of the gap between the “real” and the “ideal” and the needs of a target population; in an operation/implementation study, they allow you to determine participant satisfaction with services, among other things. Almost all student learning outcomes assessment is descriptive.

NOTE: There is a difference between descriptive research designs and descriptive statistics. The latter can be, and are, used to describe and correlate data from any quantitative study, while descriptive studies, as used in evaluation, involve program participants only.

Descriptive Studies

The UNIV101 case example includes only students who enroll in the course; thus, your task is to describe the students and the outcomes for those students. The questions of interest in our example are who the participants are and whether, and at what level, students in UNIV101 achieved the desired outcomes.

Higher education assessment—especially student learning outcomes assessment—commonly involves collecting data from individuals who participated in a program (with no control group of non-participants) and so descriptive designs are appropriate. Moreover, in most cases of student learning outcomes assessment, it makes no sense to compare learning outcomes for students who were history majors with those who majored in some other subject. I would argue the same is true for typical student affairs, co-curricular, and student success programs. The immediate, specific outcomes from a recreation service program are very different from the Office of Money Management program outcomes described earlier in the book. This does not mean that descriptive studies involve no comparisons, as will be shown below. It simply means that comparisons made are within and among participants, for example, on-campus and off-campus students, full-time faculty compared to part-time.

Evaluators use descriptive statistics (described below) to describe participants and outcomes using frequencies, percentages, means, medians, modes, and standard deviations. For the UNIV101 example, descriptive designs and associated methods can be used to answer questions such as:

  • What are the demographic characteristics of students who enrolled in and completed UNIV101? In other words, who takes UNIV101?
  • How did students in UNIV101 perform on the various outcome measures?
  • Do different sub-groups of UNIV101 students perform differently on outcome measures?
  • What percentage of students who take UNIV101 are retained to the next semester? What percentage graduate? How long does it take them to graduate?

Similarly, if you are gathering data for a needs assessment, you might ask the following questions:

  • What are the characteristics of the population affected by the problem or gap?
  • What are their needs? Do needs vary for different sub-populations?
  • How significant are the needs? What is the impact of the issue of need on student success?
  • How serious are their needs?

Descriptive questions can also guide an operational/implementation study and, similarly, assessment of student success:

  • Who participated and who didn’t?
  • How satisfied were participants with instruction and quality of materials?
  • What did participants like about the program and what improvements would they make?

Designs involving pre- and post-tests of participants will be discussed in the next chapter.

Correlational Studies

Correlational study designs go one step further to address questions about the relationship between and among variables in a study and address questions about the ability of variables to predict an outcome for participants (Creswell & Creswell, 2018; Mertler, 2019). As Mertler notes “The basic design of a correlational study is straightforward. Scores on two or more variables of interest are collected for each member of a sample. The scores for the variables are correlated” (p. 103). This analysis tells you if status on one variable reflects status on another. Correlational research can also be used to predict future behavior based on what is known from current data.

In higher education research, many studies involve both description and correlation. Data from the variables in a study are first described and then statistically analyzed to see if they are related to each other, for example, to determine whether high school grade point average is related to performance in UNIV101 and, if so, how strongly.

As Mertler notes, correlational research designs (as distinct from correlation as a statistical procedure) involve “a single group of people who are quantitatively measured on two or more characteristics (i.e., variables) that already happened to them” (p. 101). In the UNIV101 example, this could involve using data from UNIV101 students to determine the relationship between on-campus/off-campus residence and scores on the various outcome measures in UNIV101, or whether high school grade point average is related to grade in UNIV101. The same kinds of data, often from surveys or databases, are used to answer descriptive and correlational questions.

Specifically, as related to the UNIV101 example, correlational research can answer questions such as:

  • Are student characteristics (e.g., high school grade point average, gender, race, place of residence) related to outcome scores?
  • How strongly are the variables related to each other?
  • Do student characteristics predict outcome scores for students who took UNIV101?
  • Does grade in UNIV101 predict retention to the next semester?

NOTE: An evaluation involving a single group can address correlation questions, but correlation is also a statistic that is often computed in more complex studies. I will now turn to discussing surveys and other methods for collecting descriptive and correlational data and then describe statistics typically used to make sense of the data.

Methods of Data Collection

Data for descriptive and correlational studies come from a variety of sources. Please note that some of the methods described here can be used for more complex designs discussed in the next chapter.

Tests and Scales

Some of the measures captured for UNIV101 involve tests or quizzes. Tests are direct assessments of student learning. There is an entire sub-discipline in educational statistics devoted to test construction and measurement. Nationally normed tests such as the ACT, SAT, and GRE conform to rigorous standards of validity and reliability, while tests written by individual instructors may not. The quality of a measure, whether it be a knowledge test for a UNIV101 unit or a pre- and post-test, is important to understanding the outcomes of a program and what you can conclude about the program. In descriptive studies, test and survey data are treated similarly for analysis purposes. Even essays can be coded for level of achievement and used as descriptive data.

Similarly, numerous scales have been developed to measure psychological or sociological constructs such as student engagement, sense of belonging, career engagement, to name just a few. Tests and scales can be used in descriptive and correlational studies and also in quasi-experimental and experimental studies when a comparison group is involved.

Surveys

In addition to tests and scales, data to answer the descriptive questions above, and for other types of program assessment and evaluation in higher education, will often come from one of two sources: surveys or data about program participation and activities collected over time and recorded in a database. Surveys (the general method), questionnaires (generally the specific data collection tool), and existing databases are very common means of collecting and storing data in social science research and in program evaluation. As noted above, surveys can be constructed to capture the characteristics of the respondents from a sample or population and their attitudes, knowledge, opinions, behaviors, or experiences. Data from surveys are then described (for example, 30% of the respondents were first-generation college students, 40% reported learning a lot) or used to investigate relationships between and among variables, e.g., whether students' place of residence (on or off campus) is related to levels of engagement. Data on variables of interest can be correlated to see if there is a relationship between them. Questionnaires can ask open-ended questions, but they are typically best suited to forced-choice items, such as Likert-scale items with response options ranging from one to four or five.

Surveys often include one or more scales that measure attitudes, knowledge, and behaviors (for example, sense of belonging or quiz scores on campus resources) as well as demographic information. These can be created by the evaluator or can include existing scales developed by someone else. Sense of belonging is a popular outcome in higher education survey research for which existing measures exist. It makes little sense to invent one's own sense of belonging measure if one already exists, especially one that has been shown to be valid and reliable. Quizzes may take a variety of forms but usually involve some calculation of the number of correct and incorrect answers.

The efficacy of surveys is increased if one has a representative random sample of the population being studied or a large enough number of responses from a survey of an entire population (a class, for example) to represent that population. The former allows you to generalize from a smaller group to a larger one, and the latter allows you to say that the results reflect those of the population.

NOTE: Surveys can also provide data for causal comparative studies described in the next chapter.

Types of Surveys

There are several different types of survey designs with which you should be familiar.

Descriptive survey

A descriptive survey is given at one point in time to capture a description of the respondents and their attitudes, opinions, etc., at that point in time. Database data for faculty, students, and staff can be used in the same way as survey data.

Cross-sectional survey

A cross-sectional survey examines the characteristics of different groups of people at one point in time (e.g., first-year students and seniors). In this kind of survey, it is common to compare the characteristics and responses of one group with another, for example, first-year students with seniors. The National Survey of Student Engagement (NSSE) is often used to make claims about growth from the first year to the senior year when, in fact, the data are not from the same individuals collected twice, as would be the case in a longitudinal study. Rather, the data are collected at the same time from the two groups but are used as if they were longitudinal data. (For our UNIV101 example, it makes no sense conceptually to compare first-year students' scores on UNIV101 with those of seniors.)

Longitudinal survey

This type of study follows one group of participants over time by collecting data periodically. There are also different approaches to a longitudinal study. (See Mertler, 2019, p. 99.) Longitudinal studies are hard, take a long time to complete, and risk participant dropout. (Using a student database, you could track the progress of first-year students who took UNIV101 over time, but it wouldn't really make much sense to do so given the outcomes listed above.)

Strengths and Limitations of Survey Research

Survey research, or some variant of it, is a very common method in higher education assessment, particularly for needs assessment, implementation assessment, and climate assessment. It can also be used for outcomes assessment. Descriptive, survey-based research is an efficient way of collecting information from a particular group of people and of providing a picture of the respondents. The downside is that if the response rate is not sufficient, it is hard to generalize the findings to the larger population of people one is studying.

Questionnaires, the tool of much quantitative research, must be well written and be understandable to respondents. They must measure what you intend to measure and do so reliably. Once distributed, there is no mechanism for clarifying or changing question wording. Moreover, questionnaires (both questions and possible responses) are written from the perspective of the evaluator or the test/scale author and may not capture how respondents understand an experience. Additionally, the responses are self-reported perceptions. They typically are not based on observed behavior. For example, students could say that they understand ethical behavior; but from a survey you are not able to see them demonstrate it.

Surveys for assessment purposes often ask participants to rate their levels of knowledge and skills and/or to self-report growth in learning. They are participants' estimations, their self-reports, of what they learned, not measures of what they actually learned. There are pros and cons to using self-report data to be considered when determining an assessment method. (See Porter, 2013.) However, relying on self-report data is often unavoidable, especially in student affairs and student success programs, and even in some co-curricular programs. Fraser and Wu (2016) argue that self-report data can be used as a proxy for outcomes when the right questions are asked. When using self-report data, it is always wise to state upfront that they are self-reported, to acknowledge the limitations, and to provide reasons why you used the data you did. In contrast, data from many psychological inventories and tests, such as a validated self-efficacy or sense of belonging scale, are considered to be actual measures of a concept.

Yet another concern about survey research is that of overuse and resulting survey fatigue. Surveys are cheap to administer electronically and are used for multiple purposes. Individuals may receive dozens of them annually. The result is that response rates often suffer. The response rate to a survey needs to be considered when interpreting findings.

Other Sources of Data for Descriptive Studies

Most of the student learning outcomes assessment methods generate data that are used descriptively. Instructors of UNIV101 could have recorded scores on student assignments over several semesters in a database. Data stored in databases such as one created specifically for UNIV101 or other student, faculty, alumni, or staff data, such as routine records kept by an office or in a larger college-wide database, can also be used to complete descriptive studies as noted above. Although not collected via survey, per se, the effect is the same. Typically, the data will include some demographic information as well as outcome data.

Observation is also a descriptive method. An activity is observed and participants are rated on things such as the presence or absence of certain behaviors, frequency of use, and level of performance.

As described in Chapter 10, student work could also be collected in portfolios or through other authentic means and assessed using a rubric to grade submissions. For example, students' exams can be graded using a rubric for which an instructor assigns a level of performance to the exam and its component parts. These levels of performance are assigned numbers which are tabulated for each student, recorded in an Excel spreadsheet, and treated like quantitative survey data for analysis purposes: results are reported as means and percentages. It is also possible to use observation guides to observe and record actual behaviors, which are then counted and treated as quantitative data.
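As a rough sketch (hypothetical rubric levels and made-up ratings), rubric ratings can be converted to numbers and summarized with a mean and a percentage meeting a target level:

```python
import pandas as pd

# Hypothetical rubric levels and the numeric codes assigned to them.
levels = {"Beginning": 1, "Developing": 2, "Proficient": 3, "Exemplary": 4}

# Ratings recorded for five students on one assignment.
ratings = pd.Series(["Proficient", "Developing", "Exemplary", "Proficient", "Beginning"])
scores = ratings.map(levels)

print(scores.mean())               # average rubric score
print((scores >= 3).mean() * 100)  # percent of students at Proficient or above
```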

Analyzing Data: Descriptive Statistics

When a survey is administered and responses are returned, the first thing the researcher does is download the data from the collection tool into an Excel-like spreadsheet that can be used with statistical software. Imagine a huge spreadsheet with data for each individual student enrolled in UNIV101 each semester. That spreadsheet has a row for each student and a column for each demographic characteristic, each piece of performance data (e.g., scores on the library assignment and quiz scores), and the semester the course was taken. The data in the cells must be numeric in order for the computer to process them. So, numerical codes are assigned even when the data are categories such as freshman, sophomore, junior, senior, etc. The evaluator then uses common statistical software to generate frequencies, percentages, means, and standard deviations for each of the variables in the assessment, as appropriate for the variable.

One of the assessment activities in our hypothetical UNIV101 assessment is a survey in which students who take UNIV101 are asked to provide answers to several demographic questions (e.g., gender identity, intended major, place of residence, race, visa status, in-state/out-of-state, Pell eligibility) and items measuring outcomes such as a test of campus service knowledge and self-reported student engagement. The responses that are nominal variables (the answers are nouns) are coded using numbers so that the computer software can make sense of them. The responses for each person on each of these items are recorded in our Excel spreadsheet, or dataset, which would look something like the following:

ID   | Semester      | Gender Identity | Pell Eligibility | Residence (on/off) | Visa Status      | Knowledge of Campus Services (actual grade)
0001 | semester code | 1 (Woman)       | 1 (yes)          | 1 (on)             | 1 (student visa) | 7/10
0002 |               | 2 (Man)         | 2 (no)           | 2 (off)            | 2 (Domestic)     | 4/10
0003 |               | 3 (Non-Binary)  | 1                | 2 (off)            | 2 (Domestic)     | 5/10

    These data could also simply be stored in a routinely maintained database about students who take UNIV101. Perhaps quiz scores given in the course are routinely added for each student.

Note:

Typically, you will end up with two different types of numbers: numerical codes given to nominal or categorical variables (e.g., gender identity, place of residence) so that a computer program can do something with them, and answers that are actual numbers (salary, grade point average, score on a quiz). Even though one would likely assign numbers to variables such as on-campus (1) and off-campus (2) so that statistical software can deal with them, it does not make sense to calculate a mean for gender, race, political party, place of residence, or religion of the students in UNIV101. For those variables, it makes more sense to calculate numbers and percentages. It does make sense to calculate a mean, median, and standard deviation for scores on the library assignment, the campus services quiz, and engagement scores.

Types of Questions and Statistics

Schuh et al. (2016, pp. 174-178) have a very useful way of talking about the different types of questions one can answer in a quantitative study and the corresponding statistical techniques. They group statistics into the following categories based on their purpose. Statistics can be used to:

  • Describe
  • Differ
  • Relate
  • Predict

Each is briefly described below. Note: the statistics described below can be used when you have participants and non-participants; however, as applied in this chapter, we are talking about using the statistics to look at the participants in a program.

Describing the Data

With data on students enrolled in UNIV101 from a survey, test, or database, it is possible to use descriptive statistics to determine the number and percentage of respondents in each demographic category and their scores on each outcome measure. For ordinal or interval demographic characteristics (age, ACT score), scale scores such as engagement, and ordinal values such as those from a Likert scale, you would calculate and report data such as:

  • mean
  • median
  • mode
  • standard deviation, as appropriate for each outcome.

For categorical variables, such as place of residence, you would calculate and report the number and percentage of students living on campus and off campus, or who are domestic students versus those on student visas. Likert data (strongly agree to strongly disagree) are usually treated as interval data for analysis purposes, as are test data. This means that one can calculate standard descriptive statistics such as the mean, median, and mode using Likert scale data. However, you can also treat Likert scale items as categorical variables and compute the number and percentage of respondents who give each response: strongly agree, agree, disagree, or strongly disagree, for example.
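A brief sketch (made-up UNIV101-style data and hypothetical column names) of these descriptive summaries:

```python
import pandas as pd

df = pd.DataFrame({
    "residence":       ["on", "off", "on", "on", "off"],
    "quiz_score":      [7, 4, 5, 9, 6],
    "engagement_item": [4, 3, 4, 2, 3],  # 1 = strongly disagree ... 4 = strongly agree
})

# Interval-type outcome: mean, median, and standard deviation.
print(df["quiz_score"].agg(["mean", "median", "std"]))

# Categorical variable: counts and percentages.
print(df["residence"].value_counts())
print(df["residence"].value_counts(normalize=True) * 100)

# Likert item treated as interval (a mean) or as categories (percentages per response).
print(df["engagement_item"].mean())
print(df["engagement_item"].value_counts(normalize=True) * 100)
```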

Diving Deeper: Differ Questions

Evaluators are seldom satisfied with simply reviewing descriptive statistics for the data they have. They typically want to take a deeper look at their data. Overall frequencies, percentages, and mean scores can hide important sub-group differences. For example, you might want to know if students who live on campus have different outcome scores than students who live off campus. In this case, one would identify the demographic variables of interest (and for which data are available) and ask the computer to report performance on each outcome of interest for each group. You can calculate the mean, median, and standard deviation of scores for those who live on and off campus to see which group does better. The “eyeball method” involves literally looking at the data to see if one group is higher or lower, or greater or less, than the other(s).

Looks can be deceiving. To be more confident about a difference than the “eyeball test” allows, you can statistically compare the two groups. To compare the outcomes for these groups, you use what Schuh et al. (2016) call differ statistics. The specific statistic depends on the types of variables you have, as follows (a brief code sketch follows the list):

  • If you have two nominal or categorical variables, such as residence (on or off campus) and return for sophomore year (yes or no), you use chi-square. Chi-square is not really a difference test; it tells you whether the frequencies observed are greater or less than the frequencies expected for the two variables of interest.
  • If the independent (influencing) variable is nominal (residence: on or off campus) and the outcome (score on a test) is interval, you would use a t-test.
  • If the nominal independent variable has more than two groups, or there are two or more nominal independent variables, and the outcome is a test score or GPA (a rating or a number), you would use ANOVA.
  • If both variables are interval or ordinal data (ACT score and quiz score), you would use correlational statistics and also bivariate regression.
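The sketch below (hypothetical data and column names, using the scipy library) shows two of these differ statistics, a t-test and a chi-square:

```python
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "residence":  ["on", "on", "off", "off", "on", "off", "on", "off"],
    "quiz_score": [8, 7, 5, 6, 9, 4, 7, 5],
    "retained":   ["yes", "yes", "no", "yes", "yes", "no", "yes", "no"],
})

# Nominal independent variable (residence) and interval outcome (quiz score): t-test.
on_campus = df.loc[df["residence"] == "on", "quiz_score"]
off_campus = df.loc[df["residence"] == "off", "quiz_score"]
t_stat, p_value = stats.ttest_ind(on_campus, off_campus)
print(t_stat, p_value)

# Two categorical variables (residence and retention): chi-square on a crosstab.
table = pd.crosstab(df["residence"], df["retained"])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(chi2, p)
```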

The statistics mentioned above are typically called inferential statistics. They are used to make inferences about a population from data for a sample. Statisticians would likely argue that if you have a population (all students who took UNIV101), any observed differences in scores or percentages between subgroups of participants are real differences. There would be no reason to employ inferential statistics to determine whether observed differences between resident and non-resident students, for example, are real (greater than they would be by simple chance). However, inferential statistics are often used in cases like UNIV101 under the assumption that one rarely has true population data.

Correlation: Relate Statistics

A very common type of analysis done in descriptive research and in evaluation studies involves asking whether some variables in your data are related to each other and how strongly. (Again, participants only.) To use the UNIV101 example, you could ask whether high school grade point average is related to scores on one of the UNIV101 outcome measures. To do a correlational study, you can use data from a survey, test data collected from a group of participants, or data from a large dataset of participants collected over time. As Mertler (2019) explains, “the word relationship means that an individual's status on one variable tends to reflect (i.e., is associated with) his or her [sic] status on another variable” (p. 101). Correlational analyses as described here can also be used to identify how one or more variables predict an outcome.

The type of variables one has will determine the appropriate relate statistics (Schuh et al., 2016) to use.

  • When one has two interval-level or ordinal-level variables, the most common correlational statistic is the Pearson r.
  • Simple and multiple regression can also be used.
  • When you have two categorical variables, such as visa status and place of residence, the chi-square measure of association can be used. One categorical and one interval variable call for other, less standard measures of association.

Correlation is easiest to visualize with interval or ordinal data (ratings or scores). A positive relationship means that the variables change in the same direction: if scores go up on one variable, they go up on the second as well; and if scores go down on one variable and down on the second, that too is a positive correlation, because the variables still move together. A negative relationship means that high scores on one variable are associated with low scores on the other. Correlations are useful because they show both the direction and the strength of a relationship. See Mertler (2019) or Salkind and Frey (2020).
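A minimal sketch (made-up scores) of computing a Pearson r with the scipy library:

```python
from scipy import stats

# Hypothetical high school GPAs and UNIV101 quiz scores for six students.
hs_gpa = [3.2, 3.8, 2.9, 3.5, 3.9, 2.7]
quiz_score = [6, 9, 5, 7, 10, 4]

r, p = stats.pearsonr(hs_gpa, quiz_score)
print(r, p)  # r close to +1: the two variables rise and fall together
```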

NOTE: For many quantitative studies, it is appropriate to first describe the sample and then to run correlational analyses before engaging in any more complex statistical analysis. The reason is that if two variables are very highly correlated, they may be measuring the same thing and will distort more complex analyses involving both.

Predict Statistics

Mertler (2019) notes that correlational studies can use data on current behavior to predict future behavior. He calls these predictive correlational studies (p. 101). For UNIV101, it would be possible to ask whether place of residence predicts UNIV101 outcomes. Alternatively, one can ask how much each of a series of variables (place of residence, high school grade point average, and visa status) contributes to or predicts the grade in UNIV101. Predict statistics are not typically used in unit-level outcomes assessment projects in higher education. They are more often used at a larger, university-wide scale to answer questions such as what factors contribute to faculty salaries or what combination of variables best predicts timely graduation.

Linear and logistic regression are typically the “predict” statistics of choice. Regression is often, and more easily, done with ordinal and interval-level data. However, you can use categorical variables as influencing variables, but you must “dummy-code” them, that is, convert them to numbers; this is more complicated. If your dependent (outcome) variable is categorical (for example, retained or not), then you use logistic regression.

Multiple regression can tell you how much each independent variable (and control variable) contributes to the outcome variable, a quiz score, for example. Or it can help you identify the set of independent (predictor) variables that best predicts the quiz score. It can also allow you to examine the power of one independent variable, say Pell eligibility, to explain the outcome variable of interest, controlling for, or holding constant, other factors that might interact with Pell eligibility to affect the outcome. This kind of analysis can also predict future outcomes based on what is known about other variables in an analysis. So, for example, if ACT score and high school grade point average are shown to statistically predict outcomes of UNIV101, it simply means that if one knows a student's ACT score and high school grade point average, one can predict how the student will do in UNIV101.
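The sketch below (a tiny, made-up dataset with hypothetical column names, using the statsmodels library) illustrates linear regression for an interval outcome, with a dummy-coded categorical predictor, and logistic regression for a categorical outcome:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "quiz_score": [7, 4, 5, 9, 6, 8, 3, 7],
    "hs_gpa":     [3.5, 2.8, 3.0, 3.9, 3.2, 3.7, 2.5, 3.4],
    "residence":  ["on", "off", "off", "on", "on", "on", "off", "off"],
    "retained":   [1, 0, 1, 0, 1, 1, 0, 1],  # 1 = retained, 0 = not retained
})

# Linear regression: the formula interface dummy-codes the categorical predictor.
linear = smf.ols("quiz_score ~ hs_gpa + C(residence)", data=df).fit()
print(linear.summary())

# Logistic regression when the outcome is categorical (retained or not).
logistic = smf.logit("retained ~ hs_gpa + C(residence)", data=df).fit()
print(logistic.summary())
```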

Strengths and Limitations of Descriptive and Correlational Research

Descriptive and correlational research is very common in higher education. Data from one survey or database will typically allow the investigator to answer descriptive and correlational questions, and maybe even predict questions. This is a strength. In addition, it is relatively easy to collect and analyze descriptive data. The measures of the variables involved must be valid and reliable; in other words, they must measure what they purport to measure (validity) and do so consistently (reliability). Surveys and tests are useful if you have a captive audience to complete them (as in the UNIV101 example). Otherwise, it is especially challenging to get students, faculty, and staff to fill them out, often resulting in low response rates. Data to answer many descriptive questions may already be available in an institutional data source, thus avoiding the concern about low response rates. A downside of using existing data is that you can only answer questions for which data exist. Another limitation of surveys is that the intended audience must understand the survey items as you intend them. This makes writing good questionnaires very important.

Surveys, in particular, rely heavily on self-report data rather than on observed actual performance. (The exception is those scales and psychological tests deemed to measure the underlying construct itself.)

Correlation and Prediction Are Not Causation

Two variables can be related, and one can even predict the other, but that does not mean that one caused the other even when the correlation is significant and strong. For example, family income may be highly and positively correlated to scores on the UNIV101 library assignment (when family income goes up so too do scores on the library assignment), but one can’t say that family income caused the library assignment score. There may be many other influential factors affecting library assignment score, and you do not have a comparison group of individuals who did not participate in UNIV101.

Don’t Forget the Descriptive Data

While it is tempting to focus on “differ” and “predict” questions, it is important to look carefully at simple descriptive statistics to know who completed the survey or test activities and how well your program is achieving its stated outcomes. For example, what are the item means from a survey? Are they high, medium, or low? Or what percentage of participants meets your goal? No program wants to do badly in any area. Simple statistics such as means and percentages will tell you how well you are doing. Statistically significant differences answer only the question of whether the difference in outcomes for participants with different characteristics is statistically meaningful; they say nothing about whether your participants are actually performing at desired levels on outcomes.

Summary

Descriptive and correlational designs are useful when you have data from participants in a program but do not have a comparison group of non-participants. Such studies typically involve information collected from surveys, large databases, or some other form of data, such as records of rubric scores. Over the course of your career as an administrator, you will have many, many opportunities to read reports using descriptive designs and descriptive statistics. You will also do such studies yourself, so it is particularly important that you become familiar with descriptive studies and descriptive statistics.

 
