14 Communicating Results
Key Topics
- The role of the audience in shaping the evaluation report
- Important components of an evaluation report
- Reporting qualitative and quantitative data
Introduction
The goal of conducting evaluations is to provide useful information about a program to inform decisions. As I have said throughout the book, the two broad purposes that anchor the ends of the evaluation continuum are to inform improvements in program design, delivery and utilization, or outcome attainment, and to make summative decisions about program effectiveness. To be useful, evaluations must be conducted systematically and their results recorded and communicated thoroughly, accurately, and thoughtfully in a written report. Even when you evaluate your own programs and you and your colleagues are the primary audience, it is important to formally record what was done and what was learned in some form. You never know when such reports could be useful. In fact, SLO assessment requires reporting. Academic units, as well as student affairs, student success, and co-curricular units, are expected to produce annual reports describing outcomes and changes made based on the data collected. Such evaluation reports can and will be used as evidence in accreditation reports and in internally conducted reviews. The form of this documentation varies from rather formulaic and fairly brief annual reports of outcomes assessment activities to longer, more formal reports. In this chapter, I first discuss some factors to consider in shaping the evaluation report and then spend the bulk of the chapter discussing typical sections of evaluation reports. The chapter assumes you are operating without a report template provided by the institution or unit within which you work. Depending on the audience, you may include some or all of the components discussed in this chapter, or you may give some components more attention than others.
Planning the Evaluation Report
When crafting any written report from an evaluation, one must start by knowing the purpose of the report. Evaluation findings (and the resulting report) can have a variety of uses, including demonstrating accountability, providing a basis for improvement, showcasing a program's strengths, informing program improvements, and informing decision making about a program's effectiveness (Fitzpatrick et al., 2004).
These varying purposes should be considered when writing the report. Depending on the purpose, the report may be written somewhat differently: it might have a different tone, use different kinds of language (technical vs. non-technical), and place emphasis differently.
Writing with Audience in Mind
One of the first principles of good writing is knowing and writing to your audience. I add a second principle: if you are conducting an evaluation for another unit, the report should respond to the charge given to you as an evaluator. If the evaluation was commissioned by a senior administrator, what did they ask you to do, to produce, and for what purpose? Audiences will differ in the amount and type of information they want and need. If you are conducting an evaluation activity for the provost that will be used to inform efforts to improve retention, it is likely that you will write a formal report. On the other hand, annual assessment reports of department learning outcomes are often briefer and may be recorded in a structured template in an electronic content management system. I can safely say that most administrators will not want a dense, jargon-filled, academic paper. They want a readable and useful report in which the findings pop out. If writing the report for an evaluation sponsor other than oneself, it is the evaluator's job to be sure that the report responds to the questions the sponsor has.
A second and related consideration for evaluation reports is the use to which findings and recommendations will be put. Not only does the form of a report differ depending on the purpose and use of the evaluation and the audience, but the language and framing of the findings may also vary. As Fitzpatrick et al. (2004) state, all reports should be complete, accurate, balanced, and fair (p. 389). At the same time, evaluation has a political component to it (Mertens, 2010), and you need to think about this carefully as you write a report. Evaluators should never be dishonest or deceptive when representing findings and recommendations. That said, knowing how findings will be used can influence how you present them, how you write the report, and where you place emphasis.
If you conducted an evaluation with formative purposes in mind, and in so doing encouraged interviewees to provide very honest, straightforward, constructive criticism, two things matter. First, be sure that the recipient of the report clearly understands the intent and does not use the report as a platform for making summative decisions that could harm the people who participated in your interviews. Second, you will likely write your findings somewhat differently depending on the sponsor's intent. You have an obligation to be honest but also to not put participants at risk.
Accreditation and program review self-studies offer a good example of several report-writing dilemmas. Accrediting bodies encourage institutions to be honest and to recognize existing problems, yet the tendency to do otherwise is very strong. Accreditation is a high-stakes assessment that is both formative and summative. Top administrators want their accreditors, and their own faculty, staff, and alumni, to know about all of the good things the institution has accomplished under their leadership. Conversely, they really don't want to dwell on things that have not worked so well. Even though accrediting bodies typically want institutions or programs to recognize areas in need of improvement, acknowledging a program's or institution's weaknesses in a public report is difficult, especially if the stakes are high. That said, peer reviewers are quite good at detecting areas of weakness in their reviews.
Failing to acknowledge clear problems risks creating a second problem: the institution never recognizes that it has one. It is somewhat easier to acknowledge weaknesses in an internal report in which the stakes are low or in which administrators have built considerable trust that results, even negative ones, will be used constructively. Even so, unless you are sure the principal audience for your report can be trusted to use it as intended, you may hesitate to be too critical for fear that someone might hold the evaluation results against the program. Likewise, you may not want to be overly critical of your own program if others are not doing the same. When doing program review, my colleagues are always concerned about how truly introspective and forthcoming they should be about program weaknesses. They assume that no one else is being truly honest and worry that any program that is completely honest would somehow be punished. These kinds of pressures often influence the way accreditation reports, program reviews, and even reports from internally conducted evaluations are written.
Common Sections of Evaluation Reports
Evaluation reports often contain the following sections:
- Executive summary
- Introduction
- Description of the program being evaluated (and/or the problem giving rise to a needs assessment)
- Description of evaluation method
- Results or findings
- Conclusions
- Recommendations
- Appendices
An evaluation report might or might not include a review of some related literature. I will say a bit more about each section.
Executive Summary
Most often written after the report is complete, the executive summary provides a concise summary of the evaluation, findings, and recommendations that can be read by busy administrators who may not have time to read the full report. The executive summary should be short—from one to a few pages, depending on the extent of the evaluation report itself. An executive summary should not be a repeat of the entire report but should briefly tell a busy reader the purpose of the evaluation, briefly describe the program and evaluation method, and briefly summarize the findings and recommendations.
Introduction
The introduction should set the stage for the rest of the report. It might include a description of the program; the purpose of the evaluation (e.g., formative or summative, a needs assessment); who commissioned the evaluation and why; who the audience is; how the results will be used; and how the report is organized. You might also spell out the questions your evaluation seeks to answer. The introduction should provide the reader with a road map for the rest of the report. A description of some of these components follows.
Program Description
Any evaluation report will include a relatively detailed description of the program or intervention being evaluated. The description can be part of the introduction or a standalone section. What are the goals of the program? Who is served? What are the program activities? How is the program delivered and by whom? What resources does the program have? In a needs assessment, the description will focus on the conditions or situation leading to the needs assessment. The Workgroup for Community Health and Development identifies the following components of a typical program description (Workgroup, n.d.). I have organized them into two categories: introductory/contextual components and logic model components:
Introductory/contextual Components
- Purpose. A statement of the program's purpose or, in the case of a needs assessment, the size and nature of the perceived problem or need being studied.
- Program stage of development. Is it a new program? Has it been implemented fully? Can it reasonably be expected to have assessable outcomes yet?
- Context. The program context includes any important environmental factors that could influence the program's implementation and outcomes, as well as factors surrounding the evaluation itself. (Workgroup, Chapter 36, Section 1, pp. 8-9)
Logic Model Components
- Expectations. What are the goals and intended outcomes of the program?
- Activities. These are the things the program does to bring about change.
- Resources. What human, material, and financial resources are available to carry out the activities?
- Outcomes. What are the intended short- and long-term outcomes?
If reporting the results of a needs assessment, the description will focus on the conditions or situation leading to the needs assessment. This may include a tentative description of the "problem" or of what is believed to be the problem.
Purpose of the Evaluation
If you have not done so elsewhere in the introduction, be sure to state the purpose of the evaluation and, if you have specific guiding questions, state them as well; the introduction is a good place to do so. Readers should know what questions you set out to answer about the program being evaluated.
Description of Evaluation Method
If you have not identified the specific questions your evaluation seeks to answer in the introduction, state them in this section. In addition, this section includes a relatively detailed description of how you collected data, from whom, and how you analyzed it. Depending on the audience and the level of detail desired, the full description of the method may be placed in an appendix, with a brief summary in the report itself that refers the reader to the appendix.
Results or Findings
In this section, you report the evidence (data) you have gathered to answer the questions guiding your evaluation. This is the main section of the report, and there are many ways to organize it. One scheme is to organize results by specific evaluation question. Again, depending on the level of detail desired, you might include only the main findings in the body of the report and place more detailed versions in an appendix. Remember that the key is to be balanced and accurate. How you display your findings or results will differ depending on the type(s) of data you have and the audience. Generally speaking, tables are used to summarize and effectively display large amounts of quantitative data. For qualitative interview data, themes supported by key quotes are typically reported. Some examples are provided later in the chapter.
Suskie (2015) is particularly blunt about writing for academic decision making (as opposed to more formal academic writing for publication) and offers several recommendations. She says that “Every piece of evidence that is shared should help inform decisions…” (p. 177), but also that not all evidence need be reported (e.g., responses to every single survey item). She goes on to recommend that evidence be organized around the specific points of data about which stakeholders want and need to know (p. 178). Moreover, “points of your evidence should pop out at readers, so they readily see the connection between your evidence and the decisions they are facing” (Suskie, 2015, p. 179). One way to do this is to include the full data in an appendix while focusing on the crucial points in the findings section.
Conclusions and Recommendations
This is an important section: it is where the evaluator summarizes the results and offers interpretations and recommendations. Busy readers often go directly to it. Conclusions should be tied to the data presented, but this does not mean that you simply repeat the findings. Owen (2007), drawing on Sonnichsen, makes a helpful distinction between findings, conclusions, and recommendations. Findings are descriptions of the actual data, whereas conclusions and recommendations are judgments made by those who conducted the evaluation based on the data; they are subject to interpretation. Anyone reading the evidence or data should see the same findings. Even with qualitative data, a reader ought to be able to see clearly how the data led to a particular conclusion, even if different readers might reasonably come to different recommendations based on their interpretation of the data.
Conclusions are statements of what evaluators and program administrators think the data plausibly mean, and they may offer explanations for why the findings are what they are. In the case of evaluation, conclusions may involve judgments about whether and to what extent the program has met the standards set for it on the identified criteria. In writing this section, the author needs to strike a balance between not being overly judgmental or prescriptive and clearly pointing out to the report's audiences the main takeaways from the data collection and analysis. Conclusions can be evaluative (the program is effective or not effective, for example), depending on what the sponsor of the evaluation wants.
Recommendations are the evaluator’s best judgments (often in consultation with program staff and knowledge of best practices or the literature) about what could or should be done based on the findings. As with conclusions, the recommendations should be tied to the findings and conclusions and provide informed ideas about how the evaluation sponsor can use the results of the evaluation study. I expand on this latter idea below.
As I mentioned earlier, the role of the evaluator varies depending on the charge given for the evaluation as well as the evaluator’s approach to evaluation. Some, particularly external evaluators, may view themselves as collecting data, drawing conclusions and making judgments independent of program participants. Others view themselves as facilitators, involving program participants at every step of the evaluation, including writing the report, interpreting the results and formulating recommendations. The rationale for the former is objectivity; the rationale for the latter is use of evaluation results.
A Common Mistake
A common problem in the recommendation sections of evaluation or assessment reports occurs when authors make recommendations that are not tied to the findings generated. Program evaluators who make such recommendations apparently had something in mind they wanted to say before the evaluation was conducted; when the evaluation does not produce the data to support the desired recommendations, they make them anyway. This is one reason it is so important to have clear questions and sources of data to answer the questions posed. One way to ensure that recommendations are tied to specific data or findings is to begin each recommendation with "Based on x, y, or z finding, the evaluators recommend a, b, or c action." That is, tie each recommendation to specific findings or groups of findings from your evaluation.
It is possible in the course of analyzing your data to identify overarching conclusions that are not tied to specific data. These might be called meta-conclusions, or conclusions about the findings themselves. This is entirely appropriate; just be sure to explain how you came to these conclusions. Conclusions based on an absence of findings are one example. Perhaps at the end of your analysis you step back and say, "After reviewing all of this data, it appears that X is not mentioned at all, and I would have expected it to be." In this case the absence of data is, in fact, data that ought to be accounted for.
Appendices
Appendices contain information to which the reader might need access but that does not need to be in the body of the report; indeed, including it in the body might be distracting, since reports are written for busy administrators. There are no rules for what goes here, and an evaluation report may not always have appendices. Items often found in appendices include copies of questionnaires and interview protocols, detailed data tables, detailed descriptions of evaluation methods or findings, and documentation of IRB approval. There may also be program documents that you want to include in an appendix.
Reporting Qualitative Data
The organizational challenge in presenting qualitative data is this: how best to make sense of what are likely many pages of interview transcripts and then to organize and present findings? Experienced qualitative researchers have their own favored approaches to tackling interview data, which may or may not involve specialized qualitative research software. Whatever method is used should follow generally accepted good qualitative research practice and should be applied systematically. One way to begin is to take each evaluation question and each interview question you asked to provide data to answer that evaluation question. Listen to your recordings and read your notes for answers to that one question. What ideas or themes come up repeatedly in response to each question you asked? Keep in mind that participants may use different words to express the same idea; it is your task to identify the core idea. Are there sub-categories or variations in their responses that fit into patterns or groups? Are there interesting ideas that don't fit the "common" pattern? What are they? How strongly did the respondents state them? They may be important ideas even though only one person expressed them.
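However you keep track of your coding, a simple tally can help you see which ideas recur and which stand alone. The short Python sketch below illustrates one way to do this; the question, codes, and respondents are hypothetical, and the sketch assumes you have already read the transcripts and assigned codes by hand.

```python
# A minimal sketch of tallying manually assigned codes for one interview
# question. The question, codes, and respondents below are hypothetical,
# and the sketch assumes the transcripts have already been coded by hand.
from collections import Counter

# Hypothetical codes assigned to each respondent's answer to one question.
coded_responses = {
    "respondent_01": ["peer support", "approachable faculty"],
    "respondent_02": ["peer support"],
    "respondent_03": ["approachable faculty", "campus events"],
    "respondent_04": ["peer support", "campus events"],
}

# Count how often each code appears across respondents.
code_counts = Counter(code for codes in coded_responses.values() for code in codes)

for code, count in code_counts.most_common():
    print(f"{code}: mentioned by {count} of {len(coded_responses)} respondents")
```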
It is important for the credibility of your qualitative data, and your report, to let the reader know that you are not selectively reporting only those ideas with which you agree or that make the program look good (or bad). One of many ways of establishing the credibility of your data is by being balanced, specifically by being sure to include evidence that may run counter to the general themes. That said, there is no single formula for approaching the writing task as there is for quantitative data.
Depending on the study, I have seen authors present findings by interview question, assuming there are not too many interview questions. Or it might make better sense to report your findings for each larger evaluation question. It is also important to note that you may well see themes or patterns that cut across all of your data and are not tied to a specific question.
When reporting qualitative data, you should identify the theme or pattern and then give a few quotes as examples. For example: "When asked what they mean when they say [University X] is a friendly place, two main themes emerged. First, students said that ____. Sally (a pseudonym), a first-year student from ____, spoke for many of the students when she said that '____.' Some students had a different take. For them, friendly meant ____. Josh's observation that '____' is an example of these views."
Tables can also be used to summarize qualitative data, although their use is less frequent in qualitative research. One example of where a table might be useful is a table summarizing the respondents and their characteristics. Given that few busy administrators have time to read lengthy qualitative findings sections, it behooves the evaluator to find ways to summarize and synthesize the data in the report body while perhaps placing the more extensive findings in an appendix.
Using Tables and Other Means to Display Quantitative Data
Quantitative data may be summarized and reported in table or graphic format with a brief narrative explaining what the tables or graphics represent and what they say. Tables are very effective for summarizing large amounts of information in a concise manner. The American Psychological Association (APA) provides guidelines for constructing tables such as the ones included below. Using tables does not absolve you, however, from writing accompanying text: the text should highlight what is interesting about a table, not simply repeat or explain every piece of data in it. A sample table follows.
Using Tables to Report Demographic Data
Typically, important tables go in the body of the report after the first time you mention them. For example, you might introduce this table as follows: "Overall, 405 students responded to the survey. Those students were distributed among the SOE departments as shown in Table 14.1."
Table 14.1

| Department | Number | % of Total |
| --- | --- | --- |
| Curriculum and Teaching | 147 | 36.3 |
| Ed Leadership and Policy Studies | 84 | 20.7 |
| Health, Sport and Exercise Sciences | 66 | 16.3 |
| Ed Psych and Research | 12 | 3.0 |
| Special Ed | 31 | 7.7 |
| Pre-education | 65 | 16.0 |
You might say this about the data in Table 14.1: "The data in Table 14.1 suggest that most of the respondents to a survey about website use came from three of the five departments. It is not clear why there were so few respondents from EPSY and SPED. Perhaps they did not receive the survey, or perhaps they do not use the website and thus did not think of the survey as relevant." Regardless, any interpretation of the findings must take into account that they are primarily based on the opinions of students from three departments. You do not need to repeat in words the specific data in the table.
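If your raw responses live in a spreadsheet or data file, a counts-and-percentages table like Table 14.1 can be produced with a few lines of code. The sketch below uses Python with pandas; the department labels and responses shown are hypothetical stand-ins for your own data.

```python
# A minimal sketch of building a counts-and-percentages summary like
# Table 14.1 from raw survey records. The department labels and data
# here are hypothetical placeholders.
import pandas as pd

# One row per respondent; only the department column is shown.
responses = pd.DataFrame({
    "department": [
        "Curriculum and Teaching",
        "Ed Leadership and Policy Studies",
        "Curriculum and Teaching",
        "Special Ed",
    ]
})

counts = responses["department"].value_counts()
summary = pd.DataFrame({
    "Number": counts,
    "% of Total": (counts / counts.sum() * 100).round(1),
})
print(summary)
```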
Examples of Tables Summarizing Likert Data
The following table summarizes Likert data using means. The scale used was 1=very satisfied, 2=satisfied, 3=unsatisfied, 4=very unsatisfied, N/A=0.
Table 14.2

| Dimension | Number of Respondents | Mean | SD |
| --- | --- | --- | --- |
| Overall Content | 359 | 1.96 | |
| Academic department content | 350 | 1.96 | |
| Other SOEHS dept. content | 350 | 1.95 | |
| Student resources | 350 | 2.11 | |
| Admissions | 350 | 1.86 | |
| News and events | 350 | 1.97 | |
| Faculty and staff | 348 | 2.11 | |
| Careers | 350 | 2.11 | |
Normally, one would report standard deviation information. SD is the standard deviation, or the extent to which responses vary from the mean. If there is no variation, that means everyone gave the same answer. In this case the standard deviation is quite high, suggesting that some people were satisfied and some were dissatisfied.
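If you want to produce a table like this from raw responses, the calculation is straightforward. Below is a minimal sketch in Python using pandas; it assumes the 1-4 coding shown above, treats the N/A code (0) as missing rather than averaging it in as zero, and uses hypothetical item names and data.

```python
# A minimal sketch of computing item means and standard deviations for
# Likert responses coded 1-4, with 0 used for N/A. It assumes N/A
# responses should be excluded rather than averaged in as zeros; the
# item names and responses are hypothetical.
import pandas as pd

likert = pd.DataFrame({
    "Overall Content": [1, 2, 2, 0, 3],   # 0 = N/A
    "Admissions":      [2, 1, 2, 2, 4],
})

rated = likert.mask(likert == 0)  # treat 0 (N/A) as missing

summary = pd.DataFrame({
    "Number of Respondents": rated.count(),
    "Mean": rated.mean().round(2),
    "SD": rated.std().round(2),
})
print(summary)
```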
Alternatively, you could report Likert data as shown in Table 14.3:
Table 14.3

| Dimension | % Very Satisfied | % Satisfied | % Unsatisfied | % Very Unsatisfied | % N/A | Mean/SD |
| --- | --- | --- | --- | --- | --- | --- |
| Student Resources | 16.4 | 52.9 | 15.0 | 5.8 | 9.7 | 2.11/2 |
| Admissions | 22.3 | 54.3 | 8.1 | 1.1 | 14.2 | 3.4/1 |
You might use this more detailed table (Table 14.3) if you think it is useful for your client to know how many or what percentage of respondents are satisfied and very satisfied (or unsatisfied and very unsatisfied). It is common to see Satisfied and Very Satisfied combined into one category and Unsatisfied and Very Unsatisfied similarly combined. You may want the breakdown by specific response category because the mean is an average, and there are two ways to arrive at the same average: all respondents can report virtually the same rating, or some portion can report high levels of satisfaction while a significant proportion is highly dissatisfied. Reporting the standard deviation will tell you and the reader which of these scenarios is more accurate: everyone agrees, or there is wide disagreement. Table 14.3 better illustrates how the responses are distributed.
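The category percentages in a table like Table 14.3 can also be computed directly from the raw responses. The sketch below, again in Python with pandas, shows one way to do so for a single item; the data are hypothetical, and reporting N/A as its own category is an illustrative choice.

```python
# A minimal sketch of computing the percentage of responses in each
# category for one item, in the spirit of Table 14.3. The coding
# (1-4 plus 0 for N/A) follows the scale above; the data are hypothetical.
import pandas as pd

student_resources = pd.Series([1, 2, 2, 3, 0, 2, 4, 2, 1, 0])

labels = {
    1: "% Very Satisfied",
    2: "% Satisfied",
    3: "% Unsatisfied",
    4: "% Very Unsatisfied",
    0: "% N/A",
}

pct = (student_resources.value_counts(normalize=True) * 100).round(1)
print(pct.rename(index=labels))
```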
Further, you could examine the data by department to see whether the students from one department (or men and women, or students of different socioeconomic statuses or racial/ethnic groups) are more satisfied than another. You can do this in one of two ways: you can simply calculate the means by department, or you can run an ANOVA to compare the means of students in different departments. The first lets you simply "eyeball" differences between groups, as in Table 14.4, while the latter can tell you whether any observed differences are statistically significant.
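Both approaches are easy to carry out with standard tools. The sketch below uses Python with pandas and scipy to compute group means and run a one-way ANOVA; the department labels and ratings are hypothetical, and in practice you would want to check ANOVA's assumptions before leaning on the p-value.

```python
# A minimal sketch of comparing satisfaction by department: first by
# "eyeballing" group means (as in Table 14.4), then with a one-way ANOVA.
# The data and column names are hypothetical; scipy is assumed available.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "department":   ["C&T", "C&T", "ELPS", "ELPS", "SPED", "SPED"],
    "satisfaction": [2, 1, 3, 2, 1, 2],   # 1 = very satisfied, 4 = very unsatisfied
})

# Group means and standard deviations, as in Table 14.4.
print(df.groupby("department")["satisfaction"].agg(["mean", "std"]).round(2))

# One-way ANOVA: are the differences between department means significant?
groups = [g["satisfaction"] for _, g in df.groupby("department")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```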
Each of these tables should be followed by a brief statement of what the table says. For example, "The data in Table 14.4 indicate that respondents from SPED and Pre-Ed were more satisfied with website content than students from the other departments." However, you do not need to, and should not, repeat every piece of data in a table. Identify and discuss the main takeaways. If the tables are constructed and labeled properly, the reader can see the results at a glance. (Note: 1=very satisfied, 4=very unsatisfied.)
Table 14.4

| Department | Mean | SD |
| --- | --- | --- |
| C&T | 1.94 | .72 |
| ELPS | 2.24 | .75 |
| HSES | 1.89 | .72 |
| EPSY | 2.20 | .41 |
| SPED | 1.62 | .62 |
| Pre-Ed | 1.83 | .60 |
Displaying Assessment Results in Alternate Ways
Busy administrators do not have time to read long reports filled with dense tables and complex statistical analyses. They may prefer crisp, to-the-point summaries and presentations. The challenge is to present crucial data in a manner that is easily understood while having the intended impact on the recipient. Middaugh (2009) argues this is essential: rigor matters little if no one will pay attention to the data. To this end, a host of computer programs have been developed to aid in data display. Examples include Piktochart, Qualtrics, and Tableau dashboards, as well as call-out boxes, sidebars, and bulleted lists (Suskie, 2009, 2015). Keep in mind that it is easy to make a pretty chart or infographic, but that does not in itself mean the findings are being communicated effectively. I have seen a one-page infographic report so full of color and crammed with so much information that it was hard to see the important points. In addition, infographics and graphs are ultimately based on tables and raw data, so it is important to learn how to organize raw data and construct tables before turning to colorful charts.
That said, data presentations do not need to be complicated or even use Tableau or Piktochart in order to effectively communicate important points. For example, at one point NSSE used up or down arrows to call attention to areas for which benchmark scores had increased or decreased or to compare an institution’s benchmark scores to those of its peers. Detailed tables can go in an appendix.
Summary
The importance of the written report should not be overlooked. It is the main way in which evaluation findings and recommendations are recorded and communicated to stakeholders. Care should be taken to write to the audience, to be balanced and accurate in reporting findings, and to shape recommendations based on the findings.