How We Used an Outcome Harvest
Recently, we at Three Hive Consulting used outcome harvesting as part of a developmental evaluation with an organization that builds connections and helps facilitate community change. As with most developmental and participatory techniques, using this method was a bit time intensive, but the results were worth it. Along the way, we realized that although there is research and there are examples describing how to use this methodology, we wished we could find a candid account of the ups and downs of implementing it in a real-world setting. Here we share how we used the methodology and what we wish someone had told us before we started.
What is outcome harvesting?

*Barbara Klugman, Claudia Fontes, David Wilson-Sánchez, Fe Briones Garcia, Gabriela Sánchez, Goele Scheers, Heather Britt, Jennifer Vincent, Julie Lafreniere, Juliette Majot, Marcie Mersky, Martha Nuñez, Mary Jane Real, Natalia Ortiz, and Wolfgang Richert
Source: https://www.betterevaluation.org/en/plan/approach/outcome_harvesting
To begin, let’s quickly review what outcome harvesting is. Outcome harvesting is a participatory evaluation methodology developed by Ricardo Wilson-Grau and colleagues*. In this methodology, change is monitored by collecting evidence of what has happened (gathering outcomes) and then looking back to understand how a program or intervention has contributed to these changes.
Outcome harvesting helps to understand what has happened due to actions taken in the past. It is particularly useful for programs or interventions that target community- or population-level changes, or in complex situations where the change seen in the beneficiaries cannot be directly tied back to one action, program, or actor. It is also useful when the goals of a program or intervention are broad and flexible, and it can thus be a helpful tool in developmental evaluations, where the actions and intended outcomes may change over the course of the evaluation. The findings of an outcome harvest can be used to understand how a program or initiative contributes to change and can serve as a planning tool to course-correct or modify program approaches.
Who is involved?
A successful outcome harvest is a participatory process: involving both those who have experienced change and those who can use the findings makes the harvest stronger. There are three groups of people who need to be involved in the outcome harvest.
- Informants: The people who were part of or who witnessed the outcomes.
  In our case, these were the partners that the community initiative worked with.
- Harvest user: The person using the findings to make a decision or take action. They will help guide the approach used to ensure the data they need is collected. Sometimes there are multiple harvest users (e.g., funding organizations and the funding recipients who provide programming).
  In our case, the organization and the evaluation sub-committee were the harvest users.
- Harvester: The person(s) leading the outcome harvest. They support the process and suggest strategies to improve the credibility and reliability of the data.
  That’s us! Our role was to consider how to make the data tools, collection, analysis, and reporting processes as credible and rigorous as possible.
How did we do it?

Steps in outcome harvesting: 1) Design the harvest, 2) Review documentation and draft outcomes, 3) Engage with informants, 4) Substantiate, 5) Analyse, interpret, 6) Support use of findings.
While Better Evaluation describes 6 main steps for outcome harvesting, in reality our approach had 4 simple steps.

Three Hive’s outcome harvesting steps: 1) Design, 2) Draft descriptions, 3) Expand and corroborate, 4) Analyze and use.
1. Design
In the design phase, the focus is on clarifying what the harvest user needs to know and how they want to use the information. This is basically the same as step 1 in the 6-step approach.
We asked: What activities, events, or programs has the organization contributed to? How has the organization contributed? What is the impact of these activities, events, or programs? Who did they impact?
2. Draft outcome descriptions
Typically, this is done by the harvester (evaluator) through document review. However, the organization we were working with was not a direct service provider, and for many outcomes there was little documentation to review. So, we started with the organization rather than with the informants. We reviewed what documents were available, but we also invited the organization to list activities, initiatives, and outcomes they had helped contribute to. The organization also provided us with the contact information for each of the partners (informants) who were involved.
We ended up with nearly 30 outcomes and almost 20 partners.
3. Expand and corroborate
Because we started with the organization drafting outcomes, the next step in our approach was to connect with the partners listed in step 2 to hear what they thought about the outcomes the organization generated.
In a short interview we asked the partners:
- “What have you worked on with the organization?”
- “The organization identified that you worked on X together; can you tell me a bit more about that? How did the organization contribute?”
- “What was the significance of that event/activity/program/collaboration?”
- “What impact do you think it had on you/the community?”
If the partners’ accounts differed from the outcome description, we made sure to probe further and seek additional sources of information. Some partners suggested additional partners who were also involved in the outcomes, and we then reached out to them as well.
4. Analyze and use
Finally, the data analysis and ensuing conversations were part of using the findings. We classified the outcomes based on the organization’s priority areas, where they aimed to make change. In our case, collecting information about the outcomes was a tool in itself to engage the partners and have them reflect on what they had accomplished with the organization. Discussing the outcomes that were achieved also helped the organization identify that they needed to do further work to understand how their activities were linked to outcomes (i.e., logic modelling). We also recognized that the outcome harvest had not captured community members’ perspectives on the outcomes, so we used the Most Significant Change technique to gather participants’ perspectives on the activities and events and compared the outcomes and findings from the two techniques.
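As a small illustration of the classification step, tagged outcomes can be tallied by priority area in a few lines of Python. This is only a hypothetical sketch: the outcome descriptions and priority areas below are invented placeholders, and our actual analysis was done qualitatively.

```python
from collections import Counter

# Hypothetical harvested outcomes, each tagged with the priority
# area it was classified under during analysis.
outcomes = [
    {"description": "Partners co-hosted a neighbourhood forum",
     "priority_area": "Community engagement"},
    {"description": "Two agencies formalized a referral pathway",
     "priority_area": "Partnerships"},
    {"description": "Residents led a follow-up planning session",
     "priority_area": "Community engagement"},
]

# Count how many harvested outcomes fall under each priority area.
counts = Counter(o["priority_area"] for o in outcomes)

# Report areas from most to least represented, which can prompt a
# conversation about where outcomes are clustering (or missing).
for area, n in counts.most_common():
    print(f"{area}: {n}")
```

Even a simple tally like this can surface priority areas with few or no harvested outcomes, which is itself a useful finding to bring back to the harvest user.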
What did we learn?
Through this outcome harvest, we learned a lot of important lessons about using this method effectively.
- Clear guiding questions set you up for success. As the harvester, communicate the strengths and limitations of this method to your harvest user to help them understand what questions to ask.
- Use multiple sources of information to collect and substantiate your outcomes. In our case, multiple partners often contributed to a single outcome. Asking all of them about their experience, rather than just one, provided a range of perspectives.
- Get creative with how you collect and verify outcomes. Most examples of substantiating outcomes used surveys, which seemed too impersonal and quantitative for the level of understanding we were hoping for. Instead, we set up short interviews with informants.
- Get both the organization’s and the informants’ points of view on the outcomes. Understanding both perspectives enriched our data. We also found some unintended consequences and negative feedback, which helped to provide actionable results.
- This method takes time. While the phone calls with informants took only 20 minutes each, it took 6 weeks to reach out to all of them. Additionally, it took 2-3 weeks to draft outcome statements with the organization.
- Generally, people like sharing about the work they have done. As evaluators (or harvesters), we found this a rewarding experience that helped us better understand the organization and how they work.
- The outcomes are limited to those that the organization and informants identify. A more diverse pool of informants leads to a wider perspective on the outcomes. Ask your informants if there is anybody else they think you should be talking with.
- Sometimes the process is more important than the final reveal. The act of harvesting outcomes and the ensuing conversations with informants and users can inspire more action than the final report.
Hopefully this article has given you another perspective on outcome harvesting. It can be a powerful methodology to understand complex situations. Don’t be afraid to make the modifications necessary to best suit your harvest users and informants.
Walking the talk
This blog post was originally posted in the American Evaluation Association July 2020 newsletter. Thank you to AEA for asking me to reflect on what the AEA values mean to me and how they guide my work.
This summer has been a time of reflection due to physical distancing, transitioning to remote teaching, and the growth in the national visibility of and support for the movement for Black lives. This time of reflection was a pause on “walking the talk” because I first had to grapple with what walking and talking meant to me personally. I had to sit down and think through what my personal values were and how I would embody them in my everyday life.
I have never explicitly stated or evaluated my personal values because I merely adopted the ones our white dominant society presented to me: civility, perfectionism, a sense of urgency, defensiveness, dichotomous thinking, paternalism, etc. As a white, heterosexual, cisgendered woman with a privileged academic background and job, it was easy to accept these as-is without reflection. It was comfortable. It was white supremacy.
The first challenge to my personal values occurred during my job talk for what is now my position at UW-Stout. A graduate student asked me, “How does social justice affect your evaluation practice?” My initial gut reaction was: it doesn’t. And sadly, that was true. I never thought explicitly about social justice beyond the idea that there was a fourth branch on Mertens and Wilson’s (2019) adaptation of the evaluation theory tree. That began a quest to bolster—and in some ways challenge—my formal education. I began reading, discussing these topics with friends and colleagues, and slowly adding what I learned to what and how I taught evaluation to students.
A year and a half later, George Floyd was murdered by police, and I left the comfort of my home to go protest in the little rural town I live in. I had always supported the Black Lives Matter movement, but never had I truly walked the talk until I stood at the corner of our biggest intersection with my sign. I began to realize how much easier it was to read books on antiracism than to practice antiracism.
After attending a couple demonstrations, I wanted to continue walking the talk. I wanted to help change the world and make a difference! I dove head-first into new projects and ideas, wrote and disseminated things, and realized quickly that I had re-submerged into the pool of the white dominant society’s values, not my own. I felt the urgency to get things done, to do things big, perfectly, and mostly by myself. I quickly became defensive when people who cared pointed out that my actions were the characteristics of white supremacy in action.
This was the juncture at which I realized I needed to pause, slow down, and reflect. So instead of walking the talk lately, I’ve been figuring out and explicitly determining my set of personal values, seeking inspiration and alignment with scholars, friends, and colleagues, primarily those who are Black, Indigenous, or people of color.
Some values were easy to determine, like valuing openness and transparency, which I embody through open science practices, sharing resources and research through my blog, and being honest with my students about what I am teaching and why. Other values have been far more difficult, presumably because they more directly challenge the white supremacist culture ideology I had internalized for so long. For example, the work of the Equitable Evaluation Initiative has emphasized to me that it is not enough to have diversity and inclusion if equity is not front and center.
As I continue to reflect on my values, I look in part to AEA’s values. There are some values that resonate deeply with me, such as having evaluations that are “high quality, ethically defensible, and culturally responsible”. There are others that I would modify, such as valuing equity in addition to diversity and inclusion. And there are yet others that I would add, such as valuing advocacy efforts on behalf of the evaluation field (AEA Professional Practice Competency 1.9).
More than anything, I am realizing that “walking the talk” and truly living out AEA’s values is an enduring process that is constantly evolving as I pause, slow down, reflect, and challenge my role in white supremacist culture.
After-Action Review: Learning Together Through Complexity

Complexity science is the study of how systems behave under conditions of high dynamism (change) and instability arising from the number, sequencing, and organization of actors, relationships, and outcomes. Complex systems make it difficult to draw clear lessons because the relationships between causes and consequences are rarely straightforward. To illustrate, consider how raising one child provides only loose guidance on how to parent a second, third, or fourth child: there’s no template.
An After-Action Review is one way to learn from actions taken in a complex system, helping shape what you do in the future and providing guidance on what steps to take next.
The After-Action Review (AAR) is a method of sensemaking and supporting organizational learning through shared narratives and group reflection once action has been taken on a specific project aimed at producing a particular outcome — regardless of what happened. The method has been widely used in the US Military and has since been applied in many sectors. Too often our retrospective reviews happen only when things fail, but by examining any outcome we can better learn what works, when, and how our efforts produce certain outcomes.
Here’s what it is and how to use it.
Learning Together

An AAR is a social process aimed at illuminating causal connections between actions and outcomes. It is not about developing best practice; rather, it is about creating a shared narrative of a process from many different perspectives. It recognizes that we may engage in a shared event, but our experience and perceptions of that event might differ, and within these differences lies the foundation for learning.
To do this well, you will need a facilitator and a note-taker. The review must also take place in an environment that allows individuals to speak freely, frankly, and without any fear of negative reprisals — a culture that must be cultivated early and ahead of time. The aim isn’t to assign blame, but to learn. The facilitator can be from within the team or outside it; the US Military, for example, has teams undertake their own self-facilitated AARs.
To begin, gather the individuals directly involved in a project together in close proximity to the ‘end’ (e.g., launch, delivery of the product, etc.). As a group, reflect on the following three question sets:
- The Objectives & Outcomes:
- What was supposed to happen?
- What actually happened?
- Practice notes: Notice whether there were discrepancies in perceptions of the objective in the first place, and where there are differences in what people pay attention to, what value they ascribe to that activity (see below), and how events were sequenced.
- Positive, Negative, and Neutral Events:
- What created benefit / what ‘worked’?
- What created problems / what didn’t ‘work’?
- Practice notes: The reason for putting ‘work’ in quotes is that there may not be a clear line between the activities and outcomes, or a pre-determined sense of what is expected and to what degree, particularly with innovations where there isn’t a best practice or benchmark. Note how people may differ in their views of what a success or failure might be.
- Future Steps:
- What might we do next time?
- Practice notes: This is where a good note-taker is helpful, as it allows you to record the discussion and recommendations. The process should end with a commitment to bringing these lessons together to inform strategic action the next time something similar is undertaken.
Implementing the Method
Building AARs into your organization will help foster a culture of learning if done with care, respect, and a commitment to non-judgemental hearing and accepting of what is discussed in these gatherings.
The length of an AAR can be anywhere from a half-hour to a full-day (or more) depending on the topic, context, and scale of the project.
An AAR is to be done without a preconceived assessment of what the outcome of the event was. This means suspending judgement about whether the outcome was a ‘success’ or ‘failure’ until after the AAR is completed. Successes and ‘wins’ are often found in even the most difficult situations, while areas for improvement or threats can be uncovered even when everything appeared to work (see the NASA case studies of the Challenger and Columbia space shuttle disasters).
Implementing an AAR every time your team does something significant can help you create a culture of learning and trust in your organization and draw far more value from your innovation efforts.
If your team is looking to improve its learning and create more value from your innovation investments, contact us and we can support you in building AARs into your organization and learning more from complexity.
If we cannot define “museums,” how do museums survive?
Last year, my colleagues and I chatted about the work of the International Council of Museums (ICOM) to propose a new definition of museums. We listened to the MuseoPunks podcast, which featured different speakers talking about their perspectives on the definition process. At the time, I remember being intrigued by the discussions. The old definition did not seem terrible to me. It struck me as bland and generic—similar to mission statements for most museums—but not erroneous:
“A museum is a non-profit, permanent institution in the service of society and its development, open to the public, which acquires, conserves, researches, communicates and exhibits the tangible and intangible heritage of humanity and its environment for the purposes of education, study and enjoyment.”
By comparison, the new definition is bold, even aspirational. I recall thinking it is a little long and full of jargon, but exciting:
“Museums are democratising, inclusive and polyphonic spaces for critical dialogue about the pasts and the futures. Acknowledging and addressing the conflicts and challenges of the present, they hold artefacts and specimens in trust for society, safeguard diverse memories for future generations and guarantee equal rights and equal access to heritage for all people. Museums are not for profit. They are participatory and transparent, and work in active partnership with and for diverse communities to collect, preserve, research, interpret, exhibit, and enhance understandings of the world, aiming to contribute to human dignity and social justice, global equality and planetary wellbeing.”
Fast forward to today, and I have come to recognize the bigger issue behind the discussion of the old, bland definition and ICOM’s inability to confirm a new one: we as a museum profession do not agree on what a museum is.
While there is some agreement on what a museum does—collect, preserve, educate—there is not consensus on the museum’s purpose. From our theoretical perspective at RK&A, purpose is the impact a museum has on people. What is the positive difference a museum makes in the lives of people?
I had taken it as a given that museums agreed that their purpose is the impact a museum has on people. And certainly there are many who feel impact on people is the essence of museums. For example, Smithsonian Secretary Lonnie G. Bunch III recently said in an American Alliance of Museums session, “Racism, Unrest, and the Role of the Museum Field,” that museums have put education forward as their vision, noting the success of several museums because of their focus on education, conversation, and collaboration. More pointedly, Bunch said, “I think the key is not to forget that we are of the community, of the people, and that our job is service first and foremost.”
Certainly there has been a shift in museums to focus on education. However, where I have been naïve is in thinking that all museums have made this shift fully. As with any change, it is slow and there is resistance. This is what became clear to me as I read the President of ICOM’s resignation letter:
“Now it feels like we are becoming more and more self-centred, our minds occupied with self-interests, focused on our own sustainability rather than the sustainability of the whole which we are a part of. Can we have any relevance if we are so detached from the communities we want to serve?”
In our intentional practice work, the impact on audiences is a driving focus for a museum’s work. While we think that each museum should strive for its own unique impact based on what the museum considers to be their unique qualities, passions, and specific target audiences, there is an underlying assumption that museums want to impact public audiences. Museums are for the people.
A host of problems emerges if museum professionals cannot agree that museums are for public audiences. How can we consider our field professionalized without an agreed-upon definition that drives our best practices and training? How can we expect individual museum professionals to carry out their work without this shared understanding? Most importantly, how can we expect individuals to value and support museums if they don’t really know what a museum is? The public cannot know what museums are if we as professionals don’t agree on their purpose.
