The May 13 Group



cplysy

Aug 07 2020

Evaluation Roundup – July 2020

 

Welcome to our July roundup of new and noteworthy evaluation news and resources – here is the latest.

Have something you’d like to see here? Tweet us @EvalAcademy or connect on LinkedIn!


New and Noteworthy — Reads


International Program for Development Evaluation Training – Evaluation Hackathon

If you are on Twitter, you have likely been following IPDET’s Evaluation Hackathon. The Hackathon took place from July 7-13 and was “a playground for creative individuals from around the world to unite their skills, knowledge and inspirations to find creative solutions to challenges of our times” – solutions that might help to empower the field of evaluation. Check out all the cool ideas on the project page.

Capacity4dev – Evaluation in Crisis

Capacity4dev is the European Commission’s platform for sharing information related to International Cooperation and Development. Its Evaluation Support Service team created the DEVCO/ESS Evaluation in Crisis Initiative. This initiative curates resources (documents, webinars, videos, blogs and podcasts) to help evaluators evaluate in crisis. Some of the topics covered include: How do we need to adapt our processes to move quickly? What data collection techniques are best suited in a crisis situation? Do we need to review our evaluation ethics? How do we check facts when using remote techniques? Can we still contribute to sustainability and if so, how?

Eval Forward – Evaluation in Times of COVID-19

If you are looking for more insights about evaluation during times of crisis, check out Eval Forward’s three-part blog series, which gathers reflections from leaders and managers currently engaged in humanitarian-development evaluations. Evaluation leaders from Action Against Hunger and the World Food Programme were interviewed and asked to reflect on how the pandemic is affecting the practice of evaluation and what they think it will mean for evaluation going forward. Interestingly, some speculated that COVID-19 will lead to a greater mix of national and international evaluators on evaluation teams (check out Engage R+D’s report, mentioned below, for why this matters in our field).

Engage R + D – Listening for Change: Evaluators of Color Speak Out About Experiences with Foundations & Evaluation Firms

In its Listening for Change learning brief, Engage R+D states that “foundation staff and evaluators tasked with planning and assessing social change efforts do not reflect the demographics and cultures of the communities they serve” – the field needs to pay more attention to diversity, equity and inclusion (DEI). To make progress on DEI, we need to start by listening to the ideas, insights and experiences of professionals of color. The learning brief reports four key themes on what it will take to support leaders of color in philanthropic evaluation: 1) Outreach is key to opening a career pathway; 2) Attitudes and dynamics in the workplace affect retention of evaluators of color; 3) Demonstrated commitment to DEI attracts evaluators of color to evaluation firms and clients; and 4) Employers have an active role to play in retaining staff.

Stanford Social Innovation Review – Ten Reasons Not to Measure Impact and What to do Instead

While we’re talking about ways to transform the evaluation field, let’s talk more about impact evaluation. There is a continued push for more and more impact measurement; however, this is not always appropriate and can even be problematic in many circumstances. In this article, Mary Kay Gugerty and Dean Karlan outline ten reasons or circumstances not to measure impact, along with alternatives that can be adopted instead. Ultimately these reasons fall into four categories: 1) Not the right tool; 2) Not now; 3) Not feasible; and 4) Not worth it.


New and Noteworthy — Tools


EvaluATE – Key Resource by Evaluation Topic

EvaluATE has many resources on its site; however, we all know that clicking through and navigating numerous resources can quickly leave you wondering where you are and how you got there. Instead, EvaluATE has compiled its resources into one PowerPoint file, organized by evaluation topic area, so you can quickly navigate to the resources you need. Topic areas include Finding and Selecting an Evaluator, Integrating Evaluation in Proposals, Getting Started with Evaluation, Evaluation Design, Data Collection and Analysis, and Reporting and Use.

Inspiring Impact – Review your existing data worksheet

Inspiring Impact created a worksheet that outlines a step-by-step process to help review data. The worksheet is helpful in determining what information you should continue to collect, what to stop collecting and what to start collecting. The worksheet is available in both Word and Excel formats.

Khulisa Management Services – Visual Methodologies in Evaluations

My favourite part of being an evaluator is when I can combine my analytical and creative sides (so much so that I coined the term Evalucreator!). For all you Evalucreators out there, check out this deck on visual methodologies and how to incorporate them into your evaluation practice.

DC Fiscal Policy Institute – Style Guide for Inclusive Language

We’ve mentioned DEI above in our New and Noteworthy reads, and there are also steps you can take toward greater DEI when you write. This style guide offers ways to employ inclusive language and integrate a racial equity lens in our writing. While the guide is targeted at the DC area, it provides useful terms and principles that can be applied in other settings.


New and Noteworthy — Courses, Events and Webinars


August 2020

Claremont Graduate University – The Evaluator’s Institute

A variety of courses conducted by various instructors, including some big names like Michael Quinn Patton, Ann K. Emery, and Ann Doucette.

Australian Evaluation Society – Fundamentals of Good Evaluation Reporting and Practice

Facilitator: Anne Markiewicz

Date and Time: August 17 & August 24; 9:30am – 11:00am AEST

Venue: Online 

September 2020

Evalpalooza I: Evaluation Failures with Kylie Hutchinson and Thought Leaders

Presenters: Kylie Hutchinson & Libby Smith

Date and Time: September 24; 12pm CDT

Venue: Online


We have a free guide:

Program Evaluation Scoping Guide

This is a free digital download. The guide outlines questions evaluators can ask program managers or other stakeholders to better understand the scope of a program and its evaluation. The questions are intended to help evaluators begin formulating a quote and/or an evaluation plan; however, the guide can also be used to identify disagreements or gaps in what is known about the program and/or the boundaries of the evaluation.



 

Written by cplysy · Categorized: evalacademy

Aug 07 2020

Comment on If we cannot define “museums,” how do museums survive? by Diane

I would hope there would be a variety of museum “types”.
But the note about jargon is a real issue for me. The language should be straightforward and not contrived.
Here is a little fun about what I mean: [image attachment]

Written by cplysy · Categorized: rka

Aug 05 2020

How We Used an Outcome Harvest

 

Recently, we at Three Hive Consulting used outcome harvesting as part of a developmental evaluation with an organization that builds connections and helps facilitate community change. As with most developmental and participatory techniques, the method was somewhat time intensive, but the results were worth it. Along the way, we realized that although there is plenty of research on, and examples of, this methodology, we wished we could find a frank account of the ups and downs of implementing it in a real-world setting. Here we share how we used the methodology and what we wish someone had told us before we started.

 

What is outcome harvesting?

*Barbara Klugman, Claudia Fontes, David Wilson-Sánchez, Fe Briones Garcia, Gabriela Sánchez, Goele Scheers, Heather Britt, Jennifer Vincent, Julie Lafreniere, Juliette Majot, Marcie Mersky, Martha Nuñez, Mary Jane Real, Natalia Ortiz, and Wolfgang Richert

Source: https://www.betterevaluation.org/en/plan/approach/outcome_harvesting

To begin, let’s quickly review what outcome harvesting is. Outcome harvesting is a participatory evaluation methodology developed by Ricardo Wilson-Grau and colleagues*. In this methodology, change is monitored by collecting evidence of what has happened (gathering outcomes) and then looking back to understand how a program or intervention has contributed to these changes.

Outcome harvesting helps us understand what has happened as a result of actions taken in the past. It is particularly useful for programs or interventions that target community- or population-level changes, or for complex situations where the change seen in beneficiaries cannot be directly tied back to a single action, program, or actor. It is also useful when the goals of a program or intervention are broad and flexible; it can thus be a helpful tool in developmental evaluations, where the actions and intended outcomes may change over the course of the evaluation. The findings of an outcome harvest can be used to understand how a program or initiative contributes to change, and as a planning tool to course-correct or modify program approaches.

Who is involved?

A successful outcome harvest is a participatory process: it works best when it involves both those who have experienced change and those who can use the findings. Three groups of people need to be involved in the outcome harvest.

  1. Informants: The people who were part of or who witnessed the outcomes.

    In our case, these were the partners that the community initiative worked with.

  2. Harvest user: The person using the findings to make a decision or take action. They will help guide the approach used to ensure the data they need is collected. Sometimes there are multiple harvest users (e.g. funding organizations and the funding recipients who provide programming).

    In our case, the organization and the evaluation sub-committee were the harvest users.

  3. Harvester: The person(s) leading the outcome harvest. They support the process and suggest strategies to improve the credibility and reliability of the data.

    That’s us! Our role was to consider how to make the data tools, collection, analysis, and reporting processes as credible and rigorous as possible.

 

How did we do it?

Steps in outcome harvesting: 1) Design the harvest, 2) Review documentation and draft outcomes, 3) Engage with informants, 4) Substantiate, 5) Analyse and interpret, 6) Support use of findings.

While Better Evaluation describes 6 main steps for outcome harvesting, in reality our approach had 4 simple steps.

Three Hive’s outcome harvesting steps: 1) Design, 2) Draft descriptions, 3) Expand and corroborate, 4) Analyze and use.

1.    Design

In the design phase, the focus is on clarifying what the harvest user needs to know and how they want to use the information. This is basically the same as step 1 in the 6-step approach.

We asked:  What activities, events, or programs has the organization contributed to? How has the organization contributed? What is the impact of these activities, events, or programs? Who did they impact?

2.    Draft outcome descriptions

Typically, this is done by the harvester (evaluator) through document review. The organization we were working with was not a direct service provider, and for many outcomes there was little documentation to review. So, we started with the organization rather than with the informants. We reviewed what documents were available, but also invited the organization to list activities, initiatives, and outcomes it helped contribute to. The organization also provided us with contact information for each of the partners (informants) who were involved.

We ended up with nearly 30 outcomes and almost 20 partners.

3.    Expand and corroborate

Because we started with the organization drafting outcomes, the next step in our approach was to connect with the partners listed in step 2 to hear what they thought about the outcomes the organization generated.

In a short interview we asked the partners:

  • “What have you worked on with the organization?” 

  • “The organization identified that you worked on X together – can you tell me a bit more about that? How did the organization contribute?”

  • “What was the significance of that event/activity/program/collaboration?”

  • “What impact do you think it had on you/the community?”

If a partner’s account differed from the outcome description, we made sure to probe further and seek additional sources of information. Some partners suggested additional partners who were also involved in the outcomes, and we reached out to them as well.

4.    Analyze and use

Finally, the data analysis and the ensuing conversations were themselves part of using the findings. We classified the outcomes according to the priority areas in which the organization aimed to make change. In our case, collecting information about the outcomes was a tool in itself: it engaged the partners and had them reflect on what they had accomplished with the organization. Discussing the outcomes that were achieved also helped the organization recognize that it needed further work to understand how its activities were linked to outcomes (i.e., logic modelling). We also recognized that the outcome harvest had not captured community members’ perspectives on the outcomes, so we used the Most Significant Change technique to gather participants’ perspectives on the activities and events and compared the outcomes and findings from the two techniques.
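If you want to keep the bookkeeping for steps 2-4 out of scattered notes, a small script can help. Below is a minimal sketch in Python, assuming nothing about the tooling Three Hive actually used; the outcome texts, partner names, and priority areas are all hypothetical. Each record carries the outcome description, the organization’s contribution, the priority area it maps to, and whether an informant has corroborated it, which makes the classification in step 4 a one-line tally.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Outcome:
    """One harvested outcome and the evidence behind it (illustrative only)."""
    description: str    # the change observed, in the informants' words
    contribution: str   # how the organization contributed to the change
    priority_area: str  # the organization's priority area this outcome maps to
    informants: list = field(default_factory=list)  # partners who corroborated it
    substantiated: bool = False  # True once a partner interview confirms it

def tally_by_priority(outcomes):
    """Count substantiated outcomes per priority area (step 4's classification)."""
    return Counter(o.priority_area for o in outcomes if o.substantiated)

# Hypothetical harvest: outcomes, partners, and priority areas are made up.
harvest = [
    Outcome("Youth drop-in program launched", "Convened the partners",
            "community connection", ["Partner A", "Partner B"], True),
    Outcome("Joint funding proposal submitted", "Brokered introductions",
            "partnership building", ["Partner C"], True),
    Outcome("New coalition formed", "Hosted planning sessions",
            "community connection"),  # drafted in step 2, not yet corroborated
]
print(tally_by_priority(harvest))
# Counter({'community connection': 1, 'partnership building': 1})
```

Keeping the substantiation flag separate from the description also makes it easy to see, at any point in step 3, which drafted outcomes still need a corroborating interview.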

What did we learn?

Through this outcome harvest, we learned a lot of important lessons about using this method effectively.

  1. Clear guiding questions set you up for success. As the harvester, communicate the strengths and limitations of this method with your harvest user to help them understand what questions to ask.

  2. Use multiple sources of information to collect and substantiate your outcomes. In our case, multiple partners often contributed to a single outcome. Asking all of them about their experience rather than just one provided a range of perspectives.

  3. Get creative with how you collect and verify outcomes. Most examples of substantiating outcomes used surveys, which seemed too impersonal and quantitative for the level of understanding we were hoping for. Instead, we set up short interviews with informants.

  4. Get both the organization’s and the informants’ points of view on the outcomes. Understanding both perspectives enriched our data. We also surfaced some unintended consequences and negative feedback, which helped provide actionable results.

  5. This method takes time. While each phone call with an informant took only 20 minutes, it took 6 weeks to reach all of them. It also took 2-3 weeks to draft outcome statements with the organization.

  6. Generally, people like sharing about the work they have done. As evaluators (and harvesters), we found this a rewarding experience that helped us better understand the organization and how it works.

  7. The outcomes are limited to those that the organization and informants identify. A more diverse pool of informants leads to a wider perspective about the outcomes. Ask your informants if there is anybody else they think you should be talking with.

  8. Sometimes the process is more important than the final reveal. The act of harvesting outcomes and the ensuing conversations with informants and users can inspire more action than the final report.


Hopefully this article has given you another perspective on outcome harvesting. It can be a powerful methodology to understand complex situations. Don’t be afraid to make the modifications necessary to best suit your harvest users and informants.  


 

Written by cplysy · Categorized: evalacademy

Aug 04 2020

After-Action Review: Learning Together Through Complexity

Complexity science is the study of how systems behave under conditions of high dynamism (change) and instability arising from the number, sequencing, and organization of actors, relationships, and outcomes. Complex systems make it difficult to draw clear lessons because the relationships between causes and consequences are rarely straightforward. To illustrate, consider how raising one child provides only loose guidance on how to parent a second, third, or fourth child: there’s no template.

An After-Action Review is one way to learn from actions taken in a complex system, helping to shape what you do in the future and providing guidance on what steps to take next.

The After-Action Review (AAR) is a method of sensemaking and supporting organizational learning through shared narratives and group reflection once action has been taken on a specific project aimed at producing a particular outcome — regardless of what happened. The method has been widely used in the US military and has since been applied in many sectors. Too often our retrospective reviews happen only when things fail; by examining any outcome, we can better learn what works, when, how, and why our efforts produce certain results.

Here’s what it is and how to use it.

Learning Together

An AAR is a social process aimed at illuminating causal connections between actions and outcomes. It is not about developing best practice; rather, it creates a shared narrative of a process from many different perspectives. It recognizes that we may engage in a shared event while our experiences and perceptions of that event differ, and within those differences lies the foundation for learning.

To do this well, you will need a facilitator and a note-taker. The review must also take place in an environment that allows individuals to speak freely, frankly, and without fear of negative reprisal — a culture that must be cultivated early and ahead of time. The aim is not to assign blame, but to learn. The facilitator can be from within the team or outside it; the US military, for example, has teams undertake their own self-facilitated AARs.

To begin, gather the individuals directly involved in a project together in close proximity to the ‘end’ (e.g., launch, delivery of the product, etc.). As a group, reflect on the following three question sets (a minimal note-taking sketch follows the list):

  1. The Objectives & Outcomes:
    • What was supposed to happen?
    • What actually happened?
    • Practice notes: Notice whether there were discrepancies in perceptions of the objective in the first place, and where there are differences in what people pay attention to, what value they ascribe to that activity (see below), and how events were sequenced.
  2. Positive, Negative, and Neutral Events:
    • What created benefit / what ‘worked’ ?
    • What created problems / what ‘didn’t work’?
    • Practice notes: The reason for putting ‘work’ in quotes is that there may not be a clear line between activities and outcomes, or a pre-determined sense of what is expected and to what degree, particularly with innovations where there is no best practice or benchmark. Note how people may differ in their views of what counts as a success or a failure.
  3. Future Steps:
    • What might we do next time?
    • Practice notes: This is where a good note-taker is helpful, allowing you to record the discussion and its recommendations. The process should end with a commitment to bringing these lessons forward to inform strategic action the next time something similar is undertaken.
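As promised above, here is a minimal note-taking sketch mirroring the three question sets. It is an illustration in Python under our own assumptions, not an official AAR format; the project and every entry in it are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AARRecord:
    """Note-taker's template mirroring the three AAR question sets (illustrative)."""
    project: str
    intended: str  # set 1: what was supposed to happen
    actual: str    # set 1: what actually happened
    worked: list = field(default_factory=list)        # set 2: what created benefit
    did_not_work: list = field(default_factory=list)  # set 2: what created problems
    next_time: list = field(default_factory=list)     # set 3: what we might do next time

# Hypothetical debrief of a pilot launch; every detail here is made up.
record = AARRecord(
    project="Pilot program launch",
    intended="Enroll 50 participants in the first month",
    actual="Enrolled 30 participants; the referral channel underperformed",
    worked=["Partner outreach", "Simple sign-up form"],
    did_not_work=["Relying on a single referral channel"],
    next_time=["Line up at least two referral channels before launch"],
)
print(f"{record.project}: {len(record.next_time)} commitment(s) to carry forward")
```

Capturing intended and actual separately, rather than a single verdict, preserves the discrepancies between perceptions that the practice notes above ask the facilitator to watch for.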

Implementing the Method

Building AARs into your organization will help foster a culture of learning if done with care, respect, and a commitment to non-judgemental hearing and acceptance of what is discussed in these gatherings.

The length of an AAR can be anywhere from a half-hour to a full-day (or more) depending on the topic, context, and scale of the project.

An AAR should be conducted without a preconceived assessment of the outcome of the event. This means suspending judgement about whether the outcome was a ‘success’ or a ‘failure’ until after the AAR is completed. Successes and ‘wins’ are often found in even the most difficult situations, while areas for improvement or threats can be uncovered even when everything appeared to work (see the NASA case studies of the Challenger and Columbia space shuttle disasters).

Implementing an AAR whenever your team does something significant as part of its operations can help you create a culture of learning and trust in your organization and draw far more value from your innovation efforts.

If your team is looking to improve its learning and create more value from your innovation investments, contact us and we can support you in building AARs into your organization and learning more from complexity.

Written by cplysy · Categorized: cameronnorman

Aug 04 2020

If we cannot define “museums,” how do museums survive?

Last year, my colleagues and I chatted about the work of the International Council of Museums (ICOM) to propose a new definition of museums.  We listened to the MuseoPunks podcast, which featured different speakers talking about their perspectives on the definition process.  At the time, I remember being intrigued by the discussions.  The old definition did not seem terrible to me.  It struck me as bland and generic—similar to mission statements for most museums—but not erroneous:

“A museum is a non-profit, permanent institution in the service of society and its development, open to the public, which acquires, conserves, researches, communicates and exhibits the tangible and intangible heritage of humanity and its environment for the purposes of education, study and enjoyment.”

By comparison, the new definition is bold, even aspirational. I recall thinking it was a little long and full of jargon, but exciting:

“Museums are democratising, inclusive and polyphonic spaces for critical dialogue about the pasts and the futures. Acknowledging and addressing the conflicts and challenges of the present, they hold artefacts and specimens in trust for society, safeguard diverse memories for future generations and guarantee equal rights and equal access to heritage for all people. Museums are not for profit. They are participatory and transparent, and work in active partnership with and for diverse communities to collect, preserve, research, interpret, exhibit, and enhance understandings of the world, aiming to contribute to human dignity and social justice, global equality and planetary wellbeing.”

Fast forward to today, and I have come to recognize the bigger issue behind the discussion of the old, bland definition and ICOM’s inability to confirm a new one: we as a museum profession do not agree on what a museum is.

While there is some agreement on what a museum does—collect, preserve, educate—there is no consensus on the museum’s purpose. From our theoretical perspective at RK&A, purpose is the impact a museum has on people. What is the positive difference a museum makes in the lives of people?

I had taken it as a given that museums agreed that their purpose is the impact they have on people. And certainly there are many who feel impact on people is the essence of museums. For example, Smithsonian Secretary Lonnie G. Bunch III recently said in an American Alliance of Museums session, Racism, Unrest, and the Role of the Museum Field, that museums have put education forward as their vision, noting the success of several museums because of their focus on education, conversation, and collaboration. More pointedly, Bunch said, “I think the key is not to forget that we are of the community, of the people, and that our job is service first and foremost.”

Certainly there has been a shift in museums toward a focus on education. Where I have been naïve, however, is in thinking that all museums have made this shift fully. As with any change, it is slow and meets resistance. This became clear to me as I read the ICOM President’s resignation letter:

“Now it feels like we are becoming more and more self-centred, our minds occupied with self-interests, focused on our own sustainability rather than the sustainability of the whole which we are a part of. Can we have any relevance if we are so detached from the communities we want to serve?”

In our intentional practice work, impact on audiences is a driving focus of a museum’s work. While we think each museum should strive for its own unique impact, based on what it considers to be its unique qualities, passions, and specific target audiences, there is an underlying assumption that museums want to impact public audiences. Museums are for the people.

A host of problems emerges if museum professionals cannot agree that museums are for public audiences. How can we consider our field professionalized without an agreed-upon definition that drives our best practices and training? How can we expect individual museum professionals to carry out their work without this shared understanding? Most importantly, how can we expect individuals to value and support museums if they don’t really know what a museum is—because how could the public know what we are if we as professionals can’t agree on our purpose?

Text " a museum is ... ?" on graphic background

 


Written by cplysy · Categorized: rka

