
May 21 2020

Evaluation Blogs

So I decided to create what I hope will eventually become an ultimate list of evaluation blogs. Not just a boring bullet point list, but one that gives you a sense of the human being on the other side of the internet.

This post is designed to grow. Meaning I plan to come back and update it. There are already a bunch of evaluation blogs I know well that are not on this page yet. But give it time and they will be added.

This post is in three sections.

  • Section I includes recommendations from the FreshSpectrum Panel of Experts
  • Section II includes all of the bloggers currently part of the main Eval Central blog feed. I’ve pulled a quote from a post for each blogger, and then cartoon-illustrated the quote.
  • Section III will eventually include other evaluation blogs.

If all goes as planned, eventually this particular blog post will get very, very large.

Section I: Blog Recommendations from the FreshSpectrum Panel of Experts

Alright, so these recommendations come from my awesome panel of experts (which is still open for you to join).

Sue Sing Lim

One of my favorite blogs is “We All Count,” from a Canadian evaluation firm that provides useful, practical tips on how to implement equitable evaluation. The founder, Heather Krause, is very generous in sharing her successes and challenges in putting equitable evaluation into practice.

At We All Count, we agree that Big Data is a valuable resource but we think there are some very important concerns that Big Data alone won’t fix. We think that what’s really exciting about Big Data is the ability to combine the efficiency and power of large datasets with the intentionality of small, curated data samples. 

Why Big Data Needs Small Data

Her blog posts stand out because her voice is easy to understand and practical. They never feel overwhelming or too abstract. I feel like I can take something away or act on what I’ve read, and that gives me a sense of empowerment.

Highly recommend! 

Sue Sing Lim joined the Kansas State Research and Extension (KSRE) SNAP-Ed program in 2016 in the role of program evaluator. She is responsible for designing evaluation plans, overseeing data collection, creating evaluation trainings and workshops, analyzing data, and creating reports that disseminate program impact results.

Jon Prettyman

Many thanks to Chris for the opportunity to share (and the chance to add lots of new content to my feedly subscriptions)! I recently found Marcus Jenal’s blog, where he comments on complexity and its application in social change processes. It’s worth a look if you’re interested in applying complexity concepts to your evaluation practice. 

I know that in complex systems things are never that neat and never linear causal – there is not one thing in one box that leads to another thing in another box or to an observed behaviour. Reality is messier. I also missed the dynamics in these diagrams – how are these structures created, how do they persist, how do they change?

Systemic change: A dance between structures and events

Jon Prettyman joined the monitoring and evaluation team at Climate-KIC in 2019. He designs and manages evaluation strategies for the portfolio of systems innovation projects. Before joining Climate-KIC, he worked with Mercy Corps on efforts to use emerging technologies for evaluation. 

Christina Gorga

While not entirely related to evaluation, I’m really digging data-based design that’s created across different platforms. Judit Bekker out of Budapest has been producing some killer work with Figma, Tableau, and Adobe Illustrator that gives me lots of inspiration for my own work and how to push beyond default settings. She also gives insight on web safe fonts for Tableau as well as color considerations for passing contrast ratio tests.

Using fonts in Tableau can be a tricky thing because only Tableau web safe fonts will show up the same for everyone. A web safe font is a font that is considered to be a ‘safe bet’ to be installed on the vast majority of computers. Every computer that has a browser installed has default fonts built in so that it can display the text on the web.

How to use fonts in Tableau?

Christina Gorga is a data visualization designer and strategist in Booz Allen’s Health account. She has experience designing reports and interactive dashboards for program evaluations, state healthcare agencies, HHS, CMS, and VA. She is also an active Tableau community member and loves training teams how to use it. You can reach out to her on LinkedIn or follow her on Twitter at @styleSTEAMed.

Marianne Brittijn

I follow Zenda Ofir, a South African evaluator based in Geneva (https://zendaofir.com/) who plays an active role in the South African Monitoring & Evaluation Association and blogs about current trends and debates in the sector. I discovered the quirky and wise developmental evaluator Carolyn Camman, based in Vancouver, through their Eval Café Podcast and only read their first blog post today (but have been following them on Twitter for a while).

Two – We have to do much more to display the full value of evaluation for a new era. As the COVID-19 pandemic races around the world, evaluation struggles for space. Research studies and data overwhelm, yet evaluation professionals and studies are not present at influential tables. We have fumbled in proving the value of evaluation for the challenges facing humankind. Let us do our best to show the value of evaluation once the immediate heat of the pandemic is over and we move into sense-making in a changed world.

Transforming Evaluations and COVID-19, Part 4. Accelerating change in practice

Evaluators interested in anything related to developmental evaluation and equity should pay attention to Carolyn (http://www.camman-evaluation.com/).  I also love the ARTD blog (https://www.artd.com.au/read/our-blog/). They’ve been putting out highly practical content that speaks to how organisations can (and should) adapt their M&E during the COVID-19 crisis. The same is true for Feedback Labs (https://feedbacklabs.org/blog/).

Marianne Brittijn is a Monitoring & Evaluation (M&E) practitioner in the development and social justice sector. She conducts external evaluations (ideally participatory and developmental ones), develops organisational M&E systems and facilitates M&E training courses. In addition to her consultancy, she works as a part-time PMEL Officer for CORC and the South African Shack Dwellers International Alliance.

Section II: Eval Central Bloggers

All of the bloggers in this section can be followed via Eval Central.

In order to be part of the main Eval Central blog feed I require permission from the blogger. If you have a blog that you want included, you can submit it for consideration.

I also included myself, because it felt weird not to include myself. Although it also felt weird including myself. I ended up just highlighting a really old blog post (circa 2012).

Amanda Klein

It’s not always easy to measure the impact of family and community engagement efforts. Some aspects of education — like test scores or report card grades — (notwithstanding the wide variety of controversies around their use) are pretty straightforward to measure. They’re already quantified. They’re known entities. We can tell that story. But when we talk about measuring the impact of, say, a super successful family science night, our minds go blank. 

What Will Be Your Story?

Ann K Emery

We also wanted the information to be actionable (duh). We wanted to design a one-page meeting handout that was not only clear but would also give the leaders something to talk about together.

How to aggregate information across sites.

Ann W Price

That’s why I find logic models so darn helpful. They may be despised by some, but I believe they are despised because they are oftentimes overly complicated. (I certainly have been guilty of creating a few that were way too complicated myself). But I have experienced over and over again a situation in which the program staff and leaders just knew they could explain their program clearly. Until we went through a logic model process, and they couldn’t.

A Failure to Plan…You Know the Rest of the Story

Betsy Block

Maybe because I’ve only ever seen hot air balloons from a distance, my memory of them leans towards vibrant orbs, sometimes illuminated, gracefully soaring in the air. I kept working through this image of the hot air balloon, and thinking about what goes into a successful flight. What came to mind was the construction of the balloon itself, how it is sewn, the importance of fabric; and that my personal mission is to be a weaver of a fabric for a stronger community.

It’s in the cut of the cloth.

Beth Snow

A lot of the value of clarifying a program theory comes from the process. Finding out that people aren’t on the same page as one another about what the program is doing and why, identifying gaps in your program’s logic, surfacing assumptions that people involved in the program have – all of this can lead to rich conversations and a shared understanding of the program among those involved, and you just don’t get that by handing someone a description of a program theory that was created by just one or two people.

Evaluator Competencies Series: Program Theory

Chris Lysy

The null hypothesis is immensely powerful. It doesn’t have to be proven, it just is.

You don’t have to explain why you’re using Word to write a report or PowerPoint to give a presentation. You don’t have to explain why you present at conferences or write for a journal. They are already accepted, they are the null.

Creative approaches are never the null.

I am the null hypothesis.

Carlos Rodriguez-Ariza

This moment is also a challenge for those of us who work in evaluation functions in the field of international development, where we generally have the luxury of time, operate within clear theories of change, and do our best work when we can mix methods, use multiple data sources and conduct in-depth interviews with a variety of stakeholders.
[translated from Spanish]

Evaluando en tiempos de pandemia

Carolyn Camman

About a year ago I was chatting with someone who had been learning to work with wood. He said he found it powerfully healing because you do make mistakes and you can’t reverse them. The mistakes become part of it. You just keep going. I’ve also been taking improv classes and learning the same thing. Whatever happens, you work with it. Fix it by moving forward, not by trying to roll it back and erase it.

Working from the Mistake

Cameron D. Norman

Health. Lastly, how well are we? When the effects of being inside, isolated, and perhaps exposed to a virus are real, present and pervasive, your audience might not be in the state where the depth and quality of thought are what we need to get the responses we want. Many of us are not our usual selves these days and our responses will reflect that.

Better Data Collection

Dana Wanzer

RoE is not conducted for the sake of conducting it, nor is an evidence base of research important unless it is useful and used by the intended audience—in this case, practicing evaluators.

What is Research on Evaluation (ROE)?

Elizabeth Grim

For four years I’ve been pondering – Who am I as an evaluator? Am I even an evaluator? Or am I a social worker with an evaluation and data-driven mindset? Can I be both? Am I a policy analyst who implements data-driven approaches? Am I an advocate that uses data to drive change? How many hats can I wear before my professional identity is so dispersed that it is nonexistent? Does claiming a professional identity even matter?

Wearing Many Hats: Where Does the Role of Evaluator End?

Eval Academy

Recently I was asked by a client about an evaluation literacy course for its board. The client’s board members had just attended a strategic planning day and through that discussion felt they needed education on evaluation and metrics. On one hand I thought “bravo, they want to know more about evaluation!”; on the other hand I thought “shit…., I’ve totally failed them as their evaluator – what have I been missing?”

Strategic Learning and Evaluation – What Boards Need to Know

Katherine Haugh

Re-Imagining Visual Journalism: Illustrations of Malofiej 2019

Michelle Molina

Having a clear understanding of the purpose of the data you are collecting will help you focus on the type of data you should be collecting and the sorts of conversations you should be having when you are making sense of that data.

Three Types of Evaluation for Nonprofits (Simple Overview)

Nicole Clark

Before transitioning, I knew what the end game was: to live life on my terms and help organizations raise their voices for women and girls of color. Once that transition happened, it became harder to stay motivated because everything I was doing was for someone else.

What I’m learning along the way is that it’s ok to question a dream. Which can be difficult when you’ve had tunnel vision about that dream for so long. I’ve also learned that it’s ok to give yourself permission to try.

Ask Nicole: Why Am I Doing This?

RKA

Numbers had their purpose (they are easily gathered and understood), but they have outlived their usefulness. The silver lining: with nary a visitor to count inside the building, is now not the perfect time to rethink and change how your museum measures success? How does a museum arrive at metrics that will stand the test of time?

Zero

Thomas Winderl

Instead of action language, use change language. Change language reports on the results of an action instead of the action itself.

An example of action language is:

“150,000 girls know how to protect themselves against HIV infection with the support of XYZ”

Use Change Language in Reports

Section III: Other Evaluation Blogs

Alright, so I know that right now I am missing a bunch of evaluation world favorites. And I could bounce around from list to list finding and pulling them together for this post.

But I think blogs are better shared when they come with advocates. Either the bloggers themselves, or others in the evaluation community who love their work.

So if you want your favorites listed here, be their advocate and write a comment.

Like I said at the beginning of this post, this page is intended to grow.

Written by cplysy · Categorized: freshspectrum

May 21 2020

How Writing an Evaluation Report is like Cooking

 

The process of writing an evaluation report is like cooking. It can be a joyful and meditative process for some and an annoying necessity for others. Both cooking and report writing take practice; the more you do them, the more you refine your processes and find your own groove. While there is no formula to create a perfect reporting process, there are some key steps that can set you up for success.

Get to know your audience

Pick your recipe

The process of cooking starts long before ingredients hit the pan. For most, cooking starts with picking a recipe. When you are cooking for others it’s important to figure out what they are hungry for. Is your audience hungry for a full meal, or do they only have time for snacks? Are they meat and potatoes kind of people or does risotto and lobster tickle their fancy?

Ideally, your evaluation plan indicates which information your audience needs to know and your evaluation framework closely ties the data sources to the evaluation questions that need to be answered. Knowing what your audience is expecting helps you narrow your focus in the report writing process. In addition, like cooking, report writing is influenced by time, budget, quality and availability of data (ingredients), audience preference, and your comfort and skill level. Develop an evaluation plan (recipe) that will optimize these factors and be prepared to make adjustments as you go along.

Get to know your data

Grocery shopping

Getting to know your data is like going grocery shopping. In the data collection period, things don’t always go to plan. The quantity and quality of your data can vary, just like the availability and quality of produce in the grocery store. In some cases, you can even outsource your data collection, just like online grocery shopping. In this case, it’s important to examine the data you received for quality and ensure you have the information you need to answer your evaluation questions. Take some time to understand what ‘ingredients’ you have and how they fit with your original recipe.
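If your data arrives as a spreadsheet, a quick programmatic once-over can catch spoiled ingredients before you start cooking. Here is a minimal sketch in Python using pandas, assuming a hypothetical survey_responses.csv with made-up column names:

    import pandas as pd

    # First-pass quality check on data you've received
    # (the file name and column names here are hypothetical).
    df = pd.read_csv("survey_responses.csv")

    # How complete is each 'ingredient'? Share of missing values per column.
    print(df.isna().mean().round(2))

    # Did any duplicate respondents slip into the bag?
    print("Duplicates:", df.duplicated(subset="respondent_id").sum())

    # Do the fields your evaluation questions depend on actually exist?
    required = {"respondent_id", "site", "pre_score", "post_score"}
    print("Missing columns:", required - set(df.columns) or "none")

A few lines like these, run as soon as the data lands, tell you whether your original recipe is still feasible or whether you need another trip to the store.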

Analysis

Prepping your ingredients

Analysis has been referred to as ‘slicing and dicing’ data, a cringe-worthy term that fits delightfully into this report writing metaphor. In cooking, how fine or coarse you chop up your ingredients is directed by your recipe. How you prepare your ingredients is going to affect the final product. Again, like grocery shopping, prepping your ingredients can be done by a sous chef, but how it’s done will affect how you, as the head chef, produce a final product. The recipe you choose will give you a plan and starting place.

In report writing, your analysis should be guided by your evaluation plan. Your evaluation plan lays out the questions you aim to answer and the indicators you have collected to do so. As you analyze the data, keep in mind how you plan on presenting your data. While a data analyst may be analyzing the data for you, a clear path helps lead to clear results.

Regardless of when you consider your report structure, it’s important to consider how the information fits together as you analyze the data.

Outline, outline, outline

Make a Plan

Next up, make your plan. You have all the ingredients, they are prepped, but what order do you tackle things in? I must confess, this is where the cooking metaphor breaks down a little. In cooking, you aim to have all the components of your dish ready at roughly the same time so that everything is served hot. Because of this, you need to be aware of timings and have a game plan in mind of what to tackle, when. In report writing, the parallel isn’t so strong. The pieces of data don’t all need to come together in a time-ordered way. However, outlining your report (developing your game plan) is a valuable step to help your findings flow together. The process of outlining your report helps you develop a strong narrative in your writing. A strong outline will help you remember your guiding questions and key findings as you write.

Key considerations when developing a report outline include examining how your findings fit together. Does your data flow well together, or do you need a way to cohesively present distinct pieces of information? How will your audience best digest the information? Sketching out your report provides the structure that allows your audience to follow your logic.

Write

Cook

I’ve spent a long time discussing the cooking steps that don’t involve the actual transformation of ingredients into a dish, and this is on purpose. Preparation is key. Having a recipe, the ingredients to make the recipe, prepping the ingredients, and having a game plan of how they are all going to come together is half the battle. These are all steps you can do in advance to make the cooking process less arduous. The same holds true for writing your evaluation report. Once you understand what your audience wants, what the data shows, and how you are going to piece it all together into a cohesive narrative, writing the sentences and paragraphs is much easier.

Just like everyone’s cooking process is different — some use every pot and pan in the kitchen and leave the cupboards open as they go, while others work meticulously and stepwise — everyone’s writing process is different. There are lots of articles and pieces of advice on how to write; common tips include “eat the frog first” or “always stop writing for the day when you still have more to say.” Engage in your reflexive practice and identify how and where you write best. If you get stuck, go back to your evaluation questions. These are your touchstones.

Cooking can be a constant flow back and forth between prepping your ingredients and cooking. Sometimes you may even find you need to run out to the grocery store for more of something. When you are cooking, it can be helpful to grab a second opinion to taste as you go along. Have you been too heavy-handed with some flavours? Have you drifted too far from the recipe? Is the product well balanced? These principles also apply to evaluation report writing. Sometimes as you write you may notice gaps in the data and need to collect more, or you may find you rely too heavily on one set of findings and neglect to give the other findings time to shine. Getting feedback from peers about what they take away from your report can help you make sure you are getting your key messages across.

Editing

Trim the Fat

Most people tend to dread the editing process. You have poured your energy into crafting the pages in front of you, but that effort won’t necessarily be appreciated as-is. It’s time to trim the fat. Be ruthless. Do you need each sentence, each piece of information? How much can your audience digest? How much detail do they really need? Turn back to your evaluation questions — how does the data you present answer the questions and how much detail does your audience need?

In the world of cooking, consider how hungry your audience is. Are they full and can only handle small tidbits of digestible information? Or are they ravenous and will eat a 7-course meal without complaint? Your evaluation report is no good if you are providing a 7-course meal and your audience is only interested in the main dish.

Visualization

Food Styling

I have left visualization for last, but I don’t believe it comes last, chronologically. Rather, it’s a topic all to itself that many others have touched on and can and should come throughout the process. Sometimes visualizing the data can be just as powerful as writing about it. A good visualization helps you to make your point and a bad one muddies the water and puts your audience to sleep.

At this point, you may be overwhelmed at the number of hats you are supposed to wear—from recipe developer, to shopper, sous chef, head chef, and now food stylist. Don’t be alarmed. You don’t have to be a pro chef to learn some simple tricks that spruce up your presentation, like twirling the plate, not the spaghetti, to create an Instagram-ready dish. There are lots of simple visualization tricks you can employ (see resources like Evergreen Data and Depict Data Studio) to make your presentation more appealing. People eat with their eyes first, and in the world of evaluation reports, appearance can make a big difference.

Let’s be honest, this part is often a little rushed and is not the time to be developing the stylistic elements of your report. It is, however, the time to be tweaking them. Small things that make a big impact include: using colour selectively, adjusting your heading styles, and adding white space so your words can breathe. Use stylistic elements like colour, icons, and fonts to link ideas and make key points stand out.
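To make “using colour selectively” concrete, here is a minimal sketch in Python using matplotlib, with made-up numbers: one finding gets the accent colour while the rest are muted grey, and the top and right borders are removed so the chart has room to breathe.

    import matplotlib.pyplot as plt

    # Hypothetical satisfaction scores by site (illustrative numbers only).
    sites = ["Site A", "Site B", "Site C", "Site D", "Site E"]
    scores = [72, 85, 64, 91, 58]

    # Use colour selectively: highlight the key finding, mute everything else.
    colors = ["#d95f02" if site == "Site D" else "#c8c8c8" for site in sites]

    fig, ax = plt.subplots(figsize=(6, 3))
    ax.bar(sites, scores, color=colors)

    # Add white space: strip the top and right borders so the chart can breathe.
    for side in ("top", "right"):
        ax.spines[side].set_visible(False)

    ax.set_ylabel("Satisfaction (%)")
    ax.set_title("Site D reports the highest satisfaction", loc="left")
    plt.tight_layout()
    plt.show()

The same two moves, one accent colour and less chart clutter, work just as well in Excel or Tableau; the tool matters less than the restraint.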

The visualizing and stylistic components are the icing on the cake — considered frivolous or extra by some, the thing you usually run out of time for, and something that can take ages if you aren’t careful — but they are essential and what makes people interested in trying what you’ve made. No matter how good your meal tastes, if it doesn’t look good, people aren’t going to want to eat it. No matter how good the findings in your evaluation report are, if they are not presented in a way that’s intuitive to understand and appealing, they are unlikely to gain much traction.

 

Conclusion

It’s time for an apology. I promised to show you how writing an evaluation report is like cooking, and I spent the majority of the time explaining how to do everything else before and after the writing part. Hopefully you can see that writing evaluation reports is not just about writing the words that make up the report. It’s a process that starts as the evaluation plan is developed and continues as you collect and analyze data. A strong evaluation report starts with a strong evaluation plan. Like cooking, there are steps you can follow to make the report writing process easier. Developing your reporting style and process takes practice. Don’t be afraid to get feedback from your peers and clients and try to have fun with it!



Written by cplysy · Categorized: evalacademy

May 20 2020

Reaching Audiences Beyond the Internet During a Pandemic

When museums started closing to prevent the spread of COVID-19 in early March, there were surges of digital initiatives across all institutions meant to reach new and existing audiences while doors were closed. It’s an amazing and overwhelming time to be involved in the digital side of museums. However, I propose that online resources are not the only way to connect with people in these times.  While looking back through RK&A’s research over the years, I came across a couple of projects in which we evaluated museums “on the go” or programs that brought the museum outside its walls to a variety of venues.

[Image: Two women looking at Inside|Out reproductions along a trail. Photo by RK&A staff.]

One such example is Inside|Out, a program that began at the Detroit Institute of Arts (DIA) and has been replicated at other museums with funding from the Knight Foundation.  The Inside|Out program brings reproductions of works from the museum’s collection to outdoor venues like parks and storefronts in communities surrounding the museum. In two studies, one for the DIA and one for the Knight Foundation, RK&A looked at the impact of the program on communities. In interviews, community partners and business owners in the neighborhoods that hosted art installations indicated the program fostered pride in the community, stimulated conversation about art among community members, cultivated the community’s interest in the arts, and positively enhanced the identity of the community. Community members who were interviewed onsite at an existing outdoor installation indicated they had positive attitudes towards the Inside|Out program and the museum, and some expressed delight at the sight of art in an unusual location.

Similarly, the Baltimore Museum of Art (BMA) had a program called BMA Outpost (the Outpost), which was a mobile art museum that was set up at a variety of temporary sites. The Outpost aimed to serve as a forum for conversations about place, home, and why the city of Baltimore matters to its residents. Visitors to the Outpost could see replicas of works from the BMA’s collection and participate in a variety of programming opportunities (e.g., creating works of art about home). After conducting case studies for ten of the former sites, we found that the Outpost was a positive experience for participants. They found it both empowering and therapeutic. Most notably, the Outpost functioned in a healing role at many of the sites. For example, during the Outpost’s tenure at Healthcare for the Homeless, participants said the installation helped them cope with their anger stemming from their homelessness. When the Outpost installation was at YO! Baltimore, students talked about the Outpost as a means of “escape” from difficult home situations.

What we can learn from these programs in light of today’s new realities is that there is a benefit to taking the museum outside of its walls (both physical and digital). Bringing art to local communities can reinforce awareness of the institution and strengthen communities’ sense of art and self. Programs that bring art reproductions to outdoor community spaces like parks or commercial districts are especially important in these times of digital content proliferation, as not all audiences are aware of online resources or even have easy access to the internet.

Some museums and artists have already started incorporating artwork into new venues. The Crystal Bridges Museum in Arkansas is displaying artwork outside hospitals and senior living facilities to help those who may be suffering from the effects of isolation. Additionally, a drive-by exhibition on Long Island featured works by 52 artists, where spontaneous interactions, such as artists waving to visitors and viewing the art where artists live, led to a unique experience.

Personally, I’ve been spending an increased amount of time outdoors despite allergy season. I find taking walks with my dog to be one of my main stress relievers. I typically walk along a popular trail in Alexandria. I can’t help but think that spaces like this trail would be livened up with a little art.

 

You can read more about the Inside|Out program evaluation here: https://www.informalscience.org/summative-evaluation-inside-out-program

You can read more about the Outpost case studies here: https://www.informalscience.org/case-studies-bma-outpost


Written by cplysy · Categorized: rka

May 19 2020

Utilization-Focused Evaluation (UFE): 17 Steps to Make Use a Reality

Continuing from Utilization-Focused Evaluation: Defined Direct Uses and Users, we stay with this text from Patton, translated on the BetterEvaluation platform from a reference to his “Utilisation Focused Evaluation”:

UFE can be used for different types of evaluation (formative, summative, process, impact) and can draw on different evaluation designs and types of data.

The UFE framework can be applied in several ways depending on the context and the needs of the situation. You can consult the Utilization-Focused Evaluation (U-FE) Checklist; the latest update, consisting of 17 steps, is described below:

  1. Assess and build program and organizational readiness for utilization-focused evaluation
  2. Assess and enhance evaluator readiness and competence to undertake a utilization-focused evaluation
  3. Identify, organize, and engage primary intended users: the personal factor
  4. Analyze the situation jointly with primary intended users
  5. Identify and prioritize primary intended uses by determining priority purposes
  6. Consider and plan for (design/build in) process uses if and as appropriate
  7. Focus and narrow the priority evaluation questions
  8. Check that the fundamental areas for evaluative inquiry are being adequately addressed: implementation process, outcomes, and attribution questions
  9. Determine what intervention model or theory of change is being evaluated
  10. Negotiate appropriate methods to generate credible findings that support intended use by intended users
  11. Make sure intended users understand the challenges of potential methods and their implications
  12. Simulate use of the findings: evaluation’s equivalent of a dress rehearsal
  13. Gather data with ongoing attention to use
  14. Organize and present the data for interpretation and use by primary intended users: analysis, interpretation, judgment, and recommendations
  15. Prepare an evaluation report to facilitate use and disseminate significant findings to expand influence
  16. Follow up with primary intended users to facilitate and enhance use
  17. Meta-evaluation of use: be accountable, learn, and improve.

 

Written by cplysy · Categorized: TripleAD

May 19 2020

Evaluation in a Low-resource Setting: Strategies for Success

 

“Start where you are. Use what you have. Do what you can.”

— Arthur Ashe

Working in the evaluation field is appealing in that it can take place across various sectors, systems and geographic locations. One area of particular interest for some evaluators, like myself, is program evaluation in low-resource settings (LRS). LRS, sometimes referred to as “resource poor” or “resource strained,” indicates countries or regions that lack the financial means to cover the costs associated with infrastructure, healthcare and/or trained professionals – as well as other system or societal needs.  These evaluations are often requested by funders or granting agencies as a means of providing evidence for effective use of funds. Likewise, evaluations are particularly important in LRS as programs need to operate as efficiently as possible due to limited human and material resources.

Program evaluations in a LRS can be challenging in that the program staff may not have the skills or capacity to see them through. However, agencies often require formal evaluation of their funded programs, which may lead to the hiring of a contract evaluator. Take USAID, for example: it provides funding to LRS programs all over the world and emphasizes the need to embed evaluation in program planning. USAID values evaluation so much that it has developed policies to enforce and support agencies in meeting evaluation requirements (see USAID Evaluation Policies).

In my (very biased) opinion, LRS evaluations can be complex, but in the long run they help programs as well as the populations and countries in which they take place. Moreover, evaluation is often viewed as something that must be done for funders, but in many (if not most) cases it is just as valuable to the program itself, providing a source of evidence to inform program planning. As evaluators we need to advocate for the use of evaluations by both funders and their funded programs – the best way is to show what we can do by conducting meaningful evaluations. We already have the skill set to conduct evaluations of all shapes and sizes (impact, developmental, process, etc.) – but in a LRS we have to be a bit more creative and thrifty in our approach.

As such, I offer four strategies for success when evaluating within a LRS: Be Tech Savvy, Consider Capacity, Be Ready to Adjust, and Use Yourself as a Resource. As you explore these strategies, I am confident that many will already be part of your practice in all settings (not just LRS). However, I would encourage you to lean into these strategies even more when you find yourself in a LRS. Lastly, all evaluations, in both high- and low-resource regions, are susceptible to being thrown curveballs (take COVID-19 as an example) – meaning that as evaluators we need to be ready to engage new strategies on short notice.

Be Tech Savvy

So you find yourself in a LRS, or perhaps a very low-budget evaluation, but you still need to exchange information and data with stakeholders. In this case, you could even sub out LRS for “pandemic” and the following advice will still apply.

Even if the program has been using the same spreadsheet since 1995 or has a VPN that takes what seems like 2 days to connect – try to utilize existing systems. When capacity is limited, implementing new systems may result in lost time or frustration.

If you find yourself in a setting where staff are eager to have better software or have requested new systems for their information, start by being proud of your forward-thinking team/client, but be mindful of available capacity, in both finances and time. What I mean by this is to use free (or low-cost), easily accessible software before signing the organization up for a pricey software package.

  • Sharing Information: Consider free tools such as G Suite or DropBox to share files and work on documents (reports, spreadsheets, presentations) simultaneously. These are less expensive (or free for basic accounts) compared to platforms such as Office 365. Note that many of these tools have offline capability, meaning that you can work on documents even when the internet connection is limited.

  • Communicating: The COVID-19 pandemic showed us the many available resources for connecting. In my experience, many of the free services are just as good. Zoom or GoToMeeting are popular, but only offer a free trial period or require a subscription for longer meetings. Alternatively, well-known platforms such as Skype, Google Hangouts (part of G Suite), or even Facebook through their Messenger Rooms can be used for free video calls. WhatsApp and Slack are also great apps for communicating with teams or clients. These are just a few examples of the many communication tools available to evaluators.

Consider Capacity

Evaluations can often be perceived as a capacity strain, whether it be through using staff time or the cost of hiring an external consultant. Given the capacity limitations inherent to LRS, evaluators should keep capacity at the forefront of their minds from the first meeting to the final iteration of the report. Some examples of capacity considerations during an evaluation include:

  • Developing the work plan and timeline: Try to plan meetings so that they occur only when necessary and include relevant stakeholders. Regular check-in meetings may not be helpful and may take staff away from program delivery – consider 1-2 page status reports instead.

  • Creating the evaluation plan/questions: We often evaluate the efficiency of programs, but it would also be helpful to put more emphasis on capacity, or even include a sub-section focused on it. Develop questions that are feasible in a LRS and likely to uncover actionable findings (not just funder-mandated metrics). Check out Eval Academy’s How to Write Good Evaluation Questions for guidance on writing evaluation questions.

  • Collecting data: Try to pull from existing data, add questions to regularly administered surveys, or plan focus groups for days where stakeholders are already in the same location. If appropriate, consider methodologies such as Participatory Action Research (PAR) to build evaluation/research capacity of staff (Baum et al., 2006).

  • Presenting findings & recommendations: Include capacity building or capacity considerations for all recommendations – after all, recommendations are not likely to be adopted or sustained without capacity.

Whether you are an internal or external evaluator in a LRS, the program or agency should feel like you contributed capacity, whether through new knowledge, sound recommendations, or the development of internal evaluation capacity.

Be Ready to Adjust

As evaluators we like plans and go into projects with a clear timeline and guide of what we would like to accomplish. Does it always go as planned? Definitely not! In all engagements we need to be ready to pivot or adjust our plan – this is even more true for evaluations in LRSs.

As I write this article, we are still deep in the uncertainty of COVID-19. Although many evaluators are already comfortable with remote working, many aspects of conducting an evaluation have had to change. Couple this with the complexities of a LRS and there is no choice but to be flexible and ready to adjust evaluations accordingly.

I expect that evaluators will have many successes (and some failures) to share about their evaluation practice during these times. For now, I will offer a couple suggestions that I have found to be helpful working on a LRS project during a pandemic.

  • Information gathering: Focus groups, interviews and surveys may need to be facilitated online. Likewise, program staff may need to extract data and share with the evaluator remotely. Workshop facilitation may require the evaluator to search for facilitation tools embedded in video chat platforms (such as surveys) or collaborative tools (such as a shared whiteboard).

  • Focus on the most important aspects: Some key evaluation topics or focus areas may need to change. Consider resource constraints or competing priorities (pandemic related or not) and the impact on the program you are evaluating. Ask questions like:

    • Are the evaluation topics/questions still valid?

    • Are the various evaluation phases feasible remotely and/or with less access to staff and the population being served?

    • How can evaluation evidence support the program as it adapts for COVID-19 or other resource constraints?

  • Revisit the timeline: Expect the timeline to change – whether it is rescheduling some of the meetings, changing them to virtual meetings, or completely revising the project work plan. For example, if the evaluation capacity is severely limited due to a pandemic or other LRS obstacles, look at postponing until there is more capacity. It may be better to postpone or extend the timeline rather than sacrificing the quality of the evaluation.

Use Yourself as a Resource

In a LRS, answers to your questions or material to inform both the planning and execution of an evaluation may not be easily accessible (or in some cases may not be known). As an evaluator for a LRS program, internal or external, you will likely need to be your own resource when it comes to finding data, identifying stakeholders, or developing a general understanding of the program. Hopefully the program or agency at the center of the evaluation will be willing to share all relevant documents and offer some context – but in most cases you will need to be ready to dive in to uncover more. Here are a couple of tips to further explain this strategy:

  • Hands-on learning: Rather than taking up capacity (resources or staff) to learn about the program or agency being evaluated, consider shadowing or observing the program (activities and meetings). This prevents the evaluation from eating up too much staff time, and I would argue it also provides you with a richer understanding of the program being evaluated.

  • Self-led professional development: There may be minimal options for professional development in a LRS, especially if you are working as an internal evaluator. Connecting with other evaluators or professionals working on LRS projects is a great place to start. There may be existing Communities of Practice (in-person or virtual), and in my experience, individuals working in the same region are more than happy to share their experiences. For evaluation-specific education or new methods (even seasoned evaluators need some inspiration every once in a while!), consider online resources like EvalAcademy.

These strategies were summarized with LRS evaluations in mind, but I am confident they can be adopted for evaluation projects in all settings. If you have any other resources or strategies for evaluating in a LRS, please comment below.

 

Resource

Baum F, MacDougall C, Smith D. Participatory action research. J Epidemiol Community Health. 2006;60(10):854–857. doi:10.1136/jech.2004.028662



Written by cplysy · Categorized: evalacademy

