
The May 13 Group

the next day for evaluation


May 19 2020

Utilization-Focused Evaluation (UFE): 17 Steps to Make Use a Reality

Continuing from Utilization-Focused Evaluation: Defined Uses and Direct Users, we stay with this text by Patton, a translation from the Better Evaluation platform of a reference to his “Utilisation Focused Evaluation”:

UFE can be used for different types of evaluation (formative, summative, process, impact) and can draw on different evaluation designs and types of data.

The UFE framework can be used in various ways depending on the context and the needs of the situation. The Utilization-Focused Evaluation (U-FE) Checklist is available for consultation; its latest update, consisting of 17 steps, is described below:

  1. Assess and build program and organizational readiness for utilization-focused evaluation
  2. Assess and enhance evaluator readiness and competence to undertake a utilization-focused evaluation
  3. Identify, organize, and engage the primary intended users: the personal factor
  4. Conduct a situation analysis jointly with the primary intended users
  5. Identify and prioritize the primary intended uses by determining priority purposes
  6. Consider and anticipate (design/build in) process uses, if and as appropriate
  7. Focus and narrow the priority evaluation questions
  8. Check that the fundamental areas for evaluation inquiry are being adequately addressed: implementation process, outcomes, and attribution questions
  9. Determine what intervention model or theory of change is being evaluated
  10. Negotiate appropriate methods to generate credible findings that support intended use by the intended users
  11. Make sure the intended users understand the challenges of potential methods and their implications
  12. Simulate use of findings: evaluation’s equivalent of a dress rehearsal
  13. Gather data with ongoing attention to use
  14. Organize and present the data for interpretation and use by the primary intended users: analysis, interpretation, judgment, and recommendations
  15. Prepare an evaluation report to facilitate use and disseminate significant findings to expand influence
  16. Follow up with the primary intended users to facilitate and enhance use
  17. Meta-evaluation of use: be accountable, learn, and improve.

 

Written by cplysy · Categorized: TripleAD

May 19 2020

Evaluation in a Low-resource Setting: Strategies for Success

 

“Start where you are. Use what you have. Do what you can.”

— Arthur Ashe

Working in the evaluation field is appealing in that it can take place across various sectors, systems, and geographic locations. One area of particular interest for some evaluators, like myself, is program evaluation in low-resource settings (LRS). LRS, sometimes referred to as “resource poor” or “resource strained,” refers to countries or regions that lack the financial means to cover the costs associated with infrastructure, healthcare, and/or trained professionals – as well as other system or societal needs. These evaluations are often requested by funders or granting agencies as a means of providing evidence for effective use of funds. Likewise, evaluations are particularly important in LRS because programs need to operate as efficiently as possible given limited human and material resources.

Program evaluations in a LRS can be challenging in that the program staff may not have the skills or capacity to see them through. However, agencies often require formal evaluation of their funded programs, which may lead to the hiring of a contract evaluator. Take USAID, for example: it provides funding to LRS programs all over the world and emphasizes the need to embed evaluation in program planning. USAID values evaluation so much that it has developed policies to enforce and support agencies in meeting evaluation requirements (see USAID Evaluation Policies).

In my (very biased) opinion, LRS evaluations can be complex, but in the long run they help programs as well as the population and country in which they take place. Moreover, evaluation is often viewed as something that must be done for funders, but in many (if not most) cases it is just as valuable to the program itself, providing a source of evidence to inform program planning. As evaluators we need to advocate for the use of evaluations for both funders and their funded programs – the best way is to show what we can do by conducting meaningful evaluations. We already have the skillset to conduct evaluations of all shapes and sizes (impact, developmental, process, etc.) – but in a LRS we have to be a bit more creative and thrifty in our approach.

As such, I offer four strategies for success when evaluating within a LRS: Be Tech Savvy, Consider Capacity, Be Ready to Adjust, and Use Yourself as a Resource. As you explore these strategies, I am confident that many will already be a part of your practice in all settings (not just LRS). However, I would encourage you to lean into the strategies even more when you find yourself in a LRS. Lastly, all evaluations in both high- and low-resource regions are susceptible to being thrown curveballs (take COVID-19 as an example) – meaning that as evaluators we need to be ready to engage new strategies on short notice.

Be Tech Savvy

So you find yourself in a LRS or perhaps a very low budget evaluation, but you still need to exchange information and data with stakeholders. In this case, you could even sub out LRS for “pandemic” and the following advice will still apply.

Even if the program has been using the same spreadsheet since 1995 or has a VPN that takes what seems like 2 days to connect – try to utilize existing systems. When capacity is limited, implementing new systems may result in lost time or frustration.

If you find yourself in a setting where staff are eager to have better software or have requested new systems for their information, start by being proud of your forward-thinking team/client, but be cautious of the available capacity in both finances and time. What I mean by this is: utilize free (or low-cost) and easily accessible software before signing the organization up for a pricey software package.

  • Sharing Information: Consider free tools such as G Suite or Dropbox to share files and work on documents (reports, spreadsheets, presentations) simultaneously. These are less expensive (or free for basic accounts) compared to platforms such as Office 365. Note that many of these tools have offline capability, meaning that you can work on the documents even when the internet connection is limited.

  • Communicating: The COVID-19 pandemic showed us the many available resources for connecting. In my experience, many of the free services are just as good. Zoom or GoToMeeting are popular, but only offer a free trial period or require a subscription for longer meetings. Alternatively, well-known platforms such as Skype, Google Hangouts (part of G Suite), or even Facebook through their Messenger Rooms can be used for free video calls. WhatsApp and Slack are also great apps for communicating with teams or clients. These are just a few examples of the many communication tools available to evaluators.

Consider Capacity

Evaluations can often be perceived as a capacity strain, whether it be through using staff time or the cost of hiring an external consultant. Given the capacity limitations inherent to LRS, evaluators should keep capacity at the forefront of their minds from the first meeting to the final iteration of the report. Some examples of capacity considerations during an evaluation include:

  • Developing the work plan and timeline: Try to plan meetings so that they occur only when necessary and include relevant stakeholders. Regular check-in meetings may not be helpful and may take staff away from program delivery – consider 1-2 page status reports instead.

  • Creating evaluation plan/questions: We often evaluate the efficiency of programs, but it would also be helpful if evaluators put more emphasis on capacity, or even included a sub-section focused on capacity. Develop questions that are feasible in a LRS and likely to uncover actionable findings (not just funder-mandated metrics). Check out Eval Academy’s How to Write Good Evaluation Questions for guidance on writing evaluation questions.

  • Collecting data: Try to pull from existing data, add questions to regularly administered surveys, or plan focus groups for days where stakeholders are already in the same location. If appropriate, consider methodologies such as Participatory Action Research (PAR) to build evaluation/research capacity of staff (Baum et al., 2006).

  • Presenting findings & recommendations: Include capacity building or capacity considerations for all recommendations – after all, recommendations are not likely to be adopted or sustained without capacity.

Whether you are an internal or external evaluator in a LRS, the program or agency should feel that you contributed capacity, whether through new knowledge, sound recommendations, or the development of internal evaluation capacity.

Be Ready to Adjust

As evaluators we like plans and go into projects with a clear timeline and a guide to what we would like to accomplish. Does it always go as planned? Definitely not! In all engagements we need to be ready to pivot or adjust our plan – this is even more true for evaluations in LRSs.

As I write this article, we are still deep in the uncertainty of COVID-19. Although many evaluators are already comfortable with remote working, many aspects of conducting an evaluation have had to change. Couple this with the complexities of a LRS and there is no choice but to be flexible and ready to adjust evaluations accordingly.

I expect that evaluators will have many successes (and some failures) to share about their evaluation practice during these times. For now, I will offer a couple of suggestions that I have found helpful while working on a LRS project during a pandemic.

  • Information gathering: Focus groups, interviews, and surveys may need to be facilitated online. Likewise, program staff may need to extract data and share it with the evaluator remotely. Workshop facilitation may require the evaluator to search for facilitation tools embedded in video chat platforms (such as surveys) or collaborative tools (such as a shared whiteboard).

  • Focus on the most important aspects: Some key evaluation topics or focus areas may need to change. Consider resource constraints or competing priorities (pandemic related or not) and the impact on the program you are evaluating. Ask questions like:

    • Are the evaluation topics/questions still valid?

    • Are the various evaluation phases feasible remotely and/or with less access to staff and the population being served?

    • How can evaluation evidence support the program as it adapts for COVID-19 or other resource constraints?

  • Revisit the timeline: Expect the timeline to change – whether it is rescheduling some of the meetings, changing them to virtual meetings, or completely revising the project work plan. For example, if the evaluation capacity is severely limited due to a pandemic or other LRS obstacles, look at postponing until there is more capacity. It may be better to postpone or extend the timeline than to sacrifice the quality of the evaluation.

Use Yourself as a Resource

In a LRS, answers to your questions or material to inform both the planning and execution of an evaluation may not be easily accessible (or in some cases may not be known). As an evaluator for a LRS program, internal or external, you will likely need to be your own resource when it comes to finding data, identifying stakeholders, or developing a general understanding of the program. Hopefully the program or agency at the center of the evaluation will be willing to share all relevant documents and offer some context – but in most cases you will need to be ready to dive in to uncover more. Here are a couple of tips to further explain this strategy:

  • Hands-on learning: Rather than drawing on capacity (resources or staff) to learn about the program or agency being evaluated, consider shadowing or observing the program (activities and meetings). This will prevent the evaluation from eating up too much staff time, and I would argue that it will also provide you with a richer understanding of the program being evaluated.

  • Self-led professional development: There may be minimal options for professional development in a LRS, especially if you are working as an internal evaluator. Connecting with other evaluators or professionals working on LRS projects is a great place to start. There may be existing Communities of Practice (in-person or virtual), and in my experience individuals working in the same region are more than happy to share their experiences. For evaluation-specific education or new methods (even seasoned evaluators need some inspiration every once in a while!), consider online resources, like EvalAcademy.

These strategies were summarized with LRS evaluations in mind, but I am confident they can be adopted for evaluation projects in all settings. If you have any other resources or strategies for evaluating in a LRS, please comment below.

 

Resource

Baum F, MacDougall C, Smith D. Participatory action research. J Epidemiol Community Health. 2006;60(10):854–857. doi:10.1136/jech.2004.028662



Written by cplysy · Categorized: evalacademy

May 18 2020

Utilization-Focused Evaluation: Defined Uses and Direct Users

We can sometimes be tempted to think that all evaluation is focused on use, but that is not the case. With “Utilization-Focused Evaluation” we refer specifically to an approach with systematic steps, in which the use and the users of the evaluation are well defined:

Utilization-Focused Evaluation (UFE), developed by Michael Quinn Patton, is an approach based on the principle that an evaluation should be judged by its usefulness to its intended users. Evaluations should therefore be planned and conducted in ways that enhance the likely use of both the findings and the process itself to inform decisions and improve performance.

UFE has two essential elements.

First, the primary intended users of the evaluation must be clearly identified and personally engaged at the beginning of the evaluation process to ensure that their primary intended uses can be identified.

Second, evaluators must ensure that these intended uses of the evaluation, by the primary intended users, guide all other decisions made about the evaluation process.

Rather than focusing on general and abstract users and uses, UFE focuses on real and specific users and uses. The evaluator’s job is not to make decisions independently of the intended users, but to facilitate decision making among the people who will use the findings of the evaluation.

Patton argues that research on evaluation demonstrates that: “Intended users are more likely to use evaluations if they understand and feel ownership of the evaluation process and findings [and that] they are more likely to understand and feel ownership if they have been actively involved. By actively involving primary intended users, the evaluator is preparing the groundwork for use.”

 

Utilisation Focused Evaluation (Patton, 2008, Chapter 3).

Written by cplysy · Categorized: TripleAD

May 18 2020

Collective Impact Forum (Lessons Learned)

Written by cplysy · Categorized: connectingevidence

May 18 2020

Practical Evaluation Tips in a Time of Crisis

Hi everyone-

Today I am joined by Jenn Ballentine of Highland Nonprofit Consulting to talk about, what else, evaluation in the time of COVID-19. Granted, my last blog was a bit of a rant, so today I would like to strike a more positive and helpful tone.

To tell you the truth, some of the conversation around data collection during the pandemic has me a little squirmy because it has felt kind of opportunistic. I don’t think rushing out to survey people when they are really worried and anxious feels helpful, or frankly ethical.

But we are evaluators, so we do believe evaluation is important and we just can’t stop doing what we do. I am a community psychologist and Jenn, a public health professional. We believe in a public health approach to prevention and in systems-level change. If this pandemic should teach us anything, it is that we are all connected. Systems-level change is needed now more than ever to correct the inequities in our society, so evident in the disproportionate impact of COVID-19 on communities of color.

Adaptions in the Time of Crisis

Sanjeev Sridharan recently wrote a thoughtful and poignant piece called Adaptions and Nimbleness in the Time of Crisis: Some Questions for Evaluators. In it, he observes that both program implementers and evaluators must now think about how to adapt. He raises a set of questions for evaluators to consider, and I urge you to read the article for yourself.

Today we would like to address the nonprofit and program implementers and provide some practical and feasible tips, inspired by some of the issues he raises.

Jenn and I are evaluating a federally funded teen pregnancy prevention program that, for the last year, has been implemented at five community-based centers for teenage boys and girls. I am also the evaluator for several Drug-Free Coalitions and alcohol and substance abuse prevention programs, all of which have a school component. Jenn serves as the evaluator for school-based sexual health education programs facilitated by a statewide training and advocacy organization.

As was to be expected, nearly all programming, and thus data collection, stopped in mid-March. This left Jenn and me wondering what the heck we were going to evaluate beyond the data we had already collected this year.

The technical assistance from funders, for the most part, included four specific questions:

  1. What were your intended enrollment numbers, what are your actual numbers, and what are the reasons for these differences?
  2. What is the status of your programming and how has that changed?
  3. How has data collection changed (number of pretests/number of posttests) and how were participants affected (e.g., missed content, sessions provided out of order, etc.)?
  4. How will the program use Continuous Quality Improvement (CQI) strategies to document and learn from the events?

What is missing here is the story, as Sridharan points out in his first question: “What are exemplars of good evaluation stories related to the adaptiveness/nimbleness of specific interventions.” Yes, we need to understand changes regarding what was planned versus what was done, but we need the why in order to tell the entire story. When did they have to close their doors, and why did they make that decision? What happened to staff, and why? And as a result of the situation, were program staff able to pivot, and if so, in what way? For example, did program staff decide to shift from in-person to online delivery?

One of my clients has shifted, rather nimbly I might say, to online meetings with their youth advisory committee. They are taking notes about their discussions and developing interventions that they can deliver online via social media. Similarly, another client that trains health and physical education teachers to implement comprehensive sex education offered to facilitate virtual lessons for one new district in an effort to ensure that students received this valuable information.

Another of Sridharan’s questions is “Are there examples of evaluations that have taken a developmental approach to enhance the coordination at this time of the crisis?”  He observes that the “pandemic has highlighted the need to better understand the connections between the intervention and its underlying systemic contexts/supportive structures.” During a time of crisis, coordination can be improved, perhaps accelerated, or could also break down altogether.

Some school systems, for example, have enlisted bus drivers, community volunteers, and even local law enforcement to deliver food to students eligible through the National School Lunch Program. Some have expanded food distribution beyond those eligible through these federal programs. Other school systems have maintained the status quo, requiring families and guardians to drive to school, with the eligible children present, to collect the food distribution. Those without transportation, or without a large enough car to transport the whole family, were out of luck.

There are a lot more gems to unpack (like the dynamics of vulnerability), but we will end with this question posed by Sridharan: “Can a focus on a minimal set of components needed to produce change help enhance a focus on meeting the needs of the disadvantaged given limited resources?”

For our teen pregnancy prevention program, we can’t even imagine where to start on a minimal set of components for this implementation fidelity evaluation. How do you deliver an evidence-based, comprehensive teen pregnancy prevention program virtually? Is it even ethical to do so with parents or siblings in the next room, or even the same room? What about students without internet access, laptops, or other devices, or students who must share devices with other youth?

Looking Forward

We are pretty sure that six months from now, funders will be asking nonprofits what happened. What is the program implementer to do? We think it’s critically important to document the changes programs made and the various ways in which the disruption impacted their organization and the people they serve. But program staff are busy people, especially during times of crisis. Evaluators can help the nonprofits they serve by helping staff document the changes they made and why they made them. Evaluators need to stress how this information will be useful when reporting to funders, partners, board members, and others. The learning that comes from this process can help the organization plan for future disruptions.

We developed a guide to help with this process. Just let me know that you want the guide and I will send it to you. Depending on the needs of your client and their situation, these questions can be adapted in a variety of ways. You might want to change the order, eliminate some questions, and add others. Do let us know what you think and whether you find it useful. Stay safe and well!

Written by cplysy · Categorized: communityevaluationsolutions

