
Jul 29 2025

Theory of Change: Realist, Contribution, or Theory-Based

Which methodological approach do we use with our Theory of Change? Realist evaluation, contribution analysis, or theory-based evaluation.

There is no single correct way to use a Theory of Change (ToC). It all depends on how we evaluate, for what purpose, and with which questions.

Introduction

A Theory of Change (ToC) is much more than an initial diagram. It can be a causal hypothesis to test, a narrative that articulates mechanisms of change, or a platform for learning from evidence. But for this to work, the ToC must be adapted to the methodological approach we adopt.

In many evaluations we fall into a trap: we design a solid ToC at the outset and then try to use it in the same way regardless of the approach adopted. The result: confusion, analytical weakness, and lost value.

In this article we explore three key methodological approaches, how each uses the ToC differently, and how we can combine them in a complementary way when we do so with strategic clarity.

1. Realist evaluation: What works, for whom, and under what conditions?

Definition: Realist evaluation (Pawson and Tilley, 1997) starts from the premise that programs do not work the same way in every context. We do not ask only "does it work?" but rather: for whom does it work, in what context, through which mechanism, and with what outcome?

C-M-O structure:

  • Context (C): conditions that enable or constrain change
  • Mechanism (M): internal processes that trigger change (motivation, incentives, capacities)
  • Outcome (O): the observable or expected effect

Example: In municipalities with committed political leadership (C), investment in prevention (M) generates community ownership of the model (O).

How we use the ToC:

  • We restructure the ToC into Context-Mechanism-Outcome configurations
  • We analyze how and why mechanisms work depending on the setting
  • We empirically validate the configurations that generate outcomes (see the sketch below)
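To make the C-M-O restructuring a little more concrete, here is a minimal sketch of how configurations could be recorded and filtered during analysis. This is purely illustrative and not part of the original article: the dataclass, field names, and example entries are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class CMOConfiguration:
    """One hypothetical Context-Mechanism-Outcome configuration from a ToC."""
    context: str      # condition that enables or constrains change
    mechanism: str    # internal process that triggers change
    outcome: str      # observable or expected effect
    supported: bool   # whether field evidence supports this configuration

# Illustrative configurations only; real ones come from the restructured ToC.
configurations = [
    CMOConfiguration(
        context="Committed municipal political leadership",
        mechanism="Investment in prevention activities",
        outcome="Community ownership of the model",
        supported=True,
    ),
    CMOConfiguration(
        context="High staff turnover in the municipality",
        mechanism="Investment in prevention activities",
        outcome="Community ownership of the model",
        supported=False,
    ),
]

# Keep only the configurations the evidence validates, mirroring the last bullet above.
validated = [c for c in configurations if c.supported]
for c in validated:
    print(f"In contexts of '{c.context}', '{c.mechanism}' plausibly produces '{c.outcome}'.")
```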

Advantages:

  • Ideal for complex or context-sensitive interventions
  • Fosters adaptive learning
  • Answers "how" and "why" questions about whether an action works

2. Contribution analysis: Did the program contribute to the observed change?

Definition: Developed by John Mayne (2011), contribution analysis seeks to estimate the extent to which an intervention contributed to an outcome, even if it cannot be claimed as the sole cause. It is useful when experimental designs are not feasible.

How we use the ToC:

  • We turn the ToC into a logical narrative of change (a "story of change")
  • We test that logic against empirical evidence
  • We identify external or alternative factors that also had an influence

Example: How do we show that the increase in service coverage is due to the model implemented, and not to simultaneous national reforms?

Key tools:

  • Document review
  • Interviews with key stakeholders
  • Triangulation of sources and elimination of rival explanations

Advantages:

  • Useful when direct causality cannot be established
  • Strengthens the argument that the program was part of the change
  • Promotes accountability without overstating effects

3. Theory-based evaluation: Does the evidence validate our logic of change?

Definition: This is the broadest and most flexible approach (Weiss, 1995; Rogers, 2008). It starts from the premise that every intervention rests on a theory about how and why change is expected to happen. The evaluation seeks to confirm, adjust, or refute that logic.

How we use the ToC:

  • We identify the central causal relationships in the ToC
  • We collect data to validate, qualify, or adjust each link
  • We reformulate the ToC if the evidence calls for it

Example: Do the municipalities that received training show concrete improvements in decision making? What data support that causal link?

Advantages:

  • Flexible and applicable to different types of evaluation
  • Adapts to evolving projects
  • Strengthens organizational learning

How do these approaches really differ?

Aspect | Realist evaluation | Contribution analysis | Theory-based evaluation
Central question | What works, how, and for whom? | Did the program contribute to the observed change? | Does the evidence confirm our logic of change?
Analytical logic | Context-Mechanism-Outcome configurations | Story of change plus evidence | Empirical validation of theoretical hypotheses
Level of precision | High (mechanisms and context) | Medium (reasoned attribution) | Variable (depends on design and scope)
Use of the ToC | Restructured as C-M-O | Narrative framework to test | Causal hypothesis to validate
Best suited for | Context-sensitive interventions | Programs with multiple influences | Complex or evolving projects

How and when could we use the three approaches in a complementary way?

Using more than one approach is not a problem if we do so with methodological clarity. The key is not to mix them haphazardly, but to combine them strategically and coherently. Here are some recommendations for doing so:

1. Start from the purpose of the evaluation

We should ask ourselves: Do we want to understand how change happens? Identify the program's contribution? Validate or adjust our initial logic? The answer will guide the choice of approach, or of a combination.

2. Give each approach a distinct role in the process

We can apply:

  • Theory-based evaluation to review and refine the initial ToC
  • Realist evaluation to explore what works in which contexts
  • Contribution analysis to assess the program's share in the observed results

3. Design a clear methodological architecture

It is worth making explicit, from the design stage, which approach we will use in each phase, with which tools, with what type of evidence, and for what type of analysis. This improves the quality and usefulness of the results.

4. Keep the ToC organized

We should not turn the ToC into a confusing diagram full of intermingled arrows, mechanisms, factors, actors, and indicators. We can rely on auxiliary versions, partial schematics, or matrices to make it easier to read and use.

5. Be realistic about time and resources

When resources are limited, it is better to apply a single approach well than to try to combine three without clarity. A rigorous evaluation does not always need more complexity, but better design decisions.

What happens if we confuse them or mix them without logic?

  • If we apply contribution analysis without examining external factors, the argument will be weak
  • If we use realist evaluation without properly mapping contexts, the analysis loses force
  • If we claim to use theory-based evaluation without testing hypotheses, the ToC becomes a decorative accessory

Interim conclusion: What matters is not how many approaches we use, but how we articulate them, what role we give each one, and how understandable the resulting framework is.

Conclusion: A useful ToC needs a clear approach (or a well-thought-out combination)

A Theory of Change is not applied the same way under every approach. For it to be useful, it must be well integrated into the methodological framework.

  • If we want to understand mechanisms and how they are triggered: use realist evaluation
  • If we seek to demonstrate a reasonable contribution: use contribution analysis
  • If we want to validate, adjust, and learn from our logic: opt for theory-based evaluation

And if we decide to combine approaches, let us do so with a clear logic, assigning each one a specific function within the overall design.

References

BetterEvaluation. (2023). Choosing an evaluation approach. Retrieved from https://www.betterevaluation.org/en/themes/approaches

Mayne, J. (2011). Contribution analysis: Addressing cause and effect. ILAC Brief No. 26. Institutional Learning and Change Initiative. Retrieved from https://idl-bnc-idrc.dspacedirect.org/handle/10625/48157

Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: SAGE Publications.

Rogers, P. J. (2008). Using programme theory to evaluate complicated and complex aspects of interventions. Evaluation, 14(1), 29–48. https://doi.org/10.1177/1356389007084674

United Nations Evaluation Group (UNEG). (2022). Norms and standards for evaluation in the UN system. New York: UNEG. Retrieved from https://uneval.org/document/detail/22

Weiss, C. H. (1995). Nothing as practical as good theory: Exploring theory-based evaluation for comprehensive community initiatives for children and families. In J. P. Connell, A. C. Kubisch, L. B. Schorr, & C. H. Weiss (Eds.), New approaches to evaluating community initiatives: Concepts, methods, and contexts (pp. 65–92). Washington, DC: Aspen Institute.

Note: This article was drafted with the support of artificial intelligence, which also suggested some of the references included. However, the central ideas, the approach, and the final selection of content are entirely my own.

Written by cplysy · Categorized: TripleAD

Jul 28 2025

How to Make the Theory of Change Useful in Evaluations

"The Theory of Change should not stay on paper. Its real value emerges when it structures the analysis and guides decision making."

Introduction: the dilemma of the decorative ToC

The Theory of Change (ToC) has gained ground as a strategic tool in program design and evaluation. From UNICEF to USAID, by way of GIZ, UNDP, and global foundations, everyone uses it. But there is a frequent problem: it is drawn up at the start and then abandoned.

The ToC often ends up as a pretty diagram, just another annex. It is not updated, does not structure hypotheses, and is not connected to the evaluation matrix.

So how do we make the ToC genuinely useful rather than merely decorative?

This post offers reflections on how to operationalize the ToC so that it serves as a methodological compass for evaluative analysis, drawing on experience and checked against good practice.

State of the art: evolution and key lessons

Key global reference points:

  • Carol Weiss (1995): Introduced the logic of change as a hypothesis to be validated.
  • ActKnowledge & Aspen (2004): Propose participatory tools for visualizing ToCs.
  • Mayne (2011): Develops contribution analysis.
  • Pawson & Tilley (1997): Creators of the realist approach, built on contexts, mechanisms, and outcomes.

Recent work:

  • OECD DAC (2019): Recommends that ToCs include explicit hypotheses and be evaluable.
  • BetterEvaluation (2023): Emphasizes the connection between the ToC, evaluation questions, and data.
  • UNEG (2022): Requires integrating the ToC with indicators, assumptions, and concrete evidence.

Common challenges when using the Theory of Change

Challenge | Consequence
ToC without explicit assumptions | Causal hypotheses cannot be tested
No indicators or expected evidence | Cannot be validated with data
Not linked to the evaluation matrix | Methodological fragmentation
Disconnected from the methodological approach | Low explanatory value
Not updated during the evaluation | Loses its usefulness as a living model

Analysis: how to move from a theoretical ToC to a useful one

Drawing on cases and experience, four operational strategies are proposed, complemented by lessons from USAID, DFID, and BetterEvaluation:

1. Enrich the visual ToC with analytical density

  • Add critical assumptions at each level of results (institutional, sociocultural, political).
  • Include plausible indicators and lines of evidence.

Example:

Assumption: "Communities recognize the value of the model."

Indicator: % of community activities implemented with a preventive approach.

Global recommendation: BetterEvaluation suggests attaching a table of assumptions and risks to each visual ToC.

2. Build a logical traceability table

It aligns evaluation questions (EQs) with:

  • The context, outcomes, and processes of the ToC
  • Key assumptions
  • Indicators
  • Tools
  • Data sources

This ensures analytical coherence and covers the relevant pathways of change; a minimal sketch follows below.

GIZ recommends "Impact Pathways" and USAID "Results Chains" to guarantee this traceability.
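As a rough illustration of what such a traceability table could look like in practice, here is a minimal sketch. The column names, the sample row, and the output filename are assumptions for the example, not an official template from any of the organizations mentioned.

```python
import csv

# Hypothetical column set for a logical traceability table; adapt to your own ToC.
COLUMNS = [
    "evaluation_question", "toc_element", "key_assumption",
    "indicator", "tool", "data_source",
]

rows = [
    {
        "evaluation_question": "EQ1: Did training improve municipal decision making?",
        "toc_element": "Intermediate outcome: strengthened planning capacity",
        "key_assumption": "Trained staff remain in post long enough to apply skills",
        "indicator": "% of annual plans using the new diagnostic tool",
        "tool": "Document review + key informant interviews",
        "data_source": "Municipal planning offices",
    },
]

# Write the table so every EQ stays traceable to its ToC element, assumptions, and evidence.
with open("traceability_matrix.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```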

3. Reformulate the evaluation matrix

  • Identify the ToC outcome evaluated by each EQ.
  • Integrate assumptions and expected evidence.
  • Group EQs by ToC outcome.

This facilitates causal validation and strengthens inferential analysis.

3ie (2021) suggests ToC-based matrices as a guide to the hypotheses to be tested.

4. Integrate the ToC methodologically

The ToC must be adapted to the evaluation approach being used. Here is how:

  • Realist Evaluation
      • Map Contexts (C), Mechanisms (M), and Outcomes (O)
      • Example: In municipalities with political leadership (C), investment in prevention (M) generates ownership (O).
  • Contribution Analysis
      • Use the ToC as a "story of change" to be tested against evidence.
      • Identify alternative contributing factors.
  • Theory-Based Evaluation
      • Establish how the data refute, adjust, or confirm the ToC.
      • Analyze intermediate outcomes, not only final ones.

Global good practices for operationalizing the ToC

Practice | What it contributes | Organizations
Iterative review of the ToC | Continuous adjustment based on evidence | UN Women, Save the Children
Focus on intermediate outcomes | Captures otherwise invisible processes | GIZ, AFD
Participatory co-construction | Improves local ownership | Oxfam, CARE
Combined use with Outcome Harvesting | Detects unanticipated changes | UNDP, FAO

Conclusion: The ToC should be a compass, not a decoration

A well-designed but unused Theory of Change is like a GPS that is switched off: it is there, but it does not guide.

Integrated as the backbone of the evaluative analysis, it delivers:

  • Clarity of hypotheses
  • Useful, triangulated evidence
  • Methodological coherence
  • Greater value for decision making

But achieving this takes more than a good diagram. It takes:

  • Technical resolve
  • Methodological rigor
  • Real dialogue between evaluation, planning, and data.

And you, how do you use the Theory of Change?

Is your ToC connected to your evaluation matrix?

Does it have indicators, assumptions, and evidence?

Does it help you validate hypotheses, or only justify results?

Share your experiences or questions in the comments.

References

  1. 3ie. (2021). Designing Evaluations with a Theory of Change.
  2. BetterEvaluation. (2023). Managing Evaluations using Theory of Change.
  3. DFID. (2012). Review of the Use of “Theory of Change” in International Development.
  4. Mayne, J. (2011). Contribution Analysis: Addressing Cause and Effect. ILAC Brief.
  5. OECD DAC. (2019). Better Criteria for Better Evaluation.
  6. Pawson, R. & Tilley, N. (1997). Realistic Evaluation. SAGE.
  7. UNEG. (2022). Norms and Standards for Evaluation in the UN System.
  8. USAID. (2020). How-To Note: Constructing a Development Hypothesis.
  9. Weiss, C. H. (1995). Nothing as Practical as Good Theory. Evaluation Practice.

Written by cplysy · Categorized: TripleAD

Jul 23 2025

Try This: Leading Strategic Planning with a Social Work Lens

Social workers can lead strategic planning processes, and this activity helps you name the skills and experiences you bring to lead your organization’s next strategic planning process.

The post Try This: Leading Strategic Planning with a Social Work Lens appeared first on Nicole Clark Consulting.

Written by cplysy · Categorized: nicoleclark

Jul 22 2025

Beyond Traditional Evaluation: How Design-Driven Developmental Evaluation Transforms Innovation

A synthesis of Cense’s approach to evaluation for learning and strategic innovation

We’ve published many articles over the years on Developmental Evaluation, learning systems and how to integrate design thinking into the process and as part of the outcomes. In this article, we outline how we practice this and what it means to do strategic design with a learning lens on how we innovate as we do it.

Innovation in complex human systems requires more than good intentions and clever ideas—it demands systematic ways to learn, adapt, and create value through intentional design. Over the years, our work at Cense has evolved from implementing developmental evaluation as an alternative approach to creating what we now call design-driven evaluation: an integrated methodology that weaves evaluation directly into the fabric of innovation and strategic development.

The Evolution of Our Thinking

From Evaluation to Strategic Learning Infrastructure

Traditional evaluation assumes stability—that the program being evaluated remains constant while users might change. Developmental Evaluation (DE) flips this assumption, recognizing that in complex systems, everything is in motion. Programs exist within dynamic environments where “the river I stand in is not the river I step in,” as Heraclitus reminds us.

But our understanding has deepened. We’ve moved beyond viewing DE as simply “evaluation for innovation” to recognizing it as fundamental infrastructure for organizational learning and strategic development. When properly embedded, evaluation becomes the nervous system of an organization—providing continuous feedback that enables real-time adaptation and learning.

The Design-Driven Evolution

This evolution led us to design-driven evaluation, where evaluation and design are intimately connected throughout the innovation process. Rather than bolting evaluation onto existing programs, we embed evaluative thinking from ideation through implementation, creating what we call “learning pathways” that capture insights at every critical juncture.

Design-driven evaluation recognizes that learning is both a journey and a destination. By approaching this journey through the lens of service design, we create systematic touchpoints for data collection and sensemaking that support continuous innovation and adaptation.

Core Principles That Guide Our Practice

Innovation as Learning Transformed into Value

At Cense, we define innovation as learning transformed into value through design. This isn’t about incremental improvement—it’s about fundamental development that creates new possibilities for organizations and the people they serve. Developmental evaluation provides the structured means to capture this transformation and guide it strategically.

Complexity as the Starting Point

We assume complexity from the beginning. Our evaluation frameworks are designed for organizations operating in conditions of volatility, uncertainty, complexity, and ambiguity (VUCA). This means creating evaluation systems that can adapt and evolve rather than rigid measurement frameworks that break under pressure.

Strategy and Evaluation as Inseparable

In our approach, evaluation is not separate from strategy—it’s a critical component of strategic development. This requires close integration between strategy development and evaluation, with bi-directional information flow ensuring that strategy informs evaluation and evaluation informs strategy simultaneously.

The Design-Driven Difference

Embedding Learning Throughout the Journey

Design-driven evaluation maps learning opportunities throughout service journeys rather than focusing solely on endpoints. By building evaluation layers onto service journey maps, we identify critical touchpoints where systematic data collection can provide insights into both process and impact.

This approach allows us to:

  • Identify activities and behaviours throughout the entire innovation journey
  • Provide user-centered perspectives on services and programs
  • Map the systems, processes, and relationships that users navigate
  • Anchor data collection to service transitions and critical decision points
  • Test and refine theories of change in real-time

From Reactive to Proactive Learning

Traditional evaluation often operates as a post-hoc assessment. Design-driven evaluation embeds feedback loops from the beginning, incorporating “the feedback, learning, and evidence-guided adaptation capacity into the fabric of the plan.” This transforms evaluation from a retrospective judgment into a proactive guidance system.

What This Means in Practice

The Three Essential Components

Successful developmental evaluation requires three integrated elements:

  1. Mindset: A complexity-ready orientation that embraces uncertainty and views programs as part of living systems
  2. Skillset: Facilitation, sensemaking, and developmental design capabilities that enable collaborative learning
  3. Toolset: Multi-method approaches that can capture the richness of complex system interactions

Resource Integration

Effective implementation requires both external perspective and internal knowledge. External evaluators bring the emotional and perceptual distance to see patterns hidden in plain sight, while internal evaluators provide crucial context and nuance. This collaborative model ensures both rigor and relevance.

Strategic Planning Integration

Design-driven evaluation transforms strategic planning by embedding learning mechanisms directly into implementation plans. Rather than creating static five-year plans, we develop adaptive strategies with built-in sensing and sensemaking capabilities that allow organizations to navigate uncertainty with confidence.

The Value Proposition

Evidence for Innovation

Without systematic evaluation, innovation remains wishful thinking rather than demonstrated impact. Our approach ensures that organizations can document not just what they’re doing, but what’s working, why it’s working, and how to replicate or scale successful innovations.

Quality Assurance for Design

Design-driven evaluation assesses not just whether programs achieve their intended outcomes, but whether they’re designed well enough to achieve positive impact in the first place. Many innovations fail not because they’re poorly implemented, but because they were never designed to succeed.

Organizational Learning Capacity

By embedding evaluative thinking throughout organizational processes, we help build internal capacity for continuous learning and adaptation. This creates resilient organizations that can thrive in complex, changing environments.

Looking Forward

As the complexity of challenges facing health and human service organizations continues to increase, the need for sophisticated learning systems becomes more critical. Design-driven developmental evaluation provides a framework for creating these systems—not as additional burden, but as essential infrastructure for innovation and impact.

The integration of design thinking with evaluative practice creates new possibilities for organizational learning and strategic development. By treating evaluation as a design challenge and embedding it throughout innovation processes, we transform evaluation from an external judgment into an internal capacity for wisdom and adaptation.


At Cense, we specialize in creating design-driven evaluation systems that support innovation, learning, and strategic development in complex organizational environments. Our approach combines rigorous evaluation methodology with innovative design thinking to create learning systems that guide transformation and demonstrate impact.

Ready to explore how design-driven developmental evaluation can support your organization’s innovation goals? Contact us to discuss how we can help you build systematic learning into your strategic development process.

Written by cplysy · Categorized: cameronnorman

Jul 22 2025

Does evaluation have a resource problem?

This post ended up being really long, with lots of data. If you don’t feel like reading it, here is the TL;DR version.

  • Evaluation does indeed have a resource problem.
  • Government funding changes have made it worse.
  • I have a plan to do something about it.

I wrote a bit about watching so many of our peers losing their jobs back in February. Unfortunately that was not just a temporary thing. This has been a hard year for lots of evaluators and researchers. Especially for anyone who works in or with the US federal government. But I can’t help but wonder about the longer term implications that will result from this shock to the system.

Here is what I know.

I’ll let other experts talk more about federal and contractor jobs lost. And I know the American Evaluation Association has been doing research on the impact on the evaluation workforce. So I’ll focus on what I know best, evaluation on the web.

I started blogging before I started cartooning and before I started designing. It was 2009 and I was a data specialist in a non-profit just starting to learn about evaluation. At that time blogging was in its heyday. Twitter was still young with 140 character limits and no images. Instagram had yet to launch.

It wasn’t long before influencers across all platforms started to move more into social media, podcasting, and video. And while blogging today is not irrelevant, it’s not the dominant form of personal online publishing that it once was.

And if you’re wondering what this has to do with anything, keep reading and I’ll show you.

The information behind the internet.

Google and ChatGPT are not magic. They are just tools that allow us to experience lots and lots and lots of information. But they require input. And the better the input, the better the output.

This is why early in my career I spent a lot of time trying to recruit new evaluation bloggers. My first evaluation conference presentation was a pitch for new evaluation bloggers (you can actually still watch it on YouTube). I even created a website (Eval Central) to help ensure that new bloggers had access to an initial audience so they wouldn’t get discouraged and stop blogging.

A few of these early evaluation bloggers are names you will certainly recognize (Stephanie Evergreen, Ann K Emery, Sheila Robinson, and many more). And even though many of us ended up blogging more about data visualization, reporting, and presentation design, evaluation was the starting point.

Of course, the fact that many of us did move away from evaluation in our posts does point to a central issue.

Our Internet Body of Work Problem.

Patricia Rogers launched Better Evaluation in October of 2012. This evaluation resource site was built around a central framework, drew content from expert contributors, and created a lot of high-quality evaluation resources.

Because of all that work, if you were to ask Google a question about evaluation, often you would find your way to a page on the Better Evaluation website. Just like if you were to look for an evaluation cartoon you would likely find your way here to Fresh Spectrum.

Like academic literature, where the goal is to build upon a collective body of work, the internet is its own body of work. The two bodies of work do overlap, but the fact that so many academic institutions and publishers paywall their publications keeps the academic body of work mostly off the web.

What this means is that the importance of a resource library, like the one on Better Evaluation, should not be undervalued. And while there are many other resource libraries, many of them are unfortunately just not good at publishing web-friendly content. And even those that are just don’t publish enough.

A simple analysis.

I’ve given you a lot of (informed) opinions thus far, but we’re data people. So let’s go deeper.

I put together a list of evaluation resource sites. Just to note, I tried to go pretty wide but I know I’m missing a good number of sites. If you notice any BIG evaluation resource websites missing from this list let me know in the comments.

This list includes university resource sites, some consultancies, international efforts, big association sites, and more. I also put my website in for context but generally avoided most personal blogs that write a lot about other topics.

There are 52 sites here.

  • AEA365
  • African Evaluation Association
  • American Evaluation Association
  • Australian Evaluation Society
  • BetterEvaluation
  • Brown School Evaluation Center
  • Canadian Evaluation Society
  • Center for Evaluation and Educational Effectiveness
  • Center for Evaluation Innovation
  • Center for Evaluation Research and Methodology
  • Center for Evaluation, Policy, & Research
  • Center for Program Evaluation and Quality Improvement
  • CREA
  • CRESST
  • Data for Impact Project
  • Equitable Evaluation
  • European Evaluation Society
  • Eval Academy
  • Eval and Ink
  • EvalCentral
  • EvalCommunity
  • EvalPartners
  • Evaluation, Assessment, and Policy Connections
  • EvalYouth
  • Expanding the Bench
  • FreshSpectrum
  • German Institute for Development Evaluation
  • Global Evaluation Initiative
  • Harvard Family Research Project
  • IDEA Data
  • Impact Entrepreneur
  • Indigenous Evaluation Network
  • Innovation Network
  • Institute for Assessment and Evaluation
  • International Program for Development Evaluation Training
  • MEASURE Evaluation
  • MERL Tech
  • Office of Assessment, Evaluation, and Research Services
  • Participatory Evaluation Network
  • Regional Educational Laboratories
  • TCC Group
  • The Evaluation Center
  • The Evaluation Group
  • The Social Research and Evaluation Center
  • Tools for Development
  • U.S. Government Evaluation Portal
  • UK Government Evaluation
  • UN Evaluation Office
  • USAID Evaluation
  • USAID Learning Lab
  • Utilization-Focused Evaluation
  • What Works Clearinghouse

To gather the data I used a tool called Ubersuggest, which looks at search engine data.

I looked at only two numbers: Domain or Page Authority, and Estimated Traffic for the month of June.

SEO Authority (i.e., Domain/Page Authority) is a composite rating that takes account of backlinks (websites that link to that website) and search placement (where a website shows up when someone searches for a keyword on Google). It’s a proxy for how SEO people guess Google treats website authority.

Estimated Traffic is based on the appearance and placement of the website in specific keyword searches. The tool then estimates the number of people who likely clicked on that link based on placement and the number of people generally searching for that term.
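For readers who want to run this kind of tally themselves, here is a minimal sketch, assuming the authority and traffic figures have already been exported by hand into a CSV. The filename and column names are my own placeholders, not an Ubersuggest export format, and the 3K cutoff simply mirrors the threshold used later in this post.

```python
import csv

THRESHOLD = 3_000  # monthly estimated search visits cutoff used in the discussion below

# Assumed hand-built file: one row per site with its authority score and June traffic estimate.
with open("evaluation_sites_june.csv", newline="", encoding="utf-8") as f:
    sites = list(csv.DictReader(f))  # columns: site, domain_authority, estimated_visits

over_threshold = [s for s in sites if int(s["estimated_visits"]) > THRESHOLD]

print(f"{len(over_threshold)} of {len(sites)} sites exceeded {THRESHOLD:,} estimated search visits:")
for s in sorted(over_threshold, key=lambda s: int(s["estimated_visits"]), reverse=True):
    print(f'  {s["site"]}: ~{int(s["estimated_visits"]):,} visits (authority {s["domain_authority"]})')
```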

Most evaluation sites receive fairly low amounts of search engine traffic.

Not all sites are designed for search. Some resource sites will rely pretty heavily on email newsletters or other means to connect with their readers.

But generally, when it comes to web searches, a lot of resource sites get only a little bit of traffic through search.

Only 10 of the 51 sites (not including my own) likely had more than 3K visits last month from search.

These websites include Better Evaluation, Eval Academy, Eval Community, the American Evaluation Association, the Canadian Evaluation Society, and Tools for Development.

Four of those ten have either been shut down or are at risk of being shut down due to changes in federal budget priorities.

The USAID Learning Lab is completely gone. Visiting the site will get you an error message. Note: In order to get some traffic data I used February estimates (instead of June) for this analysis. This is the only exception I made.

MEASURE Evaluation and Data for Impact are both still live. They are run by the Carolina Population Center at the University of North Carolina Chapel Hill. BUT…it is clear on both websites that they are the products of USAID funding.

Data for Impact’s last blog post was in November of 2024. The last updates I found on MEASURE Evaluation were from 2021. It’s possible that the site is one of many legacy websites produced with funding from the US government but not in continued operation.

The What Works Clearinghouse website, run by the US Department of Education’s Institute of Education Sciences, last published a news article in December of 2024. And as of March, IES is down to 20 federal employees, from 175 at the end of last year.

How does a website that rarely gets updated maintain an estimated 20K views a month from search?

If you’re still scratching your head at how search statistics work, this might help.

MEASURE Evaluation has been able to accumulate an amazing number of backlinks over the years. Again, this is the number of links you’ll find on other websites that lead back to MEASURE. Because of that, it has pretty high domain authority.

Compared to other sites, 20K visits sounds pretty good. But that’s actually a low point for this site. Back in December of 2023 it had an estimated 71K visits.

So what does this all mean to a regular evaluator searching for information on the web?

Let’s say you type into Google “Evaluation News.” Based on the SEO keywords, you are likely going to see a link from MEASURE Evaluation come up first in the list of non-promoted, non-AI links.

MEASURE Evaluation is also going to be the top source for most people looking for “Data Quality Assessment Tools.”

As long as the website continues to be live, and as long as it’s still useful, these posts will likely stay high in web searches. That is unless another agency or individual writes a better article.

What happens when we completely lose a website?

As I mentioned before, USAID Learning Lab was taken completely offline. If you go to visit the website, you’ll get an error page.

Let’s go ahead and look it up in Ubersuggest. Even though it’s gone, it still shows pretty high domain authority.

But if we look at the traffic, you’ll see it drops to nothing from a high of 38K visits in August of 2024 (just a year ago).

Now let’s look at the backlinks. According to Ubersuggest it has 97,690 backlinks from 3,667 unique web domains.

But let’s dig down and look at those individual links. One of the first ones that comes up is a link on the Wikipedia Page for Learning Organization.

That page still lists that link as live, along with links to AgriLinks, DRGLinks, Edulinks, and ResilienceLinks. All of these links are now dead ends.

Let’s look at another. Here is a link from a Report to Congress by the State Department on Evaluation Quality, Cost, and other matters.

Here is what you’ll find in the text of that report.

USAID: USAID is recognized as a leader in evaluation among federal agencies. USAID’s evaluation policy (note: this page is also dead as it lives on the USAID website) sets specific guidelines for how to conduct high quality performance and impact evaluations. USAID has a publicly available evaluation toolkit which provides step-by-step resources to staff designing and managing evaluation to ensure that they are of high quality.

It sounds like a pretty cool evaluation toolkit; too bad the link is now dead.

You may suggest that we could go back and use the Internet Archive’s Wayback Machine to recover some of these missing posts and documents.

And while that’s true, unless action is taken by the holder of the domain/website, those missing posts and documents will never be connected to the appropriate backlinks.

The loss of that one website just made 97,690 links dead ends.

Is this an opportunity for evaluation bloggers?

Technically, yes. It is an opportunity to fill gaps in the body of work that is evaluation on the web.

Only one problem. We have fewer evaluation bloggers now. And many of the highest authority bloggers have moved into parallel fields.

On the American Evaluation Association website there is a page that lists 92 evaluation blogs. That sounds good, except that the page is not really updated…ever.

Only 34% of evaluation blogs remain active, with 17% posting monthly (highly active) and 16% posting quarterly (moderately active). Nearly half (45%) are inactive, having not posted in over two years. Additionally, 16% have changed domains or moved to different platforms, while 5% could not be located at all.

And among the 34% of evaluation blogs that remain active are several that rarely talk about evaluation.
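If you wanted to reproduce that kind of activity breakdown, a rough sketch like the following would do it, assuming a hand-collected list of blogs with the date of each blog’s most recent post. The input file, column names, and bucket thresholds here are my own assumptions for illustration, not the method actually used for the figures above.

```python
import csv
from datetime import date, timedelta

today = date.today()

def classify(last_post: date | None) -> str:
    """Bucket a blog by how recently it last posted (thresholds are illustrative)."""
    if last_post is None:
        return "could not be located"
    age = today - last_post
    if age > timedelta(days=730):
        return "inactive (2+ years)"
    if age <= timedelta(days=31):
        return "highly active (monthly)"
    if age <= timedelta(days=92):
        return "moderately active (quarterly)"
    return "occasionally active"

# Assumed hand-collected file with columns: blog, last_post (YYYY-MM-DD, or blank if not found).
counts: dict[str, int] = {}
with open("evaluation_blogs.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        last = date.fromisoformat(row["last_post"]) if row["last_post"] else None
        bucket = classify(last)
        counts[bucket] = counts.get(bucket, 0) + 1

total = sum(counts.values())
for bucket, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{bucket}: {n} ({n / total:.0%})")
```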

What about our existing resource sites, like Better Evaluation?

Before I get into this, let me note something. I think Better Evaluation is the field’s best designed resource site. The internet is constantly changing through big shifts (hello AI) and little ones (periodic Google search algorithm changes). In that way, big changes in traffic could have absolutely nothing to do with organizational changes.

In other words, this is not a criticism of Better Evaluation’s new management, just a little analysis of the available data.

Better Evaluation became part of the Global Evaluation Initiative back in November 2022. Patricia Rogers, original founder and CEO, stepped down in August of 2021 after a decade at the helm.

So what did that mean for one of the biggest resources in the evaluation world?

First, a reduction in content. You can see it in the blog archives. In the three full years after Patricia left, Better Evaluation posted 28 blog posts. In the three full years before she left, the site posted 50.

The Better Evaluation YouTube page shows us a pretty steep decline in videos posted. Better Evaluation led a number of webinars and then posted those webinars to YouTube. But there hasn’t been a new webinar for 3 years.

The site was never a video-first website; it was a resource website. And as you have seen from the data above, it is still one of the most popular. But even as a resource site, we see a huge drop in traffic since 2023.

So where are we now as a field?

It’s true that certain parallel fields (such as data visualization) seem to have become more popular than program evaluation. But at the same time, evaluation is more popular on the web now than it has been over the last 20 years. To see some of those relevant changes, just head to Google trends.

  • Sometime in the middle of the 2010s, data visualization traded spots with program evaluation in terms of relative search popularity.
  • If you isolate search terms and just look at “evaluation jobs,” we hit a search high point in October of 2024.
  • Looking at “evaluation resources,” we also hit a high point in October of 2024.
  • The same is true for searches on the words “evaluation support.”

In case I haven’t made it clear.

I believe the field of evaluation has a resource problem. There is still demand for good resources, but less supply actually showing up in web searches. And if this is important to us as a field, I don’t think we can rely totally on amateur bloggers and creators to fill the gap.

Relevance in the Digital Age.

I’ve found that most people who do evaluation do not consider themselves evaluators (I call them evaluation-doers).

They are the program managers, non-profiters, researchers, health workers, and all sorts of other people who do evaluation work. They often don’t have lots of evaluation experience or academic training in evaluation, and becoming an evaluator is not their goal. So we can’t rely on associations or universities for everyday support.

They learn evaluation the way people learn anything else: they go to Google. They type in questions like how to do an evaluation for a grant, what is a developmental evaluation, how do I get better survey response rates, or how do I plan a focus group?

The quality of the sites that come back from the search will have an almost direct impact on the quality of data work done by the searchers. Showing up in those searches is one way for our field to stay relevant in the digital age. And we’re losing ground.

My new plans for Eval Central.

Here we are, full circle.

I’ve decided to do something about the gap in evaluation content. Because I know that we evaluators DO have agency in what shows up in web searches and on YouTube.

I plan to rebuild Eval Central. This time as a Community of Practice and Capacity Building Hub.

The other thing I’m doing this time is seeking funding BEFORE I launch.

My plan for the Eval Central CoP is a scaling up of a model I’ve been developing with the CDC over the past 5 years.

And if it gets funded, here is what you can expect.

  • Open (FREE) to ALL evaluators across the globe.
  • Premium (FREE) evaluation training courses.  
  • Monthly webinars (FREE) & office hours (FREE).
  • Designed for evaluation practitioners.
  • Also designed for people who do evaluation work but don’t call themselves evaluators (evaluation-doers).
  • Built on a modern community platform designed to support Peer to Peer connection.
  • Led by me, Chris Lysy of freshspectrum.com

Think Rachael Ray, NOT Le Cordon Bleu.

Now I need a favor. I’ve built a proposal but it would be helpful if I could show that there is interest in this kind of effort. So I also built a Community Waitlist.

If you are interested in this FREE community, click on the link and add your email.

And if you know anyone at a philanthropy, business, or other organization that might be willing to help sponsor this new community, can you put them in touch with me?

In addition to sponsoring something that will provide a LOT of evaluation community value, I also have some pretty valuable sponsor incentives to offer.

Just send me an email or schedule a time to chat via my calendar (https://calendly.com/clysy/30min).

Written by cplysy · Categorized: freshspectrum
