
The May 13 Group


allblogs

Sep 07 2025

Modern Perspectives in Program Evaluation: An Inclusive Approach

As a follow-up to the previous post, Foundations of Program Evaluation: An Essential Book, in the Books on Evaluation series, we will look at how the groundbreaking model of Foundations of Program Evaluation by Shadish, Cook & Leviton has been enriched, since its foundational milestone (1991), by the debates that moved beyond it.

1. A legacy that doesn’t expire: why are the five evaluation families still the book’s most cited section?

As we discussed in our previous post, Foundations of Program Evaluation: An Essential Book, Shadish, Cook, and Leviton (1991) revolutionized evaluation by proposing five theoretical families (experimental, descriptive, use-focused, values-focused, and context-focused). It was a shared epistemic map that:

  • Brought order to a fragmented field, serving as a pedagogical language and a reference in academic programs.
  • Acted as a bridge between diversity and unity, validating plurality rather than imposing a single model.
  • Became a “skeleton of the field,” cited in handbooks, university courses, and review articles.

2. Beyond the map: perspectives that move past it

Later perspectives, however, completed or complemented it:

a) Culturally responsive and decolonial evaluation (CRE / Kaupapa Māori)

  • Hood, Hopson, and Kirkhart (2015) showed that evaluation must integrate culture and social justice.
  • The Kaupapa Māori approach (Cram, 2016) challenges universal frameworks and proposes practices built from and for Indigenous communities.

📖 Readings:

  • Hood, Hopson & Kirkhart (2015): Culturally Responsive Evaluation – Experts Illinois
  • Cram (2016): Lessons on Decolonizing Evaluation from Kaupapa Māori – University of Toronto Press

b) Transformative paradigm and social justice

  • Donna M. Mertens (2007) proposes a transformative paradigm that connects research and evaluation with human rights, meaningful participation, and the reduction of inequalities.
  • She argues that every methodological decision is also an ethical and political one.

📖 Reading:

  • Mertens (2007): Transformative Paradigm: Mixed Methods and Social Justice – Sage Journals

c) Systems thinking and complexity

  • Williams & Hummelbrunner (2010) broadened the horizon with systems thinking, offering tools for tangled problems and adaptive contexts.
  • Rather than pigeonholing families, these tools allow hybrid combinations and adaptive learning.

d) Empowering, use-oriented practice

  • Shadish et al. (1991) was criticized for being too theoretical.
  • In response, models emerged such as Utilization-Focused Evaluation (Patton, 1997, 2008) and Empowerment Evaluation (Fetterman & Wandersman, 2005), which offer practical guidance and put control in the hands of users.

3. Evolutionary timeline

1991 – Foundational map (Foundations of Program Evaluation)

1990s – Early culturally responsive approaches (the roots of CRE lie in 1990s debates; stronger academic consolidation came in 2015 with Hood et al.)

2007 – Transformative paradigm (Mertens)

2010 – Systems thinking (Williams & Hummelbrunner)

2010s–2020s – Decolonial and Kaupapa Māori approaches (Cram, 2016 onward)

Today – Empowerment and usability (Patton, Fetterman, contemporary reviews)

4. Closing

The 1991 book remains a historic station. But today the train rolls on with new cars: culture, justice, systems, and empowerment.

The question that remains: do you stay in the foundational car, or move ahead onto the new rails of a more situated and transformative evaluation?

References

  • Cram, F. (2016). Lessons on decolonizing evaluation from Kaupapa Māori evaluation. Canadian Journal of Program Evaluation, 30(3), 296–313. https://utpjournals.press/doi/abs/10.3138/cjpe.30.3.04
  • Fetterman, D. M., & Wandersman, A. (2005). Empowerment evaluation principles in practice. Guilford Press.
  • Hood, S., Hopson, R., & Kirkhart, K. (2015). Culturally responsive evaluation. In K. Newcomer, H. Hatry, & J. Wholey (Eds.), Handbook of practical program evaluation (4th ed., pp. 281–317). Jossey-Bass. https://experts.illinois.edu/en/publications/culturally-responsive-evaluation
  • Mertens, D. M. (2007). Transformative paradigm: Mixed methods and social justice. Journal of Mixed Methods Research, 1(3), 212–225. https://journals.sagepub.com/doi/10.1177/1558689807302811
  • Patton, M. Q. (1997). Utilization-focused evaluation (3rd ed.). Sage.
  • Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Sage.
  • Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Sage.
  • Williams, B., & Hummelbrunner, R. (2010). Systems concepts in action: A practitioner’s toolkit. Stanford University Press.

Note: This article was written with the support of artificial intelligence, which also suggested some of the bibliographic references included. However, the core ideas, the approach, and the final selection of content are entirely my own.

Written by cplysy · Categorized: TripleAD

Sep 06 2025

Foundations of Program Evaluation: An Essential Book

At this stop of the Books on Evaluation Train we meet a classic that changed how the discipline is understood: Foundations of Program Evaluation: Theories of Practice (1991), by William R. Shadish, Thomas D. Cook, and Laura C. Leviton.
A book that sought to organize the “chaos” of theories that existed up to that point and to bring coherence to the practice of evaluating social, educational, and public policy programs.


1. Overview

The book proposes an integrative vision of evaluation, systematizing decades of debate. Its main goal: to connect theory and practice, showing that evaluation is not a single technique but a plural field with multiple conceptual frameworks.


2. Chapter analysis

The work is organized into three blocks:

  1. Foundations: what evaluation is, its purposes, and its dilemmas.
  2. Review of theories: analyzes five major families:
    • Experimental (with a strong inheritance from Donald T. Campbell).
    • Descriptive/qualitative (inspired by Lee Cronbach).
    • Use-focused (influenced by Carol Weiss).
    • Values-focused (à la Michael Scriven).
    • Focused on context and decision making (Daniel Stufflebeam and the CIPP model).
  3. Integration and practice: how to choose approaches suited to the context, and how evaluators must balance rigor, values, and usefulness.

3. Main themes and messages

  • The plurality of theories as a strength, not a weakness.
  • Evaluating is always a social and political act, as well as a technical one.
  • Central message: there is no single recipe, but a menu of approaches that must be adapted.

4. Innovation and added value

The novelty lay in articulating previously scattered theories into a coherent framework. The added value: giving evaluation academic standing and showing it to be a discipline with solid foundations, beyond an administrative practice.


5. Practical usefulness

In daily life or at work, this book is a reminder that:

  • Not every method works for everything.
  • Evaluation should be useful, contextual, and critical.
    Practical tip: before running a questionnaire or an experiment, ask yourself whether it answers the real questions of the program’s users.

6. Criticism and opinions

It has been criticized as dense and demanding. But it is also recognized as “the great synthesis” of evaluation theory. It is one of those texts that appear in almost every university course bibliography.


7. Comparison with other works

Unlike more practical books (such as Patton’s or Rossi’s), Shadish et al.’s work is more theoretical and philosophical. Compared with other works by the same authors, this is their most integrative and ambitious book.


8. Personalized recommendations

If you enjoyed this book, you will like:

  • Utilization-Focused Evaluation (Patton, 1997).
  • Evaluation Theory, Models, and Applications (Stufflebeam & Shinkfield, 2007).

9. Cultural and social impact

  • In the 1990s it became a reference manual in graduate programs in the United States and Europe.
  • Since 2000, it has been a key piece in consolidating evaluation as an independent academic field (the American Evaluation Association cites it as one of the most influential texts).
  • Its framework inspired institutions such as the OECD and the World Bank, which adopted more integrated perspectives on evaluation.

10. Editions and/or versions

It has no expanded re-editions, but its ideas feed later articles and handbooks. Today it is complemented by approaches that were less developed in 1991: participatory, equity-based, and social justice evaluations.


11. Sources and citations

The authors drew on several key currents and figures:

  • Donald T. Campbell → experimental and quasi-experimental rigor.
  • Michael Scriven → values-based evaluation.
  • Carol Weiss → use-focused evaluation.
  • Peter Rossi → applied social research and social programs.
  • Lee Cronbach → a qualitative, contextual vision.
  • Daniel Stufflebeam → the CIPP evaluation model for decision making.

According to Google Scholar (2025), the book has more than 9,000 academic citations. The most cited section is the classification of the five families of evaluation theories (Chapters 3 to 7), where the authors organize the field and propose how these theories relate to one another. This conceptual framework is the book’s most influential contribution and is quoted directly in handbooks, articles, and training programs.


12. Short author bios

  • William R. Shadish (1949–2016): psychologist and methodologist, a disciple of Donald Campbell, and a pioneer in integrating experimental methods with evaluation theory.
  • Thomas D. Cook (b. 1942): renowned for his work on quasi-experimental methods and their application to social policy.
  • Laura C. Leviton: psychologist and evaluator, affiliated with the Robert Wood Johnson Foundation, focused on public health and social programs.

Final reflection

Foundations of Program Evaluation is the conceptual map of evaluation par excellence. It can be challenging, but it is also inspiring: a reminder that evaluating is not just measuring, but thinking, judging, and acting in complex contexts.

At TripleAD we see it as an essential station on the journey of the Books on Evaluation Train: a place where you get off, check your compass, and return to the road with greater clarity.


References

  • Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Houghton Mifflin.
  • Cronbach, L. J. (1982). Designing evaluations of educational and social programs. Jossey-Bass.
  • Patton, M. Q. (1997). Utilization-focused evaluation (3rd ed.). Sage.
  • Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Sage.
  • Scriven, M. (1967). The methodology of evaluation. In R. E. Stake (Ed.), Curriculum evaluation (pp. 39–83). Rand McNally.
  • Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Sage.
  • Stufflebeam, D. L. (1971). The relevance of the CIPP evaluation model for educational accountability. ERIC.
  • Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models, and applications. Jossey-Bass.
  • Weiss, C. H. (1972). Evaluation research: Methods of assessing program effectiveness. Prentice Hall.

Note: This article was written with the support of artificial intelligence, which also suggested some of the bibliographic references included. However, the core ideas, the approach, and the final selection of content are entirely my own.

Written by cplysy · Categorized: TripleAD

Sep 03 2025

Understanding AI: Intelligence versus Wisdom

Summary: This new series is a kind of travelogue. It’s a way for me to document my experiences with AI. Today’s question: can an AI tool be wise?

Missing point of view.

I think it’s important to find ways to differentiate how conversational AI works compared to human beings. Because there are times when it seems frightfully similar.

One area is perspective.

Right now pretty much all the leading conversational AI tools are built to have a kind of generic point of view. They’re all trying to be helpful and guide their users through their own discovery process. This isn’t the way people work. Not only do we have knowledge, but we also have a point of view. That point of view is developed over time through experience and socialization.

These different perspectives are part of what makes diversity so important. Different people with different perspectives will take the same information and come up with different interpretations.

You can actually mimic this through a chat with your favorite AI tool. Ask the tool to take on the persona (or roleplay) as someone. It could be a general persona, like an experienced plumber or a high school social studies teacher. Or it could be a well known person, like an author or public figure. Then ask it a series of questions.

The answers to those questions will change based on the role you give the AI tool, at least to a point. AI is just roleplaying, and sometimes it goes a little overboard. For example, here is a prompt where I asked Claude to respond to a basic question as if it were Richard Feynman.

Prompt: Can you give me a definition of evaluation but do so as if you were Richard Feynman?

Claude Sonnet 4:
adjusts in chair and grins

Alright, so you want to know what “evaluation” means? Let me tell you something – most people make this sound terribly complicated when it’s really quite simple.

Evaluation is just figuring out if something is any good or not. That’s it! You’ve got some thing – could be a student’s paper, could be a new theory about atoms, could be whether your car mechanic did a decent job – and you want to know: is this thing worth a damn?

This is just an example from a conversation with Claude.ai; the AI kept going for a bit.
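If you would rather run this experiment from code than from the chat window, here is a minimal sketch using the Anthropic Python SDK. The model id, persona line, and question are illustrative assumptions; swap in whatever model is current.

import anthropic

# Minimal sketch of persona prompting (assumes the anthropic package is
# installed and ANTHROPIC_API_KEY is set in the environment).
client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model id; use whatever is current
    max_tokens=400,
    # The system prompt assigns the persona; changing it shifts the answer's point of view.
    system="Respond as if you were Richard Feynman.",
    messages=[
        {"role": "user", "content": "Can you give me a definition of evaluation?"}
    ],
)
print(response.content[0].text)

Re-running the same question with a different system line (an experienced plumber, a high school social studies teacher) is a quick way to watch the point of view change while the underlying knowledge stays fixed.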

Wisdom is more than a collection of knowledge.

So, most AI tools have access to a lot of information and can generally handle a lot more info than a human can before getting overwhelmed (at least in some respects, since they don’t see, feel, or think about other things at the same time the way a human would).

But wisdom is much more than just access to a collection of knowledge. To make it easy to discuss, let’s simplify with a fake equation: Wisdom = Point of View + Knowledge + Time.

For an AI tool, Knowledge is generated through a specific training database and a specific training approach.

The default Point of View is the result of the information it was trained on, and the design choices made by/selected by the model development team.

Time is mostly just a snapshot, at least by the time we experience the tool.

A human point of view is going to change considerably over time, ultimately shifting how information is viewed.

Could an AI model be built with wisdom? Perhaps, at least it could imitate wisdom. But I think it’s going to be one of the areas where humans will continue to have an edge.

The race towards the best average.

Right now there are several tools locked up in a very weird race. They are all trying to be more useful than all the other tools at doing all the things. Basically, it’s a race towards being the best average.

I can’t see it staying like this.

There is this story about cockpit design in airplanes. It goes like this: early on in plane design, the Air Force tried to design a jet cockpit around the idea of an average pilot. The problem is that there is no average pilot. So by creating a cockpit for the average person, you are essentially creating a cockpit for nobody. The better way forward is to build an adjustable cockpit, so you can meet the needs of different people.

This is starting to happen in AI, with different conversational AI models being offered at the same time, each trained toward different uses. My guess is that over time we’ll see an increase in specialized AI, built from specific training libraries and trained to serve different needs.

That’s all I got for today. What have you discovered about AI lately?

Written by cplysy · Categorized: freshspectrum

Sep 03 2025

Ask Nicole: How Can We Make Research Participation More Accessible?

Making your research accessible means more than translating a survey—it’s about trust, timing, and truly meeting people where they are. In this Ask Nicole post, we dig into what real accessibility looks like in community-centered research.

The post Ask Nicole: How Can We Make Research Participation More Accessible? appeared first on Nicole Clark Consulting.

Written by cplysy · Categorized: nicoleclark

Sep 02 2025

Virtual Community of Practice: Quick Launch Guide

Summary: This guide will walk you through five steps for launching a simple but effective virtual community of practice.

Running a successful Community of Practice can take a significant amount of time and effort. That is, if you want actual connections to form between community members. But contrary to popular belief, the biggest challenge is not technical. It’s also not about creating the learning content. The significant time and effort required is in the longer-term community building.

Bottom line, you can launch a CoP quickly. In this guide I’ll walk you through a set of five basic steps to get you started.

Step 1. Frame your community.

It won’t be an actual community until your people get to know each other. If they already do, great. If not, that will be your job.

Who are your people? What practice do they have in common? Be really specific about who is meant to be in your community, and who is not.

I run an evaluation community of practice for the CDC’s Overdose Data to Action (OD2A) program. The people in the community are the people charged with evaluating the OD2A program. You don’t have to call yourself an evaluator to be in the CoP. This is a known group. While people do come and go a bit, we already have most of their names and emails. This makes our community framing easier.

If you have to build your community first, start with a Google form. Ultimately you’ll need a list of names and emails (everything else is just icing on the cake).
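If you do start from a Google Form, the responses export as a CSV, and a short script can turn that export into a clean, de-duplicated mailing list. The sketch below is just an illustration: the file name and the "Name"/"Email" column headers are assumptions, so match them to your own form fields.

import csv

# Sketch: turn a Google Forms CSV export into a de-duplicated member list.
# "cop_signups.csv" and the "Name"/"Email" headers are assumed; adjust to your form.
members = {}
with open("cop_signups.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        email = row["Email"].strip().lower()
        if email:
            members[email] = row["Name"].strip()  # last response wins per address

for email, name in sorted(members.items()):
    print(f"{name} <{email}>")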

Step 2. Start with the most basic CoP platform.

I’ve tried just about all the well-known community platforms over the last 20 years (Slack, Buddypress, Circle, Discourse, Discord, Ning, Sharepoint, etc.). You know what the best platform is?

An email newsletter and a Zoom account.

That’s it. It’s cheap and effective. If you want to replace the Zoom account with a Teams account or something else, that’s fine. If you have a small group and just want to send regular emails, that’s fine too.

There are exceptions, but most professionals don’t need another website they have to visit. It’s far easier to get them to block an hour off their calendar each month than try to get them in the habit of visiting some kind of web platform they may or may not feel comfortable using.

Step 3. Meet regularly.

I suggest monthly. You can try staggering times if you need to, but it’s easier to just have a specific day and time each month (e.g., the third Wednesday at 2 PM Eastern).

Things can get complicated quickly, so at the very least, try not to start complicated.

Step 4. Recruit speakers from within your community.

The power of a community is in its members. One of the mistakes people make is spending a lot of time building a series of interesting lectures with outside experts that they think will be of interest to the community. But that’s not community building; it’s audience serving.

Having your community members deliver the session content is a win-win. Not only do you get highly practical instruction (which is what a CoP is designed to offer), you also get an opportunity to showcase individual members.

I like to approach my sessions in blocks. If I have 60 minutes, that’s three 20-minute blocks. This means we can have an intro (first 20) and two talks (15 minutes of talk and 5 minutes of Q&A each). Or we can have an intro (first 20), a talk (second 20), and some time for breakout groups (third 20).

Step 5. Encourage connection.

When we meet in person there are all sorts of opportunities for people to connect. People might connect by just sitting next to each other at a table or waiting together in the hall. It can help to have someone connect people, but it’s not always essential, especially if you have a chatty group or a few extroverts.

Now, if you meet virtually, that’s a different story. It’s really easy as a virtual attendee to take on a passive role. Perhaps you just watch the webinar while eating your lunch with your camera off.

It takes a lot more effort to forge connections between community members in a virtual setting. So you have to be intentional. Occasional breakout groups can help a little, but ultimately, the facilitator should play a kind of matchmaker role.

Last Thoughts.

I said at the beginning of this guide that building an effective virtual community of practice takes a lot of time and effort. It also takes a bit of emotional effort.

One of the hardest things about facilitating an online community is that most people instinctively lurk before they participate. It can be unnerving to lead a presentation without being able to see the faces of your audience.

But if you source your expertise from the community, you’ll start to get to know the people involved. I almost always have a 1-on-1 Zoom with potential speakers sometime before the event. Once you get to know the people in the community it becomes easier and easier to find future speakers and connect members with one another.

Want help launching or facilitating your Community of Practice? I help organizations build and run CoPs. You can learn more about my services by visiting my consulting page.

Written by cplysy · Categorized: freshspectrum

