
The May 13 Group

the next day for evaluation


cplysy

Sep 20 2021

But really, how do I use the RE-AIM Framework?

 

Early in my career as a consulting evaluator, I landed a major contract: evaluating a nationally funded, province-wide quality improvement program in health care. The funder specified that I evaluate using the RE-AIM framework. 

Enter: Googling about RE-AIM. 

I know we’ve all had to-do items either personally or professionally where we just want someone to “tell me what to do.” Sure, I can read all the peer-reviewed articles and evaluation textbooks, but this framework has been used in projects all over the world – hasn’t someone put together the “RE-AIM for Dummies” book? Surely someone somewhere can point me in the direction of the first steps and key lessons learned. If it existed at the time, I didn’t find it. 

Years later, I was working on an academic research team using Framework Analysis to analyze a huge set of qualitative data. Same scenario: someone please just tell me how to start and what to do! The difference this time was that someone had: I found it, it did exist, and it was awesome. Parkinson et al. had published a detailed description of their use of Framework Analysis, complete with missteps, backtracking, and all (Parkinson, Eatough, Holmes, Stapley, & Midgley, 2015). 

I love these experiential descriptions. I love reading about moving from knowledge gathering to action. I love sharing failures. What better way to learn? 

So, having used RE-AIM a handful of times on a few major initiatives, here is my account of how to use the RE-AIM framework in your evaluation planning, implementation, and reporting. 


What is RE-AIM?

In quick summary, for those less familiar, RE-AIM was originally developed to assess the public health impact of interventions, based on five domains: 

  • R – REACH 

  • E – EFFECTIVENESS 

  • A – ADOPTION 

  • I – IMPLEMENTATION 

  • M – MAINTENANCE 

RE-AIM provides great structure for a well-defined intervention, but it may not pair well with something like Developmental Evaluation. It can certainly be used with a Utilization-Focused Evaluation approach.  


Step 1: Do some basic research.

I recommend the following: 

  • Where it all started, the first paper to describe RE-AIM. 

  • The academic version of this article, describing what it means to use RE-AIM.  

  • A more recent update, describing the evolution and application of RE-AIM in 2019. 

  • And, lucky readers, now there is a full website dedicated to RE-AIM, complete with a comprehensive list of resources. 

I did say I was going to be more practical than just assigning you all the academic background reading I was so desperately trying to avoid years ago. Truthfully, though, you can’t get around needing some background knowledge. My goal is to summarize the content on the RE-AIM website and also share some learnings from my own experiences.  

Step 2: Build (and implement) your evaluation plan.

The good news is that the RE-AIM framework gives you your key evaluation questions. Of course, you can (and should) supplement and add detail. 

So, where RE-AIM says, “Have I reached my target population?” you may adapt to “How many clients participated?” or “How many patients had access to the program?”  

Where RE-AIM says, “Was my intervention effective?” you may add detail: “Did my intervention improve patient-reported outcome scores?” or “Did my intervention improve [insert primary outcome measure]?” 

I like to structure the data matrix section of my evaluation plan right around the RE-AIM domains, like in the table below. 

One thing to note early on: RE-AIM is a clever acronym, ordered so that it can be read and pronounced easily, but, in my opinion, it’s a little misleading, implying an order of operations where there is none. In my experience, it plays out more like ARIEM, a little less catchy. (Interestingly, while researching this article I found a small footnote on the RE-AIM website that admits exactly this!) 

RE-AIM can be effective in the actual planning of the intervention. As an evaluator, I always advocate for being part of the design team. With RE-AIM your goal is to get the team thinking about these five dimensions: 

  • How will they be recruiting (Reach/Adoption)?

  • Will it be representative (Reach/Adoption)?

  • What are their goals or outcome measures (Effectiveness)?

  • Do they have a clear plan of how they will achieve those goals (Implementation)? And so on. 

I think RE-AIM lends itself well to formative and summative evaluations. Several times I have drafted formative and summative evaluation plans for a single project by splitting up the RE-AIM acronym.

Adoption, Reach, and Implementation are those things in an intervention that can be course-corrected. Think of these as your process evaluation metrics. If you aren’t reaching your target population, don’t have organizational adoption, or aren’t implementing according to plan, you won’t be effective or maintain anything of worth. This is your formative evaluation. Then, Effectiveness and Maintenance can assess outcomes and sustainability as part of your summative evaluation. 

REACH:

Reach is often just a count, but it can be supplemented with qualitative data for a deeper understanding. 

e.g., We designed a training program aimed at Grade One teachers. Our city has 500 teachers and 286 participated in our training. Our reach was 286, or 57%. 

You could then go on to describe the demographics and how they differed (or not) from those who did not participate. Be sure to be clear about any inclusion/exclusion criteria! Reach is also where you can include questions that address access, equity, diversity, and inclusion: are participants representative of the population? Are we reaching those who would benefit most from the intervention? 
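The reach arithmetic from the teacher-training example above can be sketched as a simple coverage ratio. This is only an illustration of the calculation (the function name and figures come from the hypothetical example in this post, not from any RE-AIM tooling; the article rounds 57.2% to 57%):

```python
def reach_percent(participants: int, target_population: int) -> float:
    """Reach expressed as a percentage of the target population."""
    if target_population <= 0:
        raise ValueError("target population must be positive")
    return 100 * participants / target_population

# Grade One teacher-training example: 286 of 500 teachers participated.
print(f"Reach: {reach_percent(286, 500):.1f}%")  # prints "Reach: 57.2%"
```

The same ratio works for Adoption at the organizational level, e.g. `reach_percent(120, 300)` gives the 40% adoption from the schools example later in this post.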

EFFECTIVENESS:

Think of this as a “traditional” evaluation – this is: Did the program work? and What difference did it make? This domain is where you will report outcome measures. Any number of methodologies would be appropriate here, depending on your specific intervention. Effectiveness may well be the bulkiest section of your evaluation plan. As in any evaluation, triangulation is a good idea to aim for. 

e.g., Our training program had a goal of training teachers to use a new method of teaching reading to Grade One students. Our effectiveness measures may include: # of trained teachers using the method (process measure or output) and % of students with improved reading skills (outcome measure), or the actual % improvement in reading score. 

Side note: RE-AIM is not mutually exclusive with other frameworks. I have often, in the evaluation of training programs, embedded the Kirkpatrick evaluation framework into the “E” and “I” of RE-AIM. The RE-AIM website actually recommends layering with PRISM. 

ADOPTION: 

It is easy to confuse reach and adoption. I struggled with this at first. For me, it helps to think of them as the same concept but at different setting levels: Reach is about individuals or participants, whereas Adoption is about groups or organizations. Adoption is asking: What organizational support do you have? So, similar to Reach, this is likely also a count. And, like Reach, you can supplement with additional data for a deeper understanding. 

e.g., How many schools supported teachers to participate in the training? How many school boards supported those schools? Our city has 300 schools; 120 supported teachers to participate: 40% adoption. Our analysis showed an underrepresentation of rural schools and an overrepresentation of inner-city schools. 

Like Reach, you could go on to describe characteristics of these organizations and how they supported the initiative. I have often used Adoption formatively to understand why these organizations endorsed the project or bought in. This exploration can help with spread and scale, or, if things aren’t going well, it is a great way to course correct. I have also included interviews or focus groups with the organizations that did not engage, to understand key barriers. 

IMPLEMENTATION:

Implementation is huge; Implementation Science is an entire field of its own. In a RE-AIM evaluation, you are primarily concerned with fidelity to the plan: Was the intervention implemented as intended? What adaptations were made? How consistent was implementation? Completion rates may also be an appropriate measure here. 

e.g., Interviews or surveys with the trainers identified barriers and enablers for successful training sessions. Interviews or surveys with the operational team identified barriers and enablers for recruitment, training the trainers, developing curriculum, building engagement and buy-in, etc.  

You could certainly layer any number of implementation science frameworks on here, but this is likely not the key area of interest for your stakeholders, and doing so would make the evaluation unwieldy and hard to manage. A key tip: consider how each level contributes to implementation. What did the adopting organizations do? What did your organization do? Don’t forget that your own design and operational team are key data sources too! 

Your “results” here are likely descriptions of barriers and enablers along with formative lessons learned and resultant adaptations. 

MAINTENANCE:

I’ll be honest: I have definitely turned in a final evaluation report before the program reached a stage where maintenance could be assessed. New initiatives tend to focus on implementation and first-round outcomes. I have, however, been fortunate enough that this hasn’t always been true. In one initiative, I used annual data reviews to look at maintenance of outcomes. We were happy to see maintenance, but we also learned that there was a significant plateau, or ceiling effect, in both outcomes and reach. This isn’t a huge surprise given what we know about the Diffusion of Innovation. As an evaluator, I could then facilitate discussions like: How will (or should) we attempt to reach those laggards? Will they take up 80% of the resources? 

In fact, this is an example of why applying a framework to your evaluation is helpful. If you build in a maintenance evaluation from the start, your team will know that this is planned and you will have the capacity to do the work when the time comes. 

So, if you are fortunate enough to be able to evaluate maintenance, it is likely a repetition of many of the measures that came before. You may look at ongoing reach and adoption: have you plateaued or continued to spread? You may look at effectiveness outcomes: have you sustained the gains you made? 

e.g., Annual check-in of reading scores in Grade One (and now Two) children. Updated participation counts to assess spread and scale. 

There is a handy Checklist of key questions and considerations for each domain. It’s also worth noting that nothing requires you to evaluate all five dimensions. I say that begrudgingly, though, because RE-AIM was developed so that evaluators wouldn’t overlook dimensions essential to program success. But sometimes there are valid reasons that one of these dimensions may not be relevant for your intervention.   

Step 3: Reporting

The RE-AIM website asks you to consider quantifying or scoring the five dimensions for a visual display:

I’ve never done this. This may be informative to you, as the evaluator, but I find that most stakeholders are less interested in the details of the evaluation framework you’ve applied and more interested in the “So what? Now what?” I certainly have used the RE-AIM structure to guide my reporting, but I don’t think it’s required. The key here is to know your audience – how aware are they of RE-AIM? If you were involved in the planning and they built key evaluation questions into the RE-AIM framework, using the five dimensions in your reporting may be appropriate, but in my experience, you can also draft a really great evaluation report that was based on RE-AIM without being tied to the domains as your section titles. Eval Academy has some great articles on how to draft that killer evaluation report.


Things have changed in the 10 years since I first used RE-AIM. Many more examples have been published, and much more content is available. My goal today was to synthesize the key points in one place for you, and to share some lessons from my own experiences. I have found RE-AIM to be both highly structured, providing directed guidance, and flexible enough to let you explore in greater depth the key areas of interest for your evaluation. 

So, hopefully, you aren’t as in the dark as I felt when I was first tasked with using RE-AIM. It’s one of many tools for evaluators to consider and one that I’ve had lots of success with! If you want to talk more about whether or how to use RE-AIM in your next evaluation project, consider booking some time with one of our evaluation coaches.  

Speak with an Evaluation Coach




 

Written by cplysy · Categorized: evalacademy

Sep 15 2021

Evaluador/a no hay camino, se hace camino al andar (Evaluator, there is no path; the path is made by walking)


Falling by the wayside means leaving this field of evaluation.

Over my professional career in evaluation, perhaps like many other evaluators who did not fall by the wayside, I have moved through a no-man’s land, and I wonder whether I am seen as a “pragmatist” by academia and a “theorist” by practitioners. It still amuses me that the “professionals” in the field (the ones called “practitioners”) use “theorist” as a put-down. With half a smile I think: “there is no better practice than a good theory”… and yet, you who call me a theorist, from your practice I can extract no theory but chaos theory… and of course we had better not even mention complexity theory to those two-bit practitioners: if we do, we will be labelled, on top of everything else, convoluted, tangled, and confusing…

For better or worse, I have worked in many different settings, always connected to evaluation (as a researcher, a manager, or an external evaluator): universities, public administration, international organizations, local organizations, independent consulting. That “traveller, there is no path; the path is made by walking” (whether chosen, imposed, or both, I am not sure) has made me a nomad: ambulant, errant, itinerant, wandering… and that has given me the chance to strengthen my independence.

Back when we started, with no established professional career track, the route to becoming an evaluator was more an exercise in chance and tenacity, or both, than a safe bet with a calculated payoff. But then as now, with tenacity and will, aspiring to quality, to learning, and to constant improvement (and trying to surround yourself with fellow evaluators), you can stay on the path of evaluation. What is less clear is whether it is worth setting yourself a safe or fixed destination; there are places where arriving still depends on chance, or on other forces much like it…

Written by cplysy · Categorized: TripleAD

Sep 15 2021

Comment on “El arte o la industria de la evaluación” by NATALIA LEÓN RAMOS

Very good! It’s a catharsis for many of us.


Written by cplysy · Categorized: TripleAD

Sep 15 2021

Find Excel Chart Formatting Annoying? Do this instead.

Ever get annoyed with formatting after you create a chart in Excel?

It’s pretty easy to create a chart in Excel. A couple of button clicks and poof, there you go, chart created. But if you want to create a chart that actually looks good, you often have to do a bit more work. For example…line graphs…scatter plots.

But sometimes, formatting in Excel just makes you want to…uh, how do I put this?

Throw your computer out the window.

freshspectrum cartoon by Chris Lysy.  "How long do you think it will take to write the report? 5 days?"
"To be safe. I'll have the report written in a few hours. I'll need the rest of the time to format the charts in Excel."

Lucky for us though, just because we created our chart in Excel, doesn’t mean we have to stay there to format the thing.

Believe it or not, Excel is a vector design tool. And even though it’s hard to take advantage of that while using Excel (stupid textboxes won’t go where I want them to go!!!!!) we can take control by sending our chart outside of Microsoft.

How to turn any Excel Chart into an SVG infographic.

Start with a Chart in Excel

So let’s start with a chart in Excel.

Just right click on the chart you want to format and click on “Save as Picture…”

Screenshot. Clicking on a chart in Excel and saving as a picture.

Export as Scalable Vector Graphics (A.K.A. an SVG)

Now when we save it we want to save it as a vector format (or else this won’t work). With my PC Office 365 version of Excel I’m given 6 options.

Of the six, five are pixel-based (a.k.a. raster): PNG, JPG, GIF, TIF, and BMP.

But number 6 is the trusty Scalable Vector Graphics format…SVG. This is our vector file format.

Screenshot, saving an Excel chart as an SVG.

Open your SVG in a graphics program.

Now that we’ve saved our SVG somewhere on our computer (or somewhere else) we need a program that works with SVGs.

Luckily there are a bunch that fit the bill. Here are three big ones.

  • Adobe Illustrator (ye olde graphic design industry standard that requires a pro creative cloud account)
  • Figma (new fangled UI design tool that is both a pro tool and FREE to use)
  • Adobe XD (also a new fangled UI design tool that is both a pro tool and FREE to use by the same people who brought you ye olde graphic design industry standard Adobe Illustrator)

Any of these three will let you pick apart and redesign this chart. If you are not an Adobe CC person already, I would suggest starting with either Figma or Adobe XD.

Here is what the Excel Chart looks like in Adobe Illustrator

Screenshot of Adobe Illustrator with an SVG Excel chart.

See that over there on the right side? That’s a layers panel.

Every single element in an Excel chart can be isolated and changed through the SVG. You’re also going to find a bunch of empty rectangles that can be deleted away if they get in your way.
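Because the exported file is just XML, you don’t even need a design tool for simple batch tweaks. As a minimal sketch (the SVG below is a hand-made stand-in for an exported chart, not Excel’s actual output, which is larger but structurally similar: groups of paths and rects), Python’s standard library can isolate and restyle elements:

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for an exported chart: two bars as <rect> elements.
svg = """<svg xmlns="http://www.w3.org/2000/svg" width="120" height="80">
  <g id="bars">
    <rect x="10" y="30" width="20" height="50" fill="#4472C4"/>
    <rect x="40" y="10" width="20" height="70" fill="#4472C4"/>
  </g>
</svg>"""

SVG_NS = "{http://www.w3.org/2000/svg}"
ET.register_namespace("", "http://www.w3.org/2000/svg")  # keep output free of ns0: prefixes
root = ET.fromstring(svg)

# Isolate individual elements and restyle them, as a design tool would.
for bar in root.iter(f"{SVG_NS}rect"):
    bar.set("fill", "#ED7D31")  # recolor every bar

print(ET.tostring(root, encoding="unicode"))
```

The same idea extends to deleting those empty rectangles mentioned above, once you know which attributes identify them.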

Here is what the Excel Chart looks like in Figma

Here we are in Figma with the layers on the left.

Screenshot of Figma with an SVG Excel chart.

Here is what the Excel Chart looks like in Adobe XD

And here we are in Adobe XD. Looks pretty similar to Figma, doesn’t it?

(See also Sketch.)

Screenshot of Adobe XD with an SVG Excel chart.

Pick it apart and rearrange to your heart’s content.

Ultimately, the bar graph is just a single vector path buried within one of the groups. We can stretch it or recolor it. As long as you keep the main chart pieces together, the graph will keep the right proportions.

Screenshot of an infographic created from an Excel chart.

The best part about using a tool like this? We have total control over what goes where. And we can shift elements pixel by pixel.

Freshspectrum Infographic 
-Stop getting annoyed with formatting in Excel. 
-Just because you used Excel to create the charts doesn't mean you have to stay in Excel to format.
-Two charts, one generic Excel chart and the other an updated version created with Adobe XD
Started with a fake chart, ended with a random infographic.

Bonus. Want to save your new infographic as a PDF?

No problem.

Screenshot of Adobe XD, exporting an infographic as a PDF.

Written by cplysy · Categorized: freshspectrum

Sep 14 2021

El arte o la industria de la evaluación (The art or the industry of evaluation)


Evaluation has been, and in some corners still is, an art practiced by artisans, at times dismissed as charlatans, mouthpieces, chatterboxes, potion-peddlers, loudmouths, big talkers, but at other times appreciated and honored as magicians, witches, sorcerers, conjurers, necromancers, mediums (channeling hidden evaluative objectives), fakirs (yes, because they will swallow anything), miracle-workers, prestidigitators (more than twenty questions), illusionists (especially the ones who manage a recommendation nobody already knew).

And as with magicians, there were, and are, good ones and not-so-good ones. For better or for worse, this arcane art is gradually opening up and spreading: it is good that more and more evaluation is being done (even if only in volume). At the pace of an “enefante” (a coined word, “half elephant, half dwarf”), a professional career path is taking shape. A rhetoric of evaluation is being built, of what must be done “because it is the right thing” (yes, we are still explaining why we should evaluate, instead of evaluating). And although almost everything that has been talked about remains to be walked (walk the talk), the positive thing is that we are on our way. I also wonder whether, at the same time, the field risks turning into an industrial process, like the famous cut-and-paste logical framework so many of us have written for donors in solitude (solitaire), instead of a process of learning and participatory construction. The problem is not the tools, but the use we make of them, their true purpose. And in the end, like everything that spreads and becomes popular, it turns into a business (one more)… and in business anything is possible (for better and for worse).

Having been in countries with no industry and no access to basic goods, I am not saying that industrial development is bad, only that this new scenario has pros and cons. Perhaps the challenge for the future is for the field of evaluation to keep its heart (and its art) and to value participation, while gaining in coverage and sophistication (and inflating, if that is even possible, the bureaucratic rhetoric further still).

Written by cplysy · Categorized: TripleAD



Copyright © 2026 · The May 13 Group · Log in
