
The May 13 Group

the next day for evaluation


cplysy

Mar 13 2024

Evaluation versus Measurement

Today’s post started as a comic request and turned into a Q&A.

Here is the question that came to me from Randi Knox.

I’m looking for a comic to communicate the difference between program monitoring vs program evaluation. I didn’t see anything specific to this in your existing materials. I was wondering if you’d be open to making a comic for this purpose?

It’s certainly a topic that I haven’t fully delved into, but I did think of one comic from a couple of years ago.

But I think the question is a good one, and I wanted a little more inspiration. So I asked Randi a couple of follow-ups. Here is what she said.

I’m a relatively new internal evaluator in a department that recently rediscovered the joys of evaluation. I feel fortunate to work with a team of folks who are eager to evaluate, but I also get the sense that ‘evaluation’ is a loaded word for many team members. Some tend to call everything ‘evaluation’ and assume all data collection is for ‘evaluation,’ when this is not necessarily the case.

In considering how to create a shared understanding among team members, I thought it could be helpful to adopt the term monitoring as a less threatening, helpful, and natural part of program implementation and management. I also expect differentiating monitoring and evaluation could help decrease evaluation anxiety. So now I’m challenged to clarify what I mean by each of these terms.

Here are the comics the conversation inspired.

“Some tend to call everything ‘evaluation’ and assume all data collection is for ‘evaluation,’ when this is not necessarily the case.”

“I also expect differentiating monitoring and evaluation could help decrease evaluation anxiety.”

“So now I’m challenged to clarify what I mean by each of these terms.”

I did a bit of internet searching in the hope of finding a really good explanation of the differences. But what I found was all a little bit too jargony to be useful.

I focused on monitoring because I already have a fair number of comics designed to define evaluation. In these kinds of situations I usually fall back on metaphor. What could be fitting, or silly enough, to communicate the definition of monitoring? You’ll find the following two attempts to fit that description.

Here is a speedometer metaphor.

And this one is the silly one 🙂

Do you have a good way of describing the differences between monitoring and evaluation?

I don’t think I’ve cracked this one yet, so I would love to hear it. Let me know in the comments.

Randi Knox is a Supervisor of Research Evaluation & Program Management at Boys Town National Research Hospital in Omaha, Nebraska. If you want to connect with Randi, you can find her on LinkedIn.

Written by cplysy · Categorized: freshspectrum

Mar 06 2024

Canva Templates – Inspired by the Nature Conservancy

Trying out a new series this week. The idea is simple: I find some inspiration and then use it to create a set of Canva report page templates.

If you keep your eyes open, design inspiration is everywhere. Mailers, posters, textbooks, magazines, websites, and social media are just a handful of potential inspiration sources.

This week’s inspiration comes from one of those little “magazines” that shows up in your mailbox after giving to a charity. This one is from the Nature Conservancy. Let’s dig in and see what we can find. You’ll find the Canva template link for all of these at the bottom of the post.

Sidebar with multi-photo spread.

This spread is something you see in magazines all the time. The written article gets only a sidebar; the rest of the page is made up of a photo grid.

Yes, words are important. But the more important the words, the less space you should give them on a page, because that makes it much more likely someone will actually read them.

Another cool thing about this kind of spread is that you can use anything in those visual spots. Want to share a collection of charts instead of photos? Go for it.

The little infographics spread.

I saw this spread used multiple times in the magazine. Basically, a big photo takes up the top half of the page. The right two-thirds of the bottom half is an article, and the left third is a little miniature infographic.

This would be a great opportunity for a sound-bite-style data point or simple chart. The big picture at the top draws you in, the infographic delivers a point, and the article expands upon it.

Big Q little a.

Q&A posts are standard fare for so many magazines.

This spread continues the trend of only using half the page for written content. The big Q at the top lets you know it’s a Q&A piece.

I think this kind of thing would work really well in many reports. Find a stakeholder (partner, participant, program staff member, etc.) and do a quick Q&A. Not only would this split up the narrative in a logical way, but it’s also a nice way to incorporate other voices into your reporting.

People Stories.

When you can do it, having actual pictures of interviewees is a really nice way to bring a human element into your reporting. A lot of qualitative reporting tends to fall into summary style with lots of quotes and arranged by topic or code.

But simply giving space for each participant’s individual story is a super easy way to go. Then you can bring all the pictures together at the beginning of the story as a collage to set the scene.

Timeline graphic

I like how this timeline was laid out. It’s really simple: the step that is the focus of the article gets a white background with color objects in the foreground, while everything else gets muted backgrounds and sepia tones. The focus section in the example also gets a little more horizontal space than the others, and since it depicts an evolution there is even a little overlap of pictures between phases.

There are all sorts of places for simple timelines in most reports. Especially when a project has particular phases.

In my template I decided to convert the pics to black and white and make them a little transparent to pull in some background color blocks. I also removed the background on the phase I want to highlight. If this were a real report, you could have a page for each individual phase, applying the same treatment to isolate each block.

Want the Canva templates?

I stopped worrying about whether I was using pro stock or not. I’m a Pro user, but you can make any template free by simply replacing premium content with free elements or uploaded content. For most of the individual pages it will be an easy switch.

Also, make sure you sign into your Canva account BEFORE clicking the template link.

Written by cplysy · Categorized: freshspectrum

Mar 06 2024

Ask Nicole: The Role of Social Workers in Reproductive Justice

Have a question you’d like to be featured? Let me know. March is Social Work Month, and the 2024 theme is “Empowering Social Workers!: Inspiring Action, Leading Change.” If you’ve been reading my blog for a while, you’ll know that my passion area is Reproductive Justice, and how the framework looks through a social work lens. […]

The post Ask Nicole: The Role of Social Workers in Reproductive Justice appeared first on Nicole Clark Consulting.

Written by cplysy · Categorized: nicoleclark

Mar 06 2024

Ask Nicole: Which Social Work Level Is Best for Practicing Reproductive Justice?

Have a question you’d like to be featured? Let me know. Raise Your Voice: In the comments section below, […]

The post Ask Nicole: Which Social Work Level Is Best for Practicing Reproductive Justice? appeared first on Nicole Clark Consulting.

Written by cplysy · Categorized: nicoleclark

Mar 04 2024

Evaluating for Spread and Scale


These two words go together so nicely, “spread and scale”; they kind of roll off the tongue. I wonder how many of us would struggle to define or, perhaps more importantly, distinguish the two.

Earlier in my career, I was part of a team looking to publish an article about a quality improvement initiative. Throughout the article, I discussed measurement and evaluation for “spread and scale”. One of the journal reviewers challenged me, “Do you actually mean both spread and scale? How are you measuring for each of these?” It was then that I realized I hadn’t put much thought into what this almost-one-word phrase “spreadandscale” actually meant!

I think this is relevant to evaluation because many clients are interested in evaluating for spread, scale, or both. It’s important that we understand the differences and how evaluation can support and guide their decision-making and actions.

So, let’s start with definitions.


Spread is to replicate in a different location. Think horizontal flow. You may pilot a new program at one site and, upon its success, spread it to your other sites, but the implementation of that program is essentially the same.

Example: One ward in a hospital trials a new process for patient care. After its success, another ward in the hospital implements the new process. The process has spread to another department; it is being implemented in a new location, and likely with a new population, but the implementation is the same.


Scale is to build the infrastructure for implementation at a new higher level. Think vertical flow. If you want to implement a new process across an entire system, you would need to embed new policies, build training opportunities, set accountabilities, etc.

Example: A healthcare system wants to use a new record system. It needs to determine what hardware and software procurement is required, how to train the staff (possibly via a train-the-trainer model), and how to update all policies related to record keeping.


It’s easy to get confused. It’s possible for a program to employ both spread and scale.

Example: A healthcare system pilots a new system in one hospital. After its success, they spread it to all hospitals and develop accompanying policies and protocols to embed the new system across the entire health system, thereby scaling the pilot.

To further add to our confusion, spread and scale DO have similarities:

  • both are expansions

  • both often follow a pilot or trial

  • both are important for quality improvement efforts

  • both can be the reason for an evaluation!

The key, for me, is to look for that policy change or system-level change that is foundational to scale. Is the program intending to embed the program into its new way of working across all sites, or are they spreading to a few sites where they think this might also be a value-add?

So, what does all this mean for evaluation?


If you’re evaluating a program that intends to spread:

  • Focus on fidelity. Review the implementation plan and then evaluate what actually happened. Understanding the variance will help this program spread successfully. Questions to ask may include:

    • Was the program implemented as intended? (Pro tip: the RE-AIM framework might come in handy here!)

    • What worked well, and what didn’t?

    • How did the context/environment play a role?

  • Identify what changes or adaptations were required for implementation. Understanding necessary changes produces a sort of prerequisite list that can be used to determine whether implementation at other sites or with other populations is likely to be successful. Implementation Science is likely your friend here, identifying key domains for implementation, including intervention characteristics, communication processes, readiness for change, and planning and execution. Some questions to ask may include:

    • What staff/human resources are required?

    • How did responsibilities/accountabilities change?

    • What are the key barriers or enablers for implementation?

  • Of course, program effectiveness is still important. There’s a good chance an evaluation is being completed to determine if the program should spread. In that case, key outcome evaluation questions are very relevant:

    • To what extent is the program achieving what was intended?


If you’re evaluating a program that intends to scale:

  • Identify policy and system-level changes. Scaling a program requires that changes be embedded at the system level. These could be fiscal/financial enablers, network/relationship enablers, or environmental enablers. Identifying these changes, or at least the plan for them, will help determine whether scaling is likely to be effective.

  • Identify accountability structures and staff capacity. Scaling a program may require new roles, new training, new management structures, or even entirely new teams to oversee the program. Again, identifying these changes, or at least the plan for them, will help determine whether scaling is likely to be effective.

  • Effectiveness: is the program achieving what it intended? As with spread, determining program effectiveness is still a foundation for deciding whether the program should be scaled in the first place.


In my experience, most program evaluations or pilots are looking for spread; they want to make a small investment to test a new program or process with a smaller group to determine if it should eventually be spread to other sites or departments. An evaluation for spread may involve a formative assessment (how is the pilot implementation going?), an outcome evaluation of the pilot (what was achieved?), and maybe even an evaluation of the spread itself. There are lots of opportunities for an evaluation!


Many validated tools can help you assess readiness for spread. It’s not a literature base I’m very familiar with. If you have a favourite, share it with me in the comments below!


Written by cplysy · Categorized: evalacademy



Copyright © 2026 · The May 13 Group · Log in
