
The May 13 Group

the next day for evaluation


Mar 25 2024

The Frustration of Searching for Evaluation Content

If you are an evaluator, or someone interested in learning more about evaluation, you might have experienced the frustration of searching for evaluation-related content online. The word evaluation is used in so many different contexts and industries that it can be hard to filter out the noise and find the information you’re looking for.

The term ‘evaluation’ is a common denominator in numerous fields. From education to healthcare, from technology to arts, ‘evaluation’ is a universal process of assessing, measuring, and judging. It’s a critical component of decision-making, improvement, and advancement in any industry. In education, we evaluate students’ performance. In healthcare, we evaluate patients’ health conditions. In technology, we evaluate software performance. In arts, we evaluate the aesthetic appeal of a piece. The list goes on. The omnipresence of ‘evaluation’ in our vocabulary is a testament to its importance, but it also creates a significant challenge when searching for specific ‘evaluation’ content online.

I recently started following #evaluation on some social media accounts. Here’s some of the content I get that’s not at all about the professional development or learning opportunities I’d hoped for:


The problem gets even worse if you try to do some job searching. Lots of people have “evaluation” of something in their job description. Case in point:

And if you happen to be an evaluation consultant looking for RFPs for evaluation contracts, it can be nearly impossible. Nearly every RFP in every field makes mention of the “RFP evaluation process”, thus wiping out your keyword search term in one fell swoop!


How can we improve the searchability of evaluation content?

As evaluators, we can do our part to improve the searchability of evaluation content online. Here are some suggestions:

  • Use specific and descriptive keywords when creating or sharing evaluation content, such as “program evaluation”, or “outcome evaluation”  

  • Use hashtags, tags, or categories to label and organize evaluation content on social media platforms, blogs, or websites

  • Join and follow online communities, networks, or groups that are dedicated to evaluation, such as your local Evaluation Association

  • Subscribe to newsletters, podcasts, or blogs that feature evaluation content (hint: have you signed up for our Newsletter? Scroll to the bottom of this page to sign up!)

  • Attend webinars and workshops by evaluation experts

  • Share and recommend evaluation content that you find useful, interesting, or relevant with your colleagues, friends, or followers

  • Follow your favourite evaluators on social media!


I’ve spent a bit of time in my career explaining to clients that evaluation is not the same as research, and yet sometimes searching for “research” content may generate better results than “evaluation” content. It seems unfair!

As more evaluation associations offer credentialing and add to the professionalization of our world, it may become easier to find evaluation content. More universities and colleges are offering programs directly in evaluation, which further adds credibility to the role and field. I heard somewhere that evaluation is the fastest-growing field that no one has heard of. Perhaps as evaluation moves more into the spotlight, searching for content will be easier.


What is your solution for finding quality evaluation content? Let us know in the comments below!

Written by cplysy · Categorized: evalacademy

Mar 25 2024

Playing the Fool: Why Asking a Few Silly Questions Makes You a Better Evaluator

As evaluators, it’s our job to ask questions. When I tell people about what my company does, I tell them we help organizations that do good, to do better, by asking the right questions and answering them. Through asking and answering questions, we frame our evaluation projects, unearth data, and share helpful insights and recommendations.

Many of us arrive in evaluation because of our penchant for asking questions. And even though we’ve all been told “there are no dumb questions,” sometimes it feels like we really should know the answer already. It can be intimidating to speak up and admit your lack of knowledge or understanding.

But evaluators need to get comfortable asking questions, silly or not. Ever heard the term “playing the fool?” The idea behind that phrase is that if someone behaves in a silly way, or repeatedly asks silly questions, they’ll be seen as, well, silly. They won’t be taken seriously. What evaluator, whether an internal employee or contracted consultant, wants to look silly? Well, there are some important reasons to consider asking some silly questions at your next evaluation meeting.


Michael Quinn Patton has likened the role of the evaluator to that of the court jester, also called a fool. The jester’s role in English courts was to entertain, and they had the special privilege of being immune from punishment for what they said. Their unique role enabled them to question, to bear bad news, or to present new or unwelcome perspectives without fear of reprisal. Personally, I’ve latched on to this metaphor and love the idea that evaluators can occupy a special position, speaking truth to power without (too much) worry for their position.


Here are a few situations where thinking of yourself as a court jester, playing the fool, can help you to be a great evaluator.

1. When understanding how an initiative operates

As an outsider, you probably don’t have all the details of the initiative you’re evaluating. Even if you are an internal evaluator, there may be some aspects of a program that you’re not familiar with, or don’t make sense. By positioning yourself as a person in need of educating, you create an environment in which those important details can be shared. In a project kick-off meeting, I like to say very directly that I will probably ask some silly questions and ask the team to help me learn.

2. When certain questions aren’t being asked, but should be

In setting the context where I’m seen very clearly as a non-expert, I can also query the “why” behind puzzling aspects that other team members can’t safely ask about – but may very well also be questioning. After all, I, this silly outsider, shouldn’t understand why a process was set up in the way it was, or how a decision was made. But an internal team member who may also be wondering the same thing doesn’t necessarily have the same psychological safety to ask that very question. By playing the fool here, I can ask about that “elephant in the room” that nobody seems to be addressing.

3. When the group needs to know it’s safe to query

Not every workplace operates with psychological safety. In some settings, the organizational culture is such that people are afraid to fail, or they have a very realistic concern that their job would be at risk if they asked too many questions. An evaluation project requires trust and honesty; when those qualities are absent from an evaluation, the project risks being unable to fulfill its purpose. As an evaluator, you can be very intentional about modelling question-asking and encourage the team you’re working with to also speak up when something doesn’t make sense.

4. When important but unwelcome insights need to be shared

We always hope that evaluation projects are driven by a true desire to learn and improve. That’s not always the case, though. As much as we prepare clients for the possibility of both positive and negative outcomes, and potentially scary recommendations, they’re not always ready to hear those findings. I note that with great empathy – as a business owner myself, I wouldn’t be super excited to hear that things aren’t going well and major adjustments are needed, either. It’s tough to hear that you may have been misdirecting your efforts. But in the best interests of all, those insights do need to be addressed. You can lead with your inner fool by being curious, vulnerable, and creating that safe space for receiving unwelcome news. A bit of well-placed humour can help to reduce defensiveness and bring a bit of joy to an otherwise dismal event.


Now don’t take this imagery too far! You don’t need to intentionally appear less intelligent than you are, and you don’t need to wear a goofy hat with bells on it. I’m not suggesting that you ask outright stupid questions (not that you would anyway). Nobody said that the court jester was dumb – far from it, that royal fool was full of crafty insights and clever language (just like you!). You know that your “silly questions” are actually a very intentional way to get people talking about the important things. As a skilled court jester, you’re carefully navigating humility and professionalism to create a safe environment for constructive conversation. And your evaluation project will be better for it.

Written by cplysy · Categorized: evalacademy

Mar 18 2024

My Cartoon Illustration Process – Realist Evaluation Comics

Back in 2017 I was commissioned by the RAMESES II project (funded by *NIHR) to draw a series of cartoons on realist evaluation.  They have been made available for royalty-free use at ramesesproject.org, along with a collection of other realist evaluation resources.

In this blog post I want to take you through my cartoon illustration process using this project as an example.  The cartoons were created through direct collaboration with the wonderful Joanne Greenhalgh and Ana Manzano of the University of Leeds.  The full RAMESES II project team provided insights throughout the process.

*The RAMESES II project was funded by NIHR HS&DR 14/19/19. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

Starting with the Illustration Challenge

A cartoon illustration project always starts with something to illustrate.  In this case it was a series of briefs on realist evaluation.

The challenge was to develop a series of cartoons inspired by the content within the briefs.  To get a sense of my process, let’s focus on one of those briefs.

A realist understanding of programme fidelity [PDF].

Finding cartoon inspiration and narrowing the scope.

In any substantive piece of literature, brief, or written report there is usually a ton that could inspire a cartoon.

The challenge is to figure out the important pieces and the ones that create the best cartoons.  Good illustration is about supporting the words, not replacing the words.

I find that the best cartoon illustrations usually come from the challenges, problems, and confusions.  My philosophy is to illustrate the problem, and the reader will read the text in search of the solution.

To find the key challenges I turn to the experts.  Through conversation I engage them in the process of finding where to focus.  Ultimately we narrow the scope.

Here is one of those areas of narrow focus.  It certainly helped to inspire the cartoons I have shared below.

Common confusion: illusion of control (that you can standardise programmes and control the context into which they are implemented either within an evaluation or the real world). RCTs are ardent believers in this illusion and think it’s possible to create a ‘closed system’ by controlling everything about context so that you can have ‘programme on’ compared to ‘programme off’ and everything else is the same except the existence of the programme. This idea has been transferred from drug trials to complex social programmes.

Sketching out the concepts.

Before diving into drawing the final cartoon, I start with a pen and my notebook.

With this low-tech approach I can come up with more concepts than I could if I started with my iPad.  And if a cartoon doesn’t work, or the client doesn’t like a certain cartoon, we can cut it here instead of putting more effort into the process.

Refining and completing.

After drawing the sketches and sharing them with my client, we have another conversation.  This time we talk about the concepts (what do they like? what don’t they like?).

At this point we tweak and cut.  And sometimes the sketches lead to more concepts.  Cartoon illustration is an iterative process, but hopefully with each revision we get a better product.

Here are a couple of the final fidelity cartoons.  What do you think?

A note from Joanne

We really enjoyed working with Chris. Not only was it fun, but it challenged us to think about what we really wanted to communicate about realist methods. The idea of illustrating ideas or issues people found confusing or difficult really resonated with us, as that had been one of the main reasons for embarking on the whole project in the first place. It felt like a genuine collaboration and I’m really pleased with what we have produced.

Here are a few more of my favorite comics from the collaboration.

What a realist reviewer looks like.

Realist recipe.

The protocol says we go this way.

Written by cplysy · Categorized: freshspectrum

Mar 13 2024

A Meta Reflection on Equitable Communications: Behind-the-Scenes of Creating the Equitable Communications Guide

After researching ways to share our findings and reports with equity in mind, we realized there wasn’t a go-to resource for equitable communications in the evaluation field. Together, we were inspired to develop the Equitable Communications Guide. This guide is a resource designed for evaluators in the social sector, which has relevant lessons for anyone looking to improve their communications! The guide explores how to communicate equitably, center the experiences of others, and convey the meaning behind key messages.

Equitable communications refers to using evaluation reports and messages to counter dominant narratives, embrace inclusivity, and center marginalized peoples.

We are not experts in equitable communications. But some of us know what it feels like for others to speak on our behalf and misrepresent our identities and experiences. Others of us have perpetrated the same violence against others and want to do better. We decided to draw on the work and guidance of thought leaders in other sectors, along with our own experience, to write the content found in this guide.

Creating this guide was a collaborative design process among our current and former team members. We didn’t just want a guide that showed people how to communicate equitably, we wanted a guide that could model what equitable communications can look like.

This blog post is an inside look at the inspiration behind developing the guide and everything that went into the process.

Early Stages

This project began the way many projects do: with a big vision, but uncertainty about how to create it. Our efforts began when our amazing intern Aranzazu Jorquiera Johnson conducted research on equitable communications and brought in lessons from her own knowledge around diversity, equity, and inclusion. But we did not have time to come together to create a shared vision for what we wanted the final guide to look like and how it could be useful for an evaluation audience.

When Shelli Golson-Mickens added her leadership to the project she began by centering the audience of the guide from the start, using human-centered design processes. She led us through a journey mapping exercise that considered the many people who could read and benefit from the guide. Using this persona profile template, we created personas, identified their needs, and created ideas for how to design a guide that met them.

On reflection, we would have liked to bring these personas into our process throughout the writing stage. While we struggled with time constraints, we found that the process of creating personas allowed us to envision and create a more inclusive guide and led us to a simplified and visual structure.

Process Stage

Using Aranzazu’s research and conducting some of our own, we started coding for themes we saw within resources related to equitable communications. These codes became the guideposts and strategies that are the heart of the guide.

Before we started writing, our teammate and evaluator/illustrator Kayla Boisvert helped us to envision an effective layout and design for the guide. Remembering our personas, we wanted something that would be digestible, where someone could open to the section relevant to them in the moment and get the information they need. We also knew this guide would be an aggregator of resources: not being experts ourselves, we were translating information for an evaluator audience and bringing lessons together in one place. There were wonderful deep dives into specific topics like equitable data visualization and language justice that we wanted people to find through our guide. Kayla mocked up some design options that captured the lessons we wanted to share and identified key resources for people to learn more; one of these became the design you see in our guide today.

Early sketch by Kayla Boisvert for the guide.

Once we decided on the layout, Shelli and co-author Alissa Marchant divided the writing by theme so that we could bring lessons together from across resources in several fields, including marketing, research, and advocacy. It felt like an iterative process because we found the strategies overlapped one another. We had many conversations to discuss ways to share pithy themes without repeating information. It was also hard to stay narrow: ultimately, this guide is about communicating data findings and not about how to maintain open and transparent communications throughout an evaluation (which is also important!). There was too much to say, but at the same time, we felt limited by the bounds we placed on ourselves. Ultimately, having a deadline — the Evaluation 23 conference where we were presenting our findings — obliged us to narrow our scope and workshop the language with feedback from our team, fellow evaluators who are a primary audience for the guide.

Publication

Living our values from the guide, we knew that putting our pens (and digital markers!) down was just the beginning.

Our first step was making the guide accessible to people with disabilities. As sighted people, we quickly realized how our visual approach to the guide (perfect for a sighted audience) was challenging for audiences who have limited eyesight. Kayla spent hours writing detailed alternative texts for each visual, and unfortunately we later learned that Canva (the design platform we used to design the guide) was poorly equipped to make the document ADA compliant. (Canva is improving. Shout out to Chris Lysy, who details its pitfalls and keeps us up with Canva’s capabilities in this helpful blog post.) We ultimately hired a freelancer who specializes in ADA compliance to help make a final PDF of the guide more accessible.

Since the guide was published, we have been sharing information from within the guide in various ways. We are not just relying on the written word! We also shared the guide at Evaluation 23 and a recent webinar (watch the recording here), with more workshops in the works. (If you’re reading this before March 15, please join us for Talking Data Equity!) We are grateful for the partnership of Elizabeth Grim, an independent consultant who writes about non-violent language, and Jonathan Schwabish, an author of the Do No Harm Guides, who co-presented with us. Sharing together has allowed us to continue to learn about equitable communications through our collaborations.

Takeaways

Writing this guide helped us to internalize some of the lessons within the guide in a new way. We felt a shift in our own perspective from what we should say to being curious about how other people perceive our communications and working to understand their cultural perspectives. Rather than a right or wrong answer, communications has become an opportunity to learn and better understand the people we are working with.

We understand that knowing how to communicate equitably is different from doing it well. We are just starting to practice the strategies we captured in the guide, and still learning how to communicate about the importance of equitable communications and advocate for more resources to communicate equitably in our client projects.

Although we sought to be as thorough as possible when writing this guide, we recognize that our use of language changes as society continues to evolve. And we know that what we created may miss something! We have considered, and may still develop, a living document version of the guide where others can add their own insights as the world evolves and guidance changes. Until that time, please send us your thoughts and feedback on the guide in the comments here, or directly at info@innonet.org.

Thank you for learning alongside us! We look forward to your insights and continuing to learn together.


A Meta Reflection on Equitable Communications: Behind-the-Scenes of Creating the Equitable… was originally published in InnovationNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.

Written by cplysy · Categorized: innovationnet

Mar 13 2024

Evaluation versus Monitoring

Today’s post started as a comic request and turned into a Q&A.

Here is the question that came to me from Randi Knox.

I’m looking for a comic to communicate the difference between program monitoring vs program evaluation. I didn’t see anything specific to this in your existing materials. I was wondering if you’d be open to making a comic for this purpose?

It’s certainly a topic that I haven’t fully delved into, but I did think of one comic from a couple of years ago.

But I think the question is a good one, and I wanted a little more inspiration. So I asked Randi a couple of follow-ups. Here is what she said.

I’m a relatively new internal evaluator in a department that recently rediscovered the joys of evaluation. I feel fortunate to work with a team of folks who are eager to evaluate, but I also get the sense that ‘evaluation’ is a loaded word for many team members. Some tend to call everything ‘evaluation’ and assume all data collection is for ‘evaluation,’ when this is not necessarily the case.

In considering how to create a shared understanding among team members, I thought it could be helpful to adopt the term monitoring as a less threatening, helpful, and natural part of program implementation and management. I also expect differentiating monitoring and evaluation could help decrease evaluation anxiety. So now I’m challenged to clarify what I mean by each of these terms.

Here are the comics the conversation inspired.

“Some tend to call everything ‘evaluation’ and assume all data collection is for ‘evaluation,’ when this is not necessarily the case.”

“I also expect differentiating monitoring and evaluation could help decrease evaluation anxiety.”

“So now I’m challenged to clarify what I mean by each of these terms.”

I did a bit of internet searching in the hope of finding a really good explanation of the differences. But what I found was all a little bit too jargony to be useful.

I focused on monitoring because I already have a fair number of comics designed to define evaluation. In these kinds of situations I usually fall back on metaphor. What could be fitting, or silly enough, to communicate the definition of monitoring? You’ll find the following two as attempts to fit that description.

Here is a speedometer metaphor.

And this one is the silly one 🙂

Do you have a good way of describing the differences between monitoring and evaluation?

I don’t think I’ve cracked this one yet, so I would love to hear it. Let me know in the comments.

Randi Knox is a Supervisor of Research Evaluation & Program Management at Boys Town National Research Hospital in Omaha, Nebraska.  If you want to connect with Randi, you can find her on LinkedIn.

Written by cplysy · Categorized: freshspectrum

Copyright © 2026 · The May 13 Group