  • Home

The May 13 Group

the next day for evaluation


allblogs

Apr 20 2022

Evaluation Comics

Big news! I have an evaluation comics page now.

So what does that actually mean?

Click here to visit the new page.

Over the last decade my comics have lived almost entirely within blog posts.

For most of my cartooning life I’ve treated cartooning as a form of blog post illustration: my cartoons illustrated ideas found within my posts.

For years, just about every blog post would be published with a handful of cartoons. Those cartoons would then spread through social media.

And that used to work just fine.

Click on a cartoon, then you can just cycle through others using the arrow keys on your keyboard.

Why the change?

Because lately my professional life (and this blog) have focused more and more on information design. And my plans for the future involve lots of templates and tutorials.

Templates and tutorials are harder to illustrate with comics. And honestly, only cartooning what I blog limits what I cartoon.

So I’ve decided to give my cartooning a bit of space.

Will all the cartoons show up on the comics page?

Short answer, no.

I plan to post my cartoons first for my Patrons (you can always join us on Patreon). Some cartoons will stay as Patreon exclusives, but most will go to the comics page.

As for the archives, right now I just have 2022 in there. I plan to back-publish my archives. But since I’ve drawn hundreds of cartoons it may take a little while.

Until then, the best way to see all of my cartoons is by becoming a Patron where you’ll get access to my private Dropbox folder.

Will this mean more cartoons?

Yes.

I’ve changed my process, and it’s re-opened the cartoon floodgates. So be prepared for lots more cartoons in the future (even if you choose not to join us in the Patreon community and just stay a public fan).

How do I get there directly?

You can click the menu link on the freshspectrum homepage.

OR, just type evaluationcomics.com into your browser.

Hope you enjoy!

Written by cplysy · Categorized: freshspectrum

Apr 18 2022

10 Subtle Signs of “Death by PowerPoint”

Death by PowerPoint makes our audience scroll through their phone or lose interest. Important information sits on the slide, gathering dust.

We’re all familiar with the obvious signs of Death by PowerPoint:

  • Text-wall slides with bullet points for daaaays.
  • Using filler words (um, like, so).
  • Multiple graphs smushed on one slide with tiny text.
  • Tiny, grainy images with cheesy stock photography models.

But are you familiar with the subtle signs of Death by PowerPoint?

10 Subtle Signs of Death by PowerPoint

Here are 10 subtle signs that our presentations might need some TLC:

Spending Too Much or Too Little Time Making Slides

Are you making the slides the night before? That’s not enough time.

I’m guilty. I used to make slides on the plane on the way to the conference.

Or, maybe you’re running into the opposite challenge: Are you spending too much time making slides?

Again, I’m guilty. Sometimes I’d waste an entire weekend making slides for a big talk.

If your time management falls on either extreme—you’re spending too little time or too much time making the slides—then that’s a subtle sign of Death by PowerPoint.

Poor time management → poor presentation quality → adverse effects on our audience.

Running Out of Time to Make Handouts

In a perfect world, every presentation would be accompanied by a separate handout, one-pager, or even technical report.

In reality, I used to just share a PDF’d copy of my slides. I wasn’t sure what to include in the handout, how to format it, or whether it was really necessary (it is!!!).

As a result, my slides were in this weird limbo: I’d strive for great-looking presentation slides with lots of images and very little text. But sometimes I’d get nervous and add too much text so that the slides could do double-duty as a handout.

When our PowerPoint has to serve two purposes, as the presentation slides and as the handout, the presentation suffers.

Losing Time on Colors and Fonts

Have you ever used one of PowerPoint’s ugly and overused templates?

I have. A dozen times.

Have you lost time searching through folders and subfolders for your organization’s templates?

Have you lost time making your own templates?

Have you lost time thinking about colors?

Fonts?

Photos?

Sure, it takes time to create an initial corporate template with Theme Colors and Theme Fonts.

But these Work Hard Once techniques save a ton of time in the long run.

Losing time before each presentation on colors and fonts is a sure sign of Death by PowerPoint. When our minds are distracted by the minutiae of slide design, our audience feels the effects.

Including Lots of Graphs

I bet you’re already trying to avoid text-heavy slides.

But are you actively trying to avoid graph-heavy slides?

It sounds counterintuitive, I know. I teach dataviz for a living, after all. 😊

Our audience needs a variety of visuals. Not just graphs. They need graphs, tables, diagrams, photos, maps, logos, quotes, stories, and more.

Graph-heavy presentations are a subtle sign of Death by PowerPoint.

Having Trouble Editing Graphs

Is it tricky to update your graphs, tables, and diagrams once they’re inside PowerPoint?

Maybe you need to:

  • Link your Excel spreadsheet to your PowerPoint slide (so that changes in Excel are reflected in PowerPoint);
  • Make your fonts bigger or smaller in PowerPoint;
  • Adjust the graph colors; or
  • Change the chart type (e.g., from a vertical column chart to a horizontal bar chart).

There are plenty of behind-the-scenes techniques to make editing easier.

Editing the long, hard way is a sign of Death by PowerPoint.

Word-Vomiting Our Presentation’s “So What?”

Can you step back and write a “takeaway tweet” for your presentation?

It doesn’t matter if you’re prepping for a 5-minute update in your staff meeting, or a multi-day workshop, or a keynote speech.

You know the quote: “If you can’t explain it simply, you don’t understand it well enough.”

If it takes us sentences and sentences and sentences to get to the point, then our presentation is probably causing Death by PowerPoint.

Feeling Weird on the Webcam

Pre-pandemic, were you comfortable speaking in-person?

Nowadays, in the world of daily Zooms, do you feel awkward on the webcam?

Maybe you’re:

  • hesitating to turn on your webcam at all?
  • wondering if you look okay?
  • staring at your own face when you speak?

A lot of us feel weird on the webcam. It’s so different from being in-person.

But it doesn’t have to stay awkward.

There are plenty of rules of thumb: where to look, and for how long; which tech to use to give you a boost; and how to overcome your own self-consciousness about your appearance.

Including Lots of Details–Just in Case

Just in case someone wants to review the slides ahead of time.

Just in case someone asks a tough question.

Adding “just in case” information dilutes the power of our presentation, which is a sign of Death by PowerPoint.

Have you deleted your “just in case” graphs, tables, and diagrams?

If not, it’s Death by PowerPoint.

Guessing How to Use Our Hands

Years ago, when I first started studying public speaking, I read that I was supposed to “use my hands.”

The statistics were promising: We’re rated as being more trustworthy when the audience can see our hands.

Think of the biological roots from cavepeople days. When others can see our hands, they know we’re not holding a weapon. We’re approachable. We’re safe. We’re trustworthy.

At first, I wasn’t sure what to do:

  • Hands placed firmly on the podium (for big conference talks)?
  • Hands in front pockets (for smaller meetings)?
  • Hands holding a notebook or tablet with notes?
  • Hands holding a laser pointer?

I lost so much time guessing.

Can you explicitly name 5+ ways your hand motions can help your audience understand patterns in the data?

If not, it’s Death by PowerPoint.

Not Prioritizing Accessibility

“Little a” accessibility is making sure our graphs are easy for others to understand.

“Big A” Accessibility is making sure our graphs follow specific Federal government standards so that they can be read, viewed, and understood by people with disabilities.

Both types should be priorities in every presentation.

Can you name 5+ specific edits you’ve made to increase your presentation’s accessibility and Accessibility?

If not, it’s Death by PowerPoint.

Your Turn

How many subtle signs are you guilty of? 1, 2, 5, 10?

What about the staff members that you supervise?

Written by cplysy · Categorized: depictdatastudio

Apr 15 2022

Webinar Notes: Using Evaluation in Context: Multicultural Validity and Cultural Competence in Evaluation

Date: 14 April 2022

Hosted by: AEA, Government Accountability Office

Speakers: Karen Kirkhart, Kathryn Newcomer, Giovanni Dazzo, Nicole Bowman, Terell Lasane (moderator)

This was a really great webinar. I furiously took notes on as many of the panelists’ insights as I could. The notes are imperfect (I tried to catch some direct quotes inside quotation marks, but some of this is paraphrased – any errors are my own!). If you are really interested in this topic, you can check out a recording of the webinar here, and there are a bunch of resources that the speakers shared at the end of this post.

Karen Kirkhart:

  • multicultural validity is a “call to broaden the kinds of evidence that are considered in validity conversations”
  • limited views of “validity” promote social injustice – they silence
  • 5 sources of intersecting validity evidence
    1. methodological validity – the stuff we usually think of re: quant and qual (insufficient as the only source of evidence)
    2. theoretical evidence – “insights from social sciences and humanities and professions”, Indigenous wisdom; program theories (examine these for bias towards deficits and disadvantage)
    3. relational evidence – “how people relate to one another, to our planet, to the universe”; “how power is exercised in relationships”; collaborative and participatory approaches position relationships as positive, but this is not always true; e.g., “inclusion” can “twist” into a settler invitation to assimilate
    4. experiential evidence – grounds our understanding in the lives of community members; “calls evaluators to spend time with communities, upon being invited”
    5. consequential evidence – brings accountability to our work; “examine what happens or fails to happen as a result”; if “evaluation does not move the needle towards social justice, what does that tell us about the accuracy and adequacy of our prior understandings?”

Kathryn Newcomer:

  • multicultural validity is a lens through which we should view our claims (e.g., claims of
  • “evidence-based policy making” has been embraced across OECD countries, with a focus on RCTs as the way to demonstrate evidence
  • likes the term “impactees” rather than “beneficiaries” (because you don’t know if they are benefiting!)
  • concerned with standards used in various registries to judge research
  • working on an advanced set of evidence standards – broadening view of causation, context, equity
  • fit methods to the questions
  • 3 books influential in her thinking of cultural humility and in understanding racism, sexism, and classism (Stamped from the Beginning by Ibram X. Kendi, White Trash: 400 Years of Untold History of Class in America by Nancy Isenberg; Invisible Women: Data Bias in a World Designed for Men by Caroline Criado Perez)

Giovanni Dazzo

  • evaluations are often based on opinions of what is “rigorous” according to the funder and the evaluator, but not necessarily according to the people the program is supposed to serve
  • we as evaluators often term sticky-note activities “participatory”, but is that what the community considers to be the way they participate?
  • if we enact oppressive ways of “participating”, we are robbing people of their identities
  • how can our practices restore our humanity as evaluators?
  • “an expertise that privileges distance (another word for “objectivity”)”
  • co-constructed a reflective framework
  • “the extractive nature of inquiry” vs. a way to restore
  • “restorative validity”
  • seek to heal and restore rather than to “prove a point”

Nicole Bowman

  • storytelling as valid and impactful
  • scientific and policy and academic humility to add to the idea of cultural humility
  • we must understand history in our context to walk together in a good way
  • our experiences matter, how we got here today
  • Braiding Sweetgrass by Robin Wall Kimmerer
    • braiding requires tension – “tension is respected and expected”
    • the more tension in the knot, the stronger it is – intersectionalities
  • think of the 575 tribes as a nation state
  • refers to self as a “blue collar scholar”
  • “we have to get upriver”
  • not much has changed despite many years of reports, etc.
  • let’s bring some wisdom into that work, instead of just “evidence”
  • learn about sovereign nations
  • build capacity, competency, and skills on how to work nation-to-nation
  • how do you make RFP policies so that we can build things differently and start piloting and testing things to look for better outcomes
  • who owns the data – how we publish
  • if we are trying to learn how to do things differently together, we need to dedicate more time and resources to do that
  • think about who is here and who is not here

Q&A

  • KK:
    • There is not one “evaluation community” – only a small proportion of evaluators are members of evaluation associations
    • much evaluation is done by outside contractors
    • “social impact investing” firms are not part of the evaluation community, but do a lot of evaluation work
    • lots of people have not had training in evaluation, let alone training in culturally responsive evaluation, cultural humility
    • some foundations (like Kellogg) and organizations like CREA have been doing this stuff for a long time
    • the Urban Institute – lots of free materials you can download
    • cultural humility is so important – you can never fully understand another community/culture, you don’t just do a training on cultural humility/responsiveness and say you are done
  • KK
    • cultural competence is a stance – it’s infused across the AEA competencies, not a single “competency”
    • cultural competence implies an “end point” – that term may have outlived its usefulness
  • NB
    • legal political aspects – Tribal Nations are the only groups within “cultural responsive” that have this status
  • KK
    • there’s been work on cultural responsive evaluation for a long time (e.g., growth of TIGs, diversity work in AEA)
    • intersectionality theory has had a huge impact – “it messes everything up, which is a good thing”
    • things that disrupt and shake us up are a good thing
    • within society at large, the pandemic has raised awareness of inequities and the anger and outrage of the murder of Black citizens
    • and recognition that historic “solutions” have not been working
  • TL
    • if you codify things into law, it changes society
    • The “evidence act”
    • current administration released something talking about the importance of Indigenous wisdom
  • NB
    • younger generations do not see disciplinary and other lanes – “everything is related”; they don’t see boundaries, they see opportunities, “putting together this beautiful quilt”
    • e.g., the government TIG reaches out to the Indigenous TIG all the time
    • we need to braid this together
  • TL:
    • I teach, and our discussions show that students are thinking critically about how evaluations have not met the mandate because they are not considering cultural context
  • KN:
    • qualitative and mixed methods are more and more becoming the body of research and evaluation; we may have reached a tipping point
    • many of the standards of evidence are “canonized” with positivist notions of “validity”, but more qualitative researchers are coming to the fore to challenge this; KN’s new standards are in a manuscript she’s working on
  • GD:
    • “we are more concerned with being ‘scientific enough’ than we are about being relevant”
    • demonstrated by where the money flows – to quant research – so those researchers hold more power and control
    • in participatory, community-based work, it is assumed that “participation is good”, but as KK mentioned, it’s not always so
    • we have to ask why people are being asked to participate, are they being compensated? do they have time? often funders give excuses as to why “we can’t pay individuals”
    • processes often silence minoritized or under-resourced communities
    • people often showcase the “participatory method” as the end goal, as opposed to how the method promotes mutual understanding; without that we don’t get to relational evidence or liberation
  • NB
    • there’s an Indigenous data sovereignty network
    • they are publishing in the data science literature too
    • data = power
    • “I need courageous, compassionate, and curious people”
    • mostly white males and females fill these positions that have the power and privilege
    • we have to talk about power and privilege and capitalism, uncomfortable things
    • red, white, yellow, black are all the colours on our medicine wheel, all working together
    • we have no business making policy on things we know nothing about
    • we are all learning different things – e.g., “I don’t have experience in LGBT+, but have been invited into the work because I know Native stuff and they know that I will come in a humble way”
    • you can learn about communities based on what they are posting in social media
    • we learn, unlearn, and relearn together
  • GD
    • we have to think of where the money is going
    • evaluation work is contract based
    • we’ve broadened our thinking about how we do evaluation funding. Learning about how communities do things rather than funding projects for evaluators to go in and say “tell us everything you know”
  • KN
    • book on inclusive engagement
    • we think “engagement” is saying “we are having a meeting on Wed at 7 pm so we can tell you what we are going to do to you” – that’s not engagement
    • what are communities getting from this?
    • need to think of inclusion at the design stage, not just at the evaluation stage
    • evaluators come into projects too late to do a lot of this work sometimes
    • the term “rigour” is interesting – it carries a specific ontological assumption that there is a truth that evaluations have to find; probably not how to think about it; it tends to compete with ideas of multicultural sensitivities. A very rigid view of rigour
  • KK
    • rigour is often invoked against multicultural sensitivities – my answer is that nothing is more rigorous than triangulating multiple sources of data
  • NB
    • if your “rigour” is working, why are Native people still experiencing such high levels of diabetes, suicide, and lower rates of graduation from high schools and universities – your rigour is not working, since we are not getting the outcomes
    • “let’s go beyond ‘do no harm’ and be a good relative”
  • TL:
    • when you get a “significant” result in an evaluation, people often don’t ask “does it work well for everyone? does it work well in different contexts?”
  • GD:
    • there are courses on decolonizing methodologies
    • where is the money going?

Resources

There were a tonne of resources suggested during the workshop. Here are some that I’m planning on checking out:

  • Center for Culturally Responsive Evaluation and Assessment
  • Braiding Sweetgrass by Robin Wall Kimmerer
  • Red Earth, White Lies by Vine Deloria, Jr
  • White Privilege and the Decolonization Work Needed in Evaluation to Support Indigenous Sovereignty and Self-Determination by Kate McKegg
  • Bowman, N.R. (2020). Nation-to-Nation in Evaluation: Utilizing an Indigenous Evaluation Model to Frame Systems and Government Evaluation: https://onlinelibrary.wiley.com/doi/abs/10.1002/ev.20411 
  • Bowman, N.R. (2018).  Looking Backward but Moving Forward: Honoring the Sacred and Asserting the Sovereign in Indigenous Evaluation: https://journals.sagepub.com/doi/abs/10.1177/1098214018790412. 
  • Russo Carroll et al. (2020). The CARE Principles for Indigenous Data Governance: https://datascience.codata.org/articles/10.5334/dsj-2020-043/
  • SAMHSA: Native American Center for Excellence (n.d.): Steps for Conducting Research and Evaluation in Native Communities: https://www.samhsa.gov/sites/default/files/nace-steps-conducting-research-evaluation-native-communities.pdf
  • Mariella, Brown, and Carter (2009). Tribally Driven Participatory Research: https://digitalscholarship.unlv.edu/cgi/viewcontent.cgi?article=1058&context=jhdrp
  • Indigenous Research Methodologies by Bagele Chilisa
  • Johnston-Goodstar, K. (2012). Decolonizing evaluation: The necessity of evaluation advisory groups in Indigenous evaluation. New Directions for Evaluation, 136, 109-117.
  • Smith, L. T. (2012). Decolonizing methodologies: Research and Indigenous peoples (2nd ed.). New York: Zed Books.
  • Dean-Coffey, J., Casey, J., & Caldwell, L. D. (2014). Raising the bar – integrating cultural competence and equity: Equitable evaluation. The Foundation Review, 6(2). https://doi.org/10.9707/1944-5660.1203
  • Kirkhart, K. E. (2016). Equity, privilege and validity: Traveling companions or strange bedfellows? In S. I. Donaldson & R. Picciotto (Eds.), Evaluation for an equitable society (pp. 109-131). Greenwich, CT: Information Age.
  • LaFrance, J., Nichols, R., & Kirkhart, K. E. (2012). Culture writes the script: On the centrality of context in Indigenous evaluation. In D. J. Rog, J. Fitzpatrick, & R. F. Conner (Eds.), Context: A framework for its influence on evaluation practice. New Directions for Evaluation, 135, 59-74.
  • Kirkhart, K. E. (2010). Eyes on the prize: Multicultural validity and evaluation theory. American Journal of Evaluation, 31(3), 400-413.
  • Johnson, E. C., Kirkhart, K. E., Madison, A. M., Noley, G. B.,  & Solano-Flores, G. (2008). The impact of narrow views of scientific rigor on evaluation practices for underrepresented groups. In N. L. Smith & P. Brandon (Eds.) Fundamental issues in evaluation. (pp. 197-218). New York: Guilford.
  • Stamped from the Beginning by Ibram X. Kendi
  • White Trash: 400 Years of Untold History of Class in America by Nancy Isenberg
  • Invisible Women: Data Bias in a World Designed for Men by Caroline Criado Perez

Written by cplysy · Categorized: drbethsnow

Apr 13 2022

Better Report Design or Faster Report Design? Pen and Paper Activity

So what’s better, creating one great report or ten good reports?

In an ideal world we would have ample time to plan, write, develop, illustrate, test, and iterate our evaluation reports. But that’s likely not the professional world you occupy (and it’s certainly not the world I occupy).

Freshspectrum cartoon by Chris Lysy. "We have an award winning internal design team. I suggest getting in touch no less than 3 years before the report is due."

I decided to try out a little activity at the beginning of my workshop session last week. I drew a simple x and y axis. On one axis I wrote “better design” and on the other axis I wrote, “faster design.” Then I asked everyone to find where they were on the grid now, and where they hope to be later.

What are your priorities graph.  Better design by faster design.

The “short activity” ended up taking most of our session. Because the answers were fascinating.

I think in the field of data visualization and report design the general argument you’ll find is, “good design takes time.” So if you want to prioritize good design, you need to budget more time. But what if you don’t have more time, and you’re not likely to get more time for your next report either?

Wouldn’t it be a good idea to learn how to create better reports in less time?

What are your priorities graph: better design by faster design.

  • Slower Design & Better Design (great report, but takes a lot of time to produce)
  • Slower Design & Worse Design (ugly/boring and takes a lot of time to produce)
  • Better Design & Faster Design (great report, quickly produced)
  • Worse Design & Faster Design (ugly/boring, but quickly produced)

So I want you to try this little activity. On a sheet of paper draw an x and a y axis. On the Y, write “Better Design” and on the X, write “Faster Design.”

Next, draw two points.

The first point is where you think you are right now. Are you a good designer, fast designer, both, neither? The scale is completely yours to decide.

For the second point, draw where you would like to be in the future. As you improve your design skills, do you also see yourself committing more time to reporting? If so, your overall design speed might go down.

What are your priorities graph.  Better design by faster design. Marked with "Where you are now" and "design goals."
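If it helps to make the quadrants concrete, here is a hypothetical sketch in Python; the function name, axis scaling, and example points are mine, and the labels mirror the grid above:

```python
# Classify a point on the "Better Design" (y) / "Faster Design" (x) grid
# into one of the four quadrants described above. The -1..+1 axis range
# is arbitrary -- the scale is completely yours to decide.
def quadrant(better: float, faster: float) -> str:
    if better >= 0 and faster >= 0:
        return "Better & Faster: great report, quickly produced"
    if better >= 0:
        return "Better & Slower: great report, but takes a lot of time"
    if faster >= 0:
        return "Worse & Faster: ugly/boring, but quickly produced"
    return "Worse & Slower: ugly/boring and takes a lot of time"

print(quadrant(better=0.5, faster=-0.4))   # e.g. where you are now
print(quadrant(better=0.8, faster=0.6))    # e.g. where you'd like to be
```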

So why ask the question?

Because better design isn’t always slower design. Sometimes being the better designer is being able to create more reports and reach more audiences in a shorter time period.

Spending a lot of time on your reports is a design choice. Deciding between one great report and ten good reports, that’s a choice you make.

How do you design better?

Short answer, practice and support.

It might not be the answer you were hoping to find, but becoming a better designer just takes a lot of practice. The more practice, the better.

And if you are an organization hoping to grow your team’s data design skills, provide them with support. Give your team opportunities to try new things and get more practice. And seek to eliminate any bureaucratic procedures that may be inhibiting your team’s creativity.

How do you design faster?

Short answer, assets and process.

I have subscriptions to Adobe CC and Office 365. I have designed high-quality reports and infographics using Adobe Illustrator, InDesign, XD, Sketch, and PowerPoint. But most of the time I choose to use Canva.

It’s not that Canva is a superior graphic design tool. But access to a huge library of assets (stock photos, icons, templates) speeds up my ability to create better design, faster. Often much faster.

If you want to design faster, surround yourself with assets and develop a streamlined creative process.

And if you are an organization hoping to speed up your team’s data design production, give them assets. Easy to use templates, lots of stock photos, and pre-branded icon libraries. And give them the training and support they need to use the tools.

Written by cplysy · Categorized: freshspectrum

Apr 13 2022

Criteria Based Ranking in Developmental Evaluation

Developmental Evaluation is widely implemented and a preferred option for programs that address complex problems such as poverty and homelessness. There is a growing body of literature on Developmental Evaluation (DE), and more and more evaluators are embracing this approach (myself included!).

For the past six months, my colleagues and I have been involved in DE. We have helped a client make important decisions using multiple evaluation tools, including surveys, document reviews, and Criteria Based Ranking (CBR).

In this article, I will explain what Criteria Based Ranking is and how we used it in Developmental Evaluation.  


What is Criteria Based Ranking?

CBR is a much simpler form of Multiple-Criteria Decision Approach, which comes from operational research, a discipline that deals with the development and application of advanced analytical methods to improve decision-making.   

Both CBR and the Multiple-Criteria Decision Approach weigh multiple and often conflicting criteria, such as cost versus quality. For example, in a publicly funded healthcare system, when comparing the benefits of a new drug to the status quo, decision makers need to weigh the health benefits and economic impact of both options. It is difficult to compare cost versus effectiveness directly. CBR allows us to arrive at a final numerical score (rank) while accounting for both criteria.

For the DE project, we are supporting a team of community leaders that aim to improve services for seniors within the city. The decision they were confronted with was: out of the many problems and challenges seniors face, which ones should the project prioritize and use in their engagement strategy? To address this question, we first completed a document and literature review and identified 20 priority areas.  

Next, we helped them further narrow down the priority areas using a simple form of CBR. 


How to use Criteria Based Ranking in Developmental Evaluation

In CBR, the first step is determining the relevant criteria you would like to use. In the healthcare example above, impact on health status (i.e., survival rate and quality of life) and on healthcare cost (i.e., cost of the drug and estimated savings on future healthcare costs) can be used to compare the new drug and the status quo, and arrive at a final score.

Next comes assigning a weight to each criterion. For our DE project, the criteria the stakeholders picked were equity, feasibility, urgency, and potential for joint action, and they decided that each criterion was equally important, so each was given the same weight. However, depending on the priorities of the DE project, a higher weight can be assigned to some criteria if they are determined to be of higher importance relative to the others.

The final step in CBR is assigning numerical values to determine a rank. At this stage, participants individually rate each of the priority areas on each of the criteria.

For our DE project, using an online survey platform, we asked the stakeholders to rate the 20 priority areas from low to high on the four criteria described above.

We completed the analysis, and the results were surprising. Out of the 20 priority areas, the ones that stakeholders had expected to top the list were ranked lower, and vice versa. These results show that the topic areas that often grab attention are not necessarily the ones that will be prioritized when using common principles (criteria).
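Under the hood, the ranking is just a weighted sum. Here is a minimal Python sketch of the CBR steps above; the priority areas, ratings, and 1–5 scale are hypothetical, while the four equally weighted criteria come from the project described in this post:

```python
# Equal weights, as the stakeholders in this DE project decided.
weights = {"equity": 1, "feasibility": 1, "urgency": 1, "joint_action": 1}

# Hypothetical ratings for a few priority areas (1 = low, 5 = high),
# already averaged across stakeholders.
ratings = {
    "transportation": {"equity": 4, "feasibility": 3, "urgency": 5, "joint_action": 2},
    "social isolation": {"equity": 5, "feasibility": 4, "urgency": 4, "joint_action": 5},
    "housing": {"equity": 3, "feasibility": 2, "urgency": 5, "joint_action": 3},
}

def cbr_score(area: str) -> float:
    """Weighted sum across criteria; with equal weights this is a plain sum."""
    return sum(weights[c] * r for c, r in ratings[area].items())

# Rank the priority areas from highest to lowest score.
ranked = sorted(ratings, key=cbr_score, reverse=True)
for rank, area in enumerate(ranked, start=1):
    print(f"{rank}. {area} (score: {cbr_score(area)})")
```

With equal weights the weighted sum reduces to a plain sum; changing a single value in `weights` is all it takes to emphasize, say, equity over feasibility.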


Criteria Based Ranking is one tool evaluators can use to facilitate critical thinking and some level of precision in decision making, in Developmental Evaluation and other types of evaluations.

Check our other articles on Developmental Evaluation here, and here. If you have used CBR in your evaluation work, tell us about it in the comments.



Written by cplysy · Categorized: evalacademy


Footer

Follow our Work

The easiest way to stay connected to our work is to join our newsletter. You’ll get updates on projects, learn about new events, and hear stories from those evaluators whom the field continues to actively exclude and erase.

Get Updates

Want to take further action or join a pod? Click here to learn more.

Copyright © 2026 · The May 13 Group · Log in
