
The May 13 Group



cplysy

May 06 2020

How to “Quantify” Qualitative Data


Let’s be clear: sums and frequencies are not the desired product of qualitative questions. In qualitative approaches, we want to describe, to present details and nuances and interesting outliers. But as evaluators, we need to do more than just report what is—we need to comment on what it means. In familiar evaluation terms, moving from the “what” to “so what?”

Qualitative purists may hiss at the idea of quantifying qualitative data. But as evaluators, our job is to apply evaluative thinking to our qualitative findings. Not all findings are as material as others—in other words, the one respondent who thought their nutrition class provided just the right amount of detail is likely overshadowed by the eleven who described feeling overwhelmed at the volume of information. Evaluators would be remiss not to introduce an element of quantification to their qualitative data.

Caveat: I do not intend to suggest that a higher number of respondents reporting a similar answer is always more important. Outliers and small groups matter, and understanding those outliers is a major part of why qualitative approaches are used.

But we do need to be able to describe the proportion of respondents who report similar answers.

The key to quantifying qualitative findings is consistency. Editing reports where descriptions of qualitative data included words like “a lot,” “the majority,” “many,” and “most” left me wondering why those particular words were chosen. How is “a lot” different from “many”? Are “the majority” and “most” roughly the same number of respondents? And if I was asking those questions, I knew our stakeholders would be asking them, too.

To give my staff concrete guidance, I found this framework… online… somewhere… maybe in 2013? (If this is your framework, or you know who created it, please let me know! I’ve been using these definitions in evaluation and reporting workshops for a few years, and have seen it used in Government of Canada documents, but without attribution.)

  • Few: less than 10% of participants
  • Several: less than 20%
  • Some: more than 20%
  • Many: nearly 50%
  • A majority: more than 50%, but fewer than 75%
  • Most: more than 75%
  • Vast majority: nearly all participants, with some still having different views
  • Unanimous, or almost all: all participants, or the vast majority gave similar answers and the rest did not comment


These definitions may work for you. Or you might take issue with some of the ranges and want to create your own. As I said before, consistency is key! Try using this framework in your next report, and include it in your methods appendix.
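If you do adopt the framework, it can be applied mechanically so every report uses the same word for the same proportion. Here is a minimal sketch in Python; the function name is hypothetical, and the exact cutoffs at the boundaries (e.g. where “many” ends and “a majority” begins, or what counts as a “vast majority”) are my assumptions, since the original ranges overlap and leave gaps:

```python
def describe_proportion(count, total):
    """Map a count of similar responses to a consistent descriptor,
    following the threshold framework above. Boundary cutoffs are
    assumptions where the original ranges are ambiguous."""
    if total <= 0:
        raise ValueError("total must be positive")
    if count == total:
        return "unanimous"          # all participants
    p = count / total
    if p >= 0.9:
        return "vast majority"      # nearly all, some differing views
    if p > 0.75:
        return "most"               # more than 75%
    if p > 0.5:
        return "a majority"         # more than 50%, fewer than 75%
    if p >= 0.4:
        return "many"               # nearly 50%
    if p >= 0.2:
        return "some"               # more than 20%
    if p >= 0.1:
        return "several"            # less than 20%
    return "few"                    # less than 10%

# e.g. eleven of twelve respondents feeling overwhelmed:
# describe_proportion(11, 12) -> "vast majority"
```

Whatever cutoffs you choose, writing them down (in a methods appendix or even a helper like this) is what makes the wording defensible when a stakeholder asks how “many” differs from “most.”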



Written by cplysy · Categorized: evalacademy

May 04 2020

How Far Would Evaluation Dare to Go to Save the World?

I picked up a Twitter thread today from Dana Linnell Wanzer, in which Dana mentioned that she was reading Robert Stake’s article “How far would evaluation dare to go to save the world?” on evaluation advocacy.

Stake argues that there are six common ways evaluators advocate for evaluation:

1. We care about, and often believe in, the evaluand. We hope to find the “evaluated program” working.

2. We care about evaluation and want others to care about it, too. We promote evaluation and the profession.

3. We advocate for rationality: logic and impartiality.

4. We want our evaluative work to be heard and used, and for the evaluated programs (and beyond) to take ownership of it.

5. We care about vulnerable and less privileged people.

6. We are advocates for a democratic society.

Dana Linnell Wanzer noted that she was not entirely convinced that these last two points hold for most evaluators or for the profession. Perhaps we in evaluation are not doing as well as we should when it comes to addressing vulnerability and lack of privilege and promoting equity? She says we have a lot of work to do in that area.

Dana is not even sure that most evaluators are comfortable with the advocacy role Stake described: Stake claimed that evaluators will differ in their approaches and other methodological matters, but that we could count on these six common ways of advocating for evaluation. Are these really common ground?

This is timely in these days when we try to get excited about the ideal of a world in which evaluation could play a role and help provide some response or support to the current situation (COVID-19), or even beyond that: why not, to infinity and beyond the Sustainable Development Goals. Days in which we advocate for aligning the evaluation function and community, even knowing our fragmentation, divides, and structural shortcomings. In which we speak of common ground, while holding positions, situations, certainties, intentions, desires, and needs that are so different.

We therefore still have a road ahead if we want, not so much to save the world, but to collaborate and contribute something…we already touched on this when commenting on Zenda Offir’s reflection “Evaluators, transforming our way of being“…and those six points Stake identified can also be a guide to finding common ground within our differences…let us find real, practical leadership and incentives for that kind of collective impact. Incentives suited to it, and to them, not only for the evaluation community, but to address vulnerabilities and improve our governance frameworks.

Written by cplysy · Categorized: TripleAD

May 04 2020

Zero

I have long lobbied for museums to avoid using numbers as indicators of their success. I note in Intentional Practice for Museums: A Guide for Maximizing Impact that when museums boast of their success with numbers, such as the number of annual exhibits and programs they offer, the dollars they add to their local economy, and the number of visitors they welcomed, they are sending the wrong message to whoever might be listening. These numbers are devoid of the most important and vital element of the museum experience—the quality and meaningfulness of the experience to people (the wonderful story in the LA Times about Ben Barcelona, the devoted museum-goer, comes to mind). How would museums report their on-site numbers today (take a look at the title of this post)? Online counts have the same problem as on-site counts—they, too, are devoid of meaningfulness—which brings me to my second point: shouldn’t museums seek measures that are useful both when there isn’t a crisis and when we are amidst one?

Numbers had their purpose (they are easily gathered and understood), but they have outlived their usefulness. The silver lining: with nary a visitor to count inside the building, is now not the perfect time to rethink and change how your museum measures success? How does a museum arrive at metrics that will stand the test of time?

[Image: numbers floating through space. Credit: Shutterstock]

First, imagine the canvas blank, the slate clean. Don’t worry if you continue to see numbers floating aimlessly in your mind; that’s fine. Consider that numbers may need companions. Honor the numbers, and then imagine suitable partners, as in “numbers and . . .”

There may be a few ways to begin this imagination process. Here are two—a free-form approach and a structured approach.


Free-form approach: If your museum is accustomed to having important conversations about its purpose, where everyone’s input is sought and respected, then a free-form conversation could work. If that is the case, schedule a conversation with your museum family about alternative measures of success. Ask someone to facilitate the conversation and another person to take notes. Explore and debate the value of the museum in your lives, others’ lives, and the lives of people in your local community. Through dialogue, you may come to know the qualitative value of the museum, which can lead to articulating qualitative measures. Because I have facilitated many such dialogues, I have come to expect the enormously popular question, “but how will we measure that?”, which is sure to stall further conversation or create a defeatist feeling (and we don’t need that right now!). If the question emerges, someone can say, “a quality may be hard to measure, but that doesn’t mean it isn’t measurable; let’s not worry about that now; let’s continue going deep with qualitative measures.”

With notes in hand, skip to step 5 below.

Structured approach:  If your museum is more comfortable using a structured conversation rather than a free-form approach, here are several steps you might take with your colleagues:

Step 1: Set ground rules for an open dialogue.  Here are ones I might establish:

  • Mutually agree on a facilitator who will initiate the dialogue using the questions below and maintain focus and fairness.
  • Apply active inquiry and listening without passing judgment.
  • Listen to first understand, then respond (if you need clarification, respectfully ask follow-up questions).
  • Seek to be understood (think before you speak and carefully select your words).
  • Accept that process is an art and a science, which could create a bumpy conversation at times.

Step 2: Select a note-taker—someone to record people’s thoughts so you have data.

Step 3: Break the ice.  Rather than approach the question directly, dance around the question of quality to reduce stress and encourage free thinking. You may only need a few questions to get the dialogue going; then it will take on a life of its own. Here are a few questions to consider:

Talk about yourselves—why you do what you do is an important variable:

What about your work is most important to you?
Why is that important?
Why is that important?
Why is that important?

(If the online platform you use has rooms, ask staff to self-organize into interdisciplinary groups so each group can respond to the above questions. Then reconvene and share a synthesis of the conversations.)

Step 4: Ask deeper questions:

What unique capabilities do you bring to your daily work? How do you apply those unique qualities to create quality museum experiences?

What are the qualities of your museum’s most impressive museum experience?

Imagine the visitor experience within that context: What do you see visitors doing? What do you hear visitors talking about? How do they describe the meaningfulness of that experience?  How would you describe the meaningfulness of that experience?

(Set aside two hours for Steps 1–4.)

Step 5: Analyze the notes.  Ask the museum’s most analytical person to review the notes with the goal of identifying the qualities that staff discussed. Share the list of qualities prior to the next gathering (Step 6).

Step 6: Discuss those qualities. The goal of this step is to prioritize: ultimately, you want three or four qualities. Less is more! This is hard work. Don’t give up.

Whew! With three or four qualitative indicators of how your museum affects people’s lives, you have completed the hardest part of this work! Now, if you want, feel free to start thinking about measuring. Look out for future posts to help you with that task.

The post Zero appeared first on RK&A.

Written by cplysy · Categorized: rka

May 04 2020

The COVID Slide

Last week, I watched a powerful webinar from Ohio State’s Wexner Medical Center about health inequity and COVID-19. One of the first speakers, Dr. Nwando Olayiwola, started to talk about vulnerable populations but quickly corrected herself. She called them, “populations that have been made vulnerable.” 

What a difference such a small change made. Instead of assuming that people in those populations are inherently vulnerable, her corrected phrase shows that inequity is the result of intentional decisions that negatively affect specific groups of people. Her quick self-correction stuck with me. 

This morning, I was reading the education news that comes to my inbox each day and saw a headline from the Wall Street Journal last week that read, “Some School Districts Plan to End the Year Early, Call Remote Learning Too Tough.” Another entry in the same newsletter suggested that based on a national poll of teachers and administrators, 65% want to start the school year as normal in the fall, without adjustments to the curriculum or schedule. My heart sank, and I instantly thought of Dr. Olayiwola’s revised phrase. 

Now, I am no longer a K-12 teacher, so it is unfair for me to pass judgment on educators, whose jobs were already difficult, and who had to do a 180-degree shift in their daily practices within days or weeks. That is an incredible challenge, and it will take time to adjust. I empathize with teachers and cannot imagine what I would have done if I had to shift my middle school instruction online in a heartbeat. However, we’re also faced with a growing educational crisis.

Evaluators and researchers have been doing some great work to illuminate the educational ramifications of COVID-19 on disadvantaged communities. Researchers have studied the concept of “summer slide” for many years and have shown students regress in math and reading skills without educational opportunities during the summer. There is even a National Center on Time and Learning, whose work revolves around reducing inequities related to the inflexible and insufficient school year schedules that predominate in our country. Recently, the Collaborative for Student Growth found that an even more significant “COVID slide” is likely to occur when students return to school in the fall, having retained only about 70% of the reading growth and 50% of the math growth they would have typically made in a school year. Recommendations to mitigate the COVID slide include summer school and additional learning time for students. 

Yet, the Wall Street Journal article discussed how districts across the country are choosing to end the school year up to three weeks early in order to have more time to prepare for the fall. One superintendent even stated, “It made sense to us to get rid of the stress and get ready for the following school year.”  We certainly need more supports for teachers and greater access to technology for students in order to make online learning more functional and effective. But does that justify giving up entirely?

Decades of research have shown that more time in school leads to better outcomes for students, especially those from low-income communities. In a time of increased risk for widening the achievement gap, is it ethical to throw in the towel? By justifying inaction with the feeling that it is too difficult to educate students from home, we are making our student populations — in many cases — more vulnerable than they already were. I hope that evaluators and researchers can come together with educators to study the data, fully understand the problem and its drivers, and develop policy recommendations that will not only support teachers but will also do right by students who need time with teachers to thrive.

Written by cplysy · Categorized: engagewithdata

May 01 2020

Three Types of Evaluation for Nonprofits (Simple Overview)

Written by cplysy · Categorized: connectingevidence


Copyright © 2026 · The May 13 Group
