
The May 13 Group

the next day for evaluation


allblogs

Apr 24 2021

Quality assurance for developmental (or adaptive) evaluations

The UNFPA Evaluation Office has developed a relevant new "Framework for Quality Assurance and Assessment of Developmental (or Adaptive) Evaluations at UNFPA" to help the Evaluation Office and third parties assess the quality of developmental (or adaptive) evaluations carried out by or for the agency. The report highlights several tensions that must be confronted to assure the quality of developmental (or adaptive) evaluations.

Several tensions are unavoidable when developmental evaluation principles are applied in organizationally managed evaluations, and they will affect evaluation quality. These tensions cannot be resolved; rather, they must be approached as design challenges and managed creatively.

Major tensions: There are three main tensions in assuring quality in organizationally managed developmental evaluations. They relate to (a) purpose, (b) relationships, and (c) management and/or administration.

1. Tensions related to purpose

The central purpose tension is between (a) developmental evaluation as an internal process to support the development and adaptation of an intervention and (b) the more traditional use of evaluation as a mechanism for informing and being accountable to a higher or external body.

(1) On the one hand, the purpose of developmental evaluation is to (a) provide initiative stakeholders with real-time feedback on the results of their efforts, (b) generate new learning about the challenges they are trying to address, (c) identify and critically reflect on the strengths and weaknesses of their work, and (d) enable data-informed adaptations to their goals and overall approach. This includes the need for (e) primary evaluation users to jointly use the findings, implications, and options that emerge from the process to decide where and how to develop the intervention, rather than relying on the evaluation team's recommendations.

(2) On the other hand, organizations' evaluation practices and culture usually revolve around a more traditional formative and summative approach, and they embrace the principle of publicly reporting evaluation findings and results, usually to senior decision-makers and/or external bodies, with firm conclusions and recommendations. This traditional orientation can make it hard for developmental evaluation participants and evaluators to embrace the critical self-reflection that is fundamental to developmental evaluation's success.

The design challenge for stakeholders involved in organizational developmental evaluations is this: How can organizational developmental evaluations be managed (a) in a way that keeps an internal focus on developing and adapting an intervention, (b) while operating in an institutional environment designed for more traditionally oriented, publicly reported, external evaluation?

2. Tensions in evaluators' relationships

Evaluators working in developmental contexts must navigate the tension among several key quality principles governing the relationships between evaluators and primary users: co-creation, impartiality, and independence.

(1) On the one hand, utilization-focused developmental evaluation requires evaluators to co-create the evaluation design with primary users. This means (a) selecting data sources and methods that reflect users' "philosophical and organizational" preferences, (b) using facilitated processes to build a shared understanding of findings, and (often) facilitating or co-developing the implications for the intervention's further development (e.g., leverage points, options, scenarios, additional questions).

(2) On the other hand, evaluators must be able to operate without undue influence from evaluation stakeholders (e.g., primary users, commissioners, beneficiaries), including in choosing evaluation methods for data collection and analysis, discussing findings, and preparing conclusions and ways forward.

The design challenge for stakeholders involved in developmental evaluations is: How can "developmental evaluators" (a) meaningfully co-create evaluation designs and engage evaluation users in other utilization-focused practices, (b) while safeguarding their own continued independence and impartiality throughout the process?

3. Tensions in management and/or administration

The organization's Evaluation Office and the contracted developmental evaluators must contend with tensions in the management, procurement, and administration of developmental evaluation activities.

(1) On the one hand, to be useful, developmental evaluation designs must be able to adapt to reflect initiative stakeholders' evolving goals, questions, and context. Otherwise "rigor mortis" sets in and the design loses its usefulness. In such cases, initiative stakeholders either (i) ignore the evaluation results entirely or, in some cases, (ii) undermine the development effort by pressuring stakeholders to adopt an evaluation design that is no longer relevant.

(2) On the other hand, organizational evaluations are guided by policies, guidelines, and commissioning practices that require evaluation contracts to carefully specify evaluation activities and deliverables, key deadlines for completing them, and disbursement schedules organized around the submission and approval of key evaluation outputs or products (e.g., reports).

The design challenge for organizational developmental evaluation is this: How can the organization's Evaluation Office and the evaluators develop a preliminary evaluation design (a) that is complete enough to finalize funding and an evaluation contract, yet (b) flexible enough to adapt in real time to reflect (i) evolving stakeholders and (ii) evolving evaluation needs?

More tools keep emerging to operationalize and ground developmental evaluations: a positive step for supporting development processes amid organizations' policies, their "political economy," and the singularities of each of their processes. As the Spanish idiom goes, la procesión va por dentro: the real turmoil stays beneath the surface…

Written by cplysy · Categorized: TripleAD

Apr 21 2021

Why you shouldn’t rely on default survey platforms to give you all the answers

 

So you’ve administered a survey using one of the many online survey platforms that claim to help you create a survey, analyze the results, and export your results so you can make data-driven decisions. With tools like these, who needs an evaluator! Right?

Don’t get us wrong: surveys are useful tools, and we’re fans of any survey platform that makes it easier to use the results. But what about when you want to scratch beneath the surface, or present a legible graph that will convince the program director or funder that action needs to be taken? This is where the canned survey tools start to falter.

Just like the default settings in Microsoft Excel or Word, the default report settings in most survey platforms give you some quick and dirty results, good for getting an overall sense of what’s going on. But sometimes you’ll need to present your data in a different way.

In this two-part series, we’ll take you through the steps to export and format your data so that it’s useable, then we’ll teach you some of our favourite data cleaning and analysis tips!


First, let’s review why you might want to look beyond the survey platform’s pre-made graphs and analysis for your answers.

Here’s an example graph from a survey platform; there was a total of 6 responses for this question:

[Image: a survey platform’s default graph for a question with 6 responses]

In this graph alone:

  • The graphs default to percentages, which over-inflates results when you have small numbers of respondents (usually we say less than 10 respondents). With 6 respondents, a single person’s answer moves the chart by almost 17 percentage points.

  • Colours are often random and don’t highlight key findings.

  • While the response categories make sense for gathering the data, they get confusing when you are visualizing the data.

A quick makeover shows something a little different:

[Image: the same data after a quick graph makeover]

Other issues with the default graphs can include:

  • Inability to compare answers across questions: did people who answered one question one way answer another question in the same way?

  • Inability to combine results across surveys. Say you have a survey after each event you host and want to look at the results of all of the surveys together.

So now what? You go to download your survey results and are faced with a variety of options: do you download responses by question? By respondent? PDF? CSV? Excel? Have no fear, we’ll guide you through this process.

Step One:

Download the most granular responses available. Select responses by respondent so that you are able to trace responses by participant and link a participant’s response to one question (e.g. demographics, like age or gender) to their response for another question.
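If you’re comfortable with a bit of code, a by-respondent export also makes cross-question comparisons straightforward. Here’s a minimal sketch in Python with pandas; the file name and column names are hypothetical stand-ins for your own export:

```python
# Minimal sketch: linking answers across questions in a by-respondent export.
# Assumes one row per respondent; the file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("survey_export_by_respondent.csv")

# Cross-tabulate one question against another, e.g. did people in different
# age groups answer the satisfaction question differently?
comparison = pd.crosstab(df["age_group"], df["overall_satisfaction"])
print(comparison)
```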

Step Two:

Download the responses in an editable format. We prefer Excel, but CSV works as well, since you can open a CSV file in Excel and save it as an Excel file. Please, please, please, don’t download the PDF version.

Step Three:

Go forth and analyze with a few tips from us:

1. Label the Questions Clearly

Sometimes the survey platform exports information in a way that makes it hard to tell what the survey question was, either by exporting the very long question text in full or by cutting out parts of the question stem. Save yourself the hassle of flipping between your survey and your data by clearly labelling the questions so you understand which questions are linked with which answers.

Here’s an example of how column headers show up on exported data from one survey platform:

[Image: exported column headers from one survey platform]

 

Here’s how we suggest formatting it with some simple tweaks:

[Image: the same column headers after relabelling]
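If you’d rather do this relabelling in code than by hand, the same tweak might look like this pandas sketch. The original column names below are hypothetical examples of what a platform might export:

```python
# Minimal sketch: replacing exported headers with short, clear question labels.
# The original column names are hypothetical examples of platform output.
import pandas as pd

df = pd.read_csv("survey_export_by_respondent.csv")

# Map long or truncated question text to labels you can read at a glance.
df = df.rename(columns={
    "Please rate your level of agreement with the following statement: I": "agree_learned_new_skill",
    "Response 3": "heard_about_us",
})
df.to_csv("survey_clean.csv", index=False)
```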

2. Leave a bread crumb trail

Want to analyze the same survey delivered a few times, like multiple post-event surveys? We suggest merging them into a single file. However, it’s important to track where the surveys came from. Make one of your columns a description of where the data came from (e.g. March 31 session, Group A session). This allows you to go back to the original data if necessary and allows you to analyze the survey answers by source.

[Image: merged survey data with the session date in column A]

Notice how in this example, the session date in column A tells you which post-event survey the data is linked to.
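In code, the same bread-crumb approach might look like this pandas sketch, where the session label becomes the first column of the merged file (the file names and session labels are hypothetical):

```python
# Minimal sketch: merging several post-event survey exports into one table,
# keeping a "session" column that records where each row came from.
# File names and session labels are hypothetical.
import pandas as pd

files = {
    "March 31 session": "survey_march31.csv",
    "Group A session": "survey_group_a.csv",
}

frames = []
for session, path in files.items():
    frame = pd.read_csv(path)
    frame.insert(0, "session", session)  # bread crumb: source in column A
    frames.append(frame)

merged = pd.concat(frames, ignore_index=True)

# Analyze answers by source, or trace any row back to its original survey.
print(merged.groupby("session").size())
```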

3. Keep your data organized

Create a separate “analysis” tab, or even multiple analysis tabs (e.g. demographic analysis, skill analysis) to organize your data. Label your tables and charts in the analysis tab by survey question so you can quickly scroll to find what you are looking for. The goal is to be able to find what you are looking for quickly, with minimal scrolling or clicking between tabs.

Here’s an example of what a survey analysis spreadsheet might look like. All data are in one tab in this example, with separate visualization/analysis tabs for each section of the survey (demographics, resources, mentoring, knowledge). 

[Image: an example survey analysis spreadsheet with one data tab and separate analysis tabs]

 

4. Let Excel do the math

Level up your Excel skills with a few formulas and tricks using tips from the next article in our series. 


Now that you’ve got a handle on exporting and organizing the data so that it’s more useful, check out our favourite tips and tricks for cleaning and analyzing data (article to come May 2021) and this article on dialing down your data so that people understand what to look at.

 

To learn more about applying evaluation in practice, check out more of our articles, or connect with us over on Twitter (@EvalAcademy) or LinkedIn.



Written by cplysy · Categorized: evalacademy

Apr 20 2021

“Whose Evidence Is It?”

I’ve been talking a lot about evidence lately, but I feel I’ve been missing a crucial piece.

As a reminder, the Every Student Succeeds Act (ESSA), our country’s primary federal education legislation, requires that programs and interventions purchased with federal education dollars be evidence-based.

Most people know these federal dollars better as “Title I,” or the provision of the Act that allocates funds for schools and districts with large percentages of students from low-income families, although there are other funding streams as well.

Title I has always been a great way for small, community-based, and/or minority-owned organizations to work with schools in service of students and families who can benefit most from their support.

And now, those who have not been formally evaluated cannot qualify for these funds. 

Not only is that a hit to their financial lifeline, but it also reduces both their access to the kids and families they serve and their ability to serve them effectively.

That’s why I love working with small organizations who are new to evaluation work – to help them use data to tell their story and show why they qualify for the use of those funds. I’ve always seen it as an equity strategy.

In the fall, I was captivated by the keynote speaker at the Crane Center’s 2020 Symposium on Children, Dr. Iheoma Iruka. Here’s what Dr. Iruka said that stopped me in my tracks: “Whose evidence is it?”

She talked about the long history of maltreatment and discrimination in the collection of evidence, especially in medicine, and particularly for marginalized and minoritized communities.

Often, the organizations that could (and still can) afford the time, resources, and expertise to conduct rigorous evaluations did not represent or look out for the best interests of the people they were studying — and many did the exact opposite.

It made me realize that “evidence-based” can be a loaded term — one that brings up a lot of pain and distrust for many groups of people. 

Yet this term is here to stay for now, as most federal agencies have some requirement that their funds get spent on activities and interventions that can demonstrate their impact.

Now, this awful history is not my area of expertise, but it is one that I am trying to learn more about, for my own professional and personal growth.

However, here’s what I can say: we don’t have to rely on the research and evidence that exists in the world.

By developing our own base of evidence, we can ensure that all students and families are treated well and represented fairly and equitably in evaluations and program delivery.

And that starts with organizations doing the good work with kids and families, ensuring that they, too, have a spot on those lists of evidence-based interventions.

I know that’s easier said than done.

But don’t worry – it doesn’t have to be! I’ve got two great (read: FREE) opportunities for your organization to begin its evidence-based journey!

Starting tomorrow – and continuing through May, I’ll be hosting a workshop series on this exact topic, sponsored by the Maryland Out of School Time (MOST) Network! The first session — an ESSA overview — is tomorrow (4/21) at 10 AM EST.  See the image below for the full schedule. ​

[Image: full schedule of the workshop series]

And of course, you can always sign up for our free email mini-series, Evidence for Engagement, which will walk you through how to become a Level-4 approved vendor.


Written by cplysy · Categorized: engagewithdata


Apr 19 2021

Improving Our Museum Labels Through A Harm Reduction Lens: Part 2

By: Rachel Nicholson

“If you want to go fast, go alone; but if you want to go far, go together.”

– African Proverb

In my last post, I wrote about harm reduction as a philosophy and how it might be applied to rethinking museum labels. In this post, I’ll explain just how we started these conversations at the Nelson-Atkins and put our ideas in action.

This project is ongoing, and we have certainly stumbled along the way, but central to our process has been a commitment to collaboration and shared learning across our institution. While the three of us working in Interpretation could have identified problematic labels, taken them down, and swapped them out for new ones relatively quickly, we saw this as an opportunity to build common language around harm across the museum and to collectively imagine new guidelines and principles for our interpretative text.

To deepen our learning and ensure we included diverse voices and experiences in the process, we invited staff members who do not traditionally work on labels to our conversations and workshops. By asking for perspectives from colleagues in the Public Programs, School and Educator Programs, Visitor Services, and Design departments (to name a few) at different points in this process we broadened our understanding of how different people can experience a work of art and its accompanying label. Many of our colleagues outside of Curatorial and Interpretation are also personally members of groups that may have experienced harm in museums.

What do we mean by harm?

Step one, which took place from June to August 2020, was to identify what we at the Nelson-Atkins mean by harm in language and how we see it manifesting in our existing labels. To achieve this, we met with Curators, Interpretive Planners, and members of our Community and Access and School and Educator teams. Dividing into breakout rooms, we started with the question “How can art museum labels do harm?”

This broad question led to a specific list of ways labels can cause harm, including:

  • Assuming a generalized experience of the world, that the white male is the basic human experience, that whiteness is the norm
  • Subjective judgements or comments on physical appearance or value judgements on beauty without historical context
  • Passive voice sentence construction that generalizes or avoids naming an offender
  • Vocabulary and terminology that reinforces hegemonic power and exclusionary white narratives (e.g., Exotic, Discover, Westward expansion and progress)
  • Non-person-first language; language that puts a social identity before the person

From this initial conversation, we asked our Curatorial colleagues to mine the permanent collection galleries and identify labels they’d like to replace, based on our agreed upon list of what can cause harm in labels. Not surprisingly, their critical examinations necessarily broadened our definition. The list below shows the additional types of harm they identified.

A list of 14 types of harm written in black text on a white background.
Our list of all types of harm identified by curators across collections.

Working off their notes, we in Interpretation mapped the types of harm caused by labels across the different permanent collections. This reinforced the notion that strategies for improving our labels can be implemented across the museum, rather than department by department. Since visitors tend not to think about collections separately but rather wander between galleries, we needed to think about our approach to labels and harm holistically.

A chart with the 14 types of harm in black text on a blue background in the first column. Our 5 collection areas on the top row in white text on blue background. X’s across the chart show how the types of harm appear in different and overlapping areas of the collection.
Our chart of the types of harm showing how they mapped across our collections.

In grouping types of harm, we also realized we could broadly place “harm done by label language” into two main categories: what we say and what we don’t say. In our next step, we chose to focus on these two arenas, acknowledging that accessible fonts and design were also necessary pieces for improving the gallery experience for all visitors. 

A large, light yellow circle with two smaller circles, one blue and one yellow, inside. Black text inside the large circle reads “delivery: damaged label, poor lighting, inaccessible font.” Black text inside the smaller blue circle reads “What we don’t say” and black text inside the smaller yellow circle reads “What we say.”

Our diagram showing the overlap between how language can cause harm, the major focus of our label workshops.

The Challenge of Collaborating While Working Remotely

To continue this conversation about harm caused by labels across our collections, our team organized cross-divisional label workshops. We paired Curatorial departments and chose a problematic label from each, one dealing with harm through omission and one dealing with harm through explicit language.

Working again in small groups of two Interpretive Planners (one facilitating and one taking notes) and with four to six Curators, we used Google Docs to collaboratively edit in real time and ask questions of the current labels, first identifying the harm together and then imagining how we might rewrite the labels. Our format was simple: identify harm individually through collaborative editing, then ask questions of the object and its label, first as a visitor and then as an “expert.” While working remotely has made it hard to have creative meetings like this, a tool like Google Docs helped us create a sense of shared work and brainstorming that is often missing from video meetings.

A screenshot of a Google Doc that we used in our label workshop. At the left is the text of the current label, in the middle is an image of a Southeast Asian bronze sculpture, and at the right are some of the comments from workshop participants.
We used Google Docs to collaboratively edit in real time with colleagues.

Grounding each workshop in specific example labels proved very useful. These conversations about how we improve labels can get abstract very quickly. Being able to all focus on a single label at a time, agree on how we saw harm in the label, and then imagine something new helped us focus. The collaborative editing also added an element of fun and experimentation by creating a shared activity even though we were not physically in the same room.

The workshops proved so effective that our Director, upon hearing about them, asked to be included. Organizing around a shared activity helped create horizontal communication. Our Director jumped in and was able to share his own thoughts on improving our labels as well as hear how colleagues were thinking through the process. 

Identifying Patterns and Principles

From these workshops, groups consistently came back to the question of “What can cause harm?” Across conversations, we began to recognize patterns: over-generalization, oversimplification, and simply trying to say too much could lead, in the worst case, to harmful labels or, in the best case, to boring or unhelpful labels.

For instance, in the above example, the sculpture from our Southeast Asian collection was referred to as a “superb example of Chola workmanship,” yet we never gave evidence for why it is a superb example, or even what Chola workmanship is. Not only does the label assume knowledge on the visitor’s part, it is so general that it does not really tell us anything about this particular Chola dynasty bronze, thus failing to do justice to a legacy of art-making and cultural tradition.

The label was originally identified as harmful because it equates a “dwarf” with “ignorance.” Once we began discussing the object as a group, we realized we could go beyond correcting this harmful language and explore the most interesting pieces of this sculpture, information we were not providing in the current, over-generalized label.

A screenshot of a Google Doc that we used in our label workshop. At the left is the text of the current label, in the middle is an image of Gauguin’s painting of a Tahitian woman, and at the right are some of the comments from workshop participants.
Our workshop on our current Gauguin label helped us focus on the kinds of stories we might tell and how we could reorient the label to be about the subject rather than the artist.

In another case, as we discussed a label for Paul Gauguin’s Faaturama that referenced France’s colonization of Tahiti but did not explicitly mention Gauguin’s role in this power dynamic, we identified just how much space was wasted with general terms and passive voice. We realized that we could do much more in 90 words if we used direct and specific language to name violent and harmful histories.

Beyond using passive voice, the label also focused entirely on Gauguin, removing any discussion of the female subject of the painting. By omitting her story, the label reinforced the power dynamic between Gauguin and his Tahitian subject. What if, we asked, we rewrote the label from the perspective of the woman, inviting visitors to reflect on why we know so little about her? While we often want to share the information we have (in this case, information about Gauguin), this could be an opportunity to invite new, unresolved interpretations of the work, perhaps ones that raise more questions than they answer.

In the next post I’ll explore how these conversations helped us to develop Principles for Interpretative Text. What if, we asked, a label did not present a neat and simple story but rather opened space for further questioning and interpretation by visitors?

About the Author

Rachel Nicholson is the Director, Interpretation, Evaluation & Visitor Research at the Nelson-Atkins Museum of Art.  You can reach her at rnicholson@nelson-atkins.org. Every two weeks throughout April and May 2021, Rachel will share her team’s efforts to rewrite the Nelson-Atkins’ permanent collection gallery labels through a harm reduction lens. Read her first post here.

Don’t want to miss a post? Subscribe to our blog to get new posts delivered to your inbox (just fill out the form at the right)! 

The post Improving Our Museum Labels Through A Harm Reduction Lens: Part 2 appeared first on RK&A.

Written by cplysy · Categorized: rka

