
The May 13 Group



Aug 12 2020

Evaluation Has a Racism Problem – What Can We Do About It?

 

This article is a summary and discussion of Caldwell and Bledsoe’s 2019 paper in the American Journal of Evaluation called “Can Social Justice Live in a House of Structural Racism? A Question for the Field of Evaluation.” The original article contains the research and citations to back up these claims.

Racists vs. Racism

Before we can talk about possible strategies to address racism in evaluation, we need to make a very important distinction between individual “racists” and structural “racism.”

A racist individual is someone who holds racist beliefs, such as biases against certain races or negative opinions based on race (for example, the belief that one race is less intelligent than another). A racist, or someone holding such beliefs, may act on them by committing acts of bigotry, hate crimes, or even violence.

Structural racism, on the other hand, occurs at the level of social systems (not individuals). In the words of Ibram X. Kendi, it is like a rain that falls on everyone in a society – no one is immune to or exempt from structural racism. Racism is present in institutions and systems of power, such as unfair laws or the discriminatory practices of schools, workplaces, or government agencies. Racism can also be present within a society’s culture in the form of ideologies or myths that systematically advantage white people and disadvantage people of colour. For example, the overwhelming depiction of people of colour in mainstream media as criminals encourages discrimination and unequal treatment of individuals.

This distinction matters because when we talk about racism, we are usually not talking about racist individuals – we are talking about structural racism in our culture and institutions.

This means that people can participate in systems of racism without holding racist beliefs themselves. For example, research on implicit bias suggests that many of our decisions around racial stereotypes happen in a split second without our awareness. Because we are all living in the “rain” of structural racism, it is likely we have internalized some of these biases and stereotypes, even if we believe ourselves to be nice, fair, unbiased individuals. We are also bound by procedures, policies, and laws that may be racist – so even if we are not racist individuals, our actions are limited by structural racism.

Illustration: Bonnie Kate Wolf

Racism in evaluation

Caldwell and Bledsoe trace the history of evaluation as an academic field, and identify the ways it, too, has been soaked in the “rain” of racism. For example:

  • Western perspectives and assumptions are ingrained in evaluation methods and theory (e.g., what is defined as credible and valid data).

  • Evaluation as a field has historically excluded diverse, non-Eurocentric ways of knowing (e.g., cultures that use a logic system that is circular rather than linear).

  • In evaluation, it is generally accepted that issues can be defined and “solved” by social scientists who do not understand or value the life experiences of people of colour.

  • Evaluations are based on an academic discipline that was born out of colonialism, slavery, segregation, and apartheid (e.g., research and evaluation was used to support the racist claim that Black people were inherently less intelligent than White people).

This is not to say that all of evaluation is racist, because that would ignore the many contributions of evaluators who are Black, Indigenous, and people of colour (BIPOC).

Some responses to these issues of racism in the field include culturally responsive evaluation, indigenous evaluation, and equitable evaluation (EE). However, these perspectives are generally seen as “optional” and are not necessarily the norm in evaluation. It will take intentional work to undo these systems of racism and make the field anti-racist.

 

Strategies to eradicate racism

If we accept that “all evaluators, regardless of demographic designation, are subject to perpetuating structural and institutional racism, found in the history and systems of the profession,” the question becomes: what can we do about it?

The authors propose a suite of strategies to unravel racism within evaluation, such as:

  1. Include culturally diverse perspectives in evaluation theory, practice, and education.

  2. Normalize social justice methods and theories in the field of evaluation.

  3. Make changes to professional organizations (like the AEA or CES), for example:

    • Only nominate professionals for awards and presentations who have applied a social justice framework in their work,

    • Provide training programs with a social justice perspective,

    • Reject manuscripts that do not address social justice, equity, or culture, and

    • Make social justice a criterion for accreditation of evaluators.

  4. Have funders of evaluations require a social justice statement or equitable evaluation methods in their requests for proposals.

  5. Expand training programs, conference themes, and workshops focused on racial equity and inequality.

Part of the solution to racism in the evaluation profession is improving the standards and expectations of our professional organizations so they align with social justice and equity. Changing these sorts of systems is difficult and complex, but it can only happen if people demand it. I encourage you to think about the professional organizations you are a part of, and how you can use your power to move these conversations forward.

These may sound like radical changes (because they are!), but they are necessary to eradicate structural racism in our profession and society.

 

Source

Caldwell, L. D., & Bledsoe, K. L. (2019). Can Social Justice Live in a House of Structural Racism? A Question for the Field of Evaluation. American Journal of Evaluation, 40(1), 6–18. https://doi.org/10.1177/1098214018815772

 

A note about the author

This article was written by Nick Yarmey, a white settler living in Treaty 6 territory of what is currently called Canada. I write this with the acknowledgement that I do not have the lived experience of being Black, Indigenous, or a person of colour (BIPOC). However, I based this article on what I have learned from BIPOC authors and researchers, in the interest of taking on some of the labour of explaining these concepts, especially to other white folks. I invite questions, critiques, additions, and comments.




 

Written by cplysy · Categorized: evalacademy

Aug 12 2020

Innovation Design Quality Control

You want to transform your organization or business line and are seeking a consultant to help you. What should you look for? Let’s look at questions and issues you may want to consider when starting an innovation journey.

We break it down into three (plus) areas: Design research and foresight, service development, and evaluation.

Design Research

Design research is about exploring the problem or circumstance that you’re looking to intervene in through introducing a new product, service offering, or policy (which we’ll refer to as an innovation).

Design research is much more than ‘doing your homework’ and is meant to work with any marketing and financial studies you may have done. Design research is about exploring your end-user(s) — both identified and potential additional users. Responsible design research is also about looking at who else your innovation affects.

It will incorporate systems thinking into the process by considering the various ways in which your innovation affects and is affected by the various interconnections around it. For example, your service might be tied to other things (e.g., supply chain, regulatory issues, community norms) and good design research will help articulate these and allow you to map and model systems using visual tools.

Your innovation design team should have skills in design and research and understand a variety of methods and approaches, such as quantitative analysis, qualitative data collection, sensemaking (for innovations dealing with complex situations), and behavioural science. The last of these, behavioural science, is what allows you to understand what, why, and how an individual or group will choose to engage with your innovation, and it serves as a foundation for the next stage of work: service development.

But first, let’s go a little ahead into the future to look at the other part of design research: foresight.

Foresight

Strategic foresight is an approach to research that looks at the trends and drivers that influence specific domains of interest like your market, community, or social life as a whole. It draws on a variety of data sources such as published reports, publicly available (or privately held — if you have access) databases, as well as a series of exercises and activities that allow you and other stakeholders to envision what possible futures might look like.

The UK social innovation agency Nesta has a useful, accessible primer on some of the methods used to envision futures.

Future-thinking is important because your innovation will always be applied to tomorrow, not today. Sustainable, effective innovations are those that meet emerging needs, not just present ones. Foresight considers how and why things might change and, when combined with strategy and behavioural science, allows you to shape the design of your innovation to better anticipate and (hopefully) meet those changes as they emerge.

Service Development

Service development can include everything from exploring the physical space where your innovation will be deployed to undertaking usability research on digital platforms. The range of practices associated with what is more commonly called service design is wide, and when enlisting support to design your innovation it’s critical to ensure you have the right talent.

Service design often seeks to develop models of your intended users based on the design research you’ve undertaken. This can result in tools such as personas that provide evidence-informed caricatures of your users that you can use to develop and test scenarios.

Service design methods incorporate visual thinking tools and design thinking: exploring the research, developing ideas, testing and trying those ideas out in ways that inform strategy, and then deploying them into the world. Having a design team with skills in design methods, facilitation, and visual presentation will make this much easier.

Visuals can include everything from simple (but illustrative) maps like the image above to more sophisticated visual models, ‘gigamaps’, and storyboards.

Evaluation

Last and certainly not least is evaluation. It’s one thing to design an innovation, it’s another to know whether it does what you think it does. Evaluation allows us to assess what kind of impact our innovation has on the world, what processes lead to that impact, and what aspects of our service, product, or policy are most likely influencing this impact.

It is through evaluation of our innovation that we are better able to fine-tune, amplify, or retract our offering to ensure it’s creating the most benefit and not doing harm. Evaluation also allows us to understand what hidden value our innovation might be offering, to articulate its return on investment (ROI), and to widen our perception of what it does and could do.

Bringing in design firms that do not build professional-grade evaluation into the project is like doing half the work. What good is your new product or service if you have little idea how or whether it works in the real world over time?

These are some of the things that anyone looking to develop an innovation in-house or with a consultant team needs to consider. Our learning page has many resources on these methods and tools, as well as on overall approaches that help groups ask better questions before engaging a contractor.

This is what we do. If you want help with any of this and doing good, quality service design, design research, evaluation and foresight, please reach out and contact us. We’d love to hear from you.

Written by cplysy · Categorized: cameronnorman

Aug 12 2020

Pancreas Ponderings: What T1D Has Taught Me About Eval

Long before we entered the pre-Covid/during-Covid realm, marked by daily monitoring of case counts, testing, hospitalizations, and death tolls, I had my own pre/during world. Like evaluation, this world includes constant monitoring, learning, and adjusting. This post shares five lessons from T1D that are relevant to evaluation.

Written by cplysy · Categorized: elizabethgrim

Aug 10 2020

My Interviewee is Drinking Vodka: An Evaluation Ethics Case

 

On a summer morning, after several attempts to interview clients for an evaluation project, I arrived with a social worker at an overnight shelter. Finally, we had located Jules, who wanted to share her experiences with the program I was learning about. When we approached her and her friends, we noticed that she was sipping from a bottle of vodka. 

Now this is certainly not an everyday occurrence in my life as an evaluator. Most of my days involve evaluation planning, liaising with stakeholders, and polishing up reports. The majority of my firm’s primary data collection is done by other members of my team, although occasionally I can’t keep my curiosity in check and I conduct a few interviews myself. For this project, I wanted to be closely connected to the program beneficiaries and ensure I had a very detailed understanding of their experience.  

This project, to me, was a very big deal. 

I had heard about it well before we were contracted to explore its impact on clients; I knew that it was incredibly innovative, that it was doing hard work that appeared vitally needed, and that I wanted to know more. It was without question that I wanted to claim the role of interviewer for myself. My academic background is in anthropology. While I knew that a life in academia was not in my future, I have been grateful for my training in social science methods. What I was doing in this project was close to participant ethnography, and it rang true to me. Though the contract was short-term, as so many evaluation contracts are, I spent several days with a social worker trying to make meaningful contact with participants in this program.

The initiative we were evaluating was designed to support the most vulnerable residents of our city. Participants were invited to the program because they were found to be the most intensive users of publicly available supports. They all had very frequent interactions with police and the courts, relied regularly on overnight shelters and other housing supports, and were heavy users of emergency departments and emergency transport services. All had experienced a great deal of trauma in their lives and were living with addictions and mental health concerns, along with various severe physical ailments.  


All of this background is leading up to a question of professional ethics — but a few more details first.  

While we had relatively unprecedented access to various quantitative datasets – thanks to stakeholders determined to collaborate and establish the right consent and information sharing agreements – this evaluation would have been incomplete without investigating the lived experience of the clients. The quantitative data, compelling though it would prove, would not speak for itself. To really tell the story of this initiative, we needed to understand the clients’ journeys through services (or lack thereof) and how this initiative was different. 

Jules’ social worker shared some of what she knew about her client. Jules had lived a frankly horrifying childhood in another province before moving to our city with her boyfriend; this boyfriend was also her abuser and her pimp. She was still young, in her early twenties, addicted to alcohol, spending nights in shelters, often beaten by people she considered friends, and yet still believing that she would be loved. Animal therapy was helpful for her, and she hoped to have children someday.  

In the previous three years, Jules had been transported by ambulance 53 times. She had visited the emergency department 81 times; 11 of those visits were for head injuries. Her hospital admissions – times where she stayed overnight on an inpatient unit – were for reasons including pneumonia, a leg ulcer, pancreatitis and alcohol withdrawal. In one year, she had critical levels of ethanol in her blood eight times. 

If it is not already clear, alcohol was an integral part of Jules’ life. 

When planning our interviews, with the project team’s advice, we aimed to schedule times that would work best for clients who may be actively using intoxicants. For most, that ideal time was between 9 am and noon. Most of our interviews were scheduled at that time, although few were conducted smoothly. One client didn’t come home the night before and the on-site staff at his permanent support house couldn’t locate him. Another was able to share some of his experiences with me but became overwhelmed and we stopped the interview; his social worker spent more time with him after I left. Yet another client needed emergency care when we arrived at his apartment. 

I should note that social workers were very involved in developing our consent and interview protocol. We did not invite any clients who had guardianship orders in place, or those who the social workers felt were not cognitively or emotionally capable of participating. Social workers discussed the evaluation and the interviews with clients before scheduling, and we reviewed information about the nature of the project, the risks and benefits of participating, before obtaining informed consent. Mental health support was always immediately available if it became necessary. 

The “sober window” we were aiming for simply didn’t exist for Jules.

At this point in her life, she was drinking alcohol continuously, and substituting hand sanitizer or mouthwash when necessary. She regularly stole alcohol and products containing alcohol, leading to regular interactions with police and several warrants. At night, she was sleeping at a shelter; in the mornings, she would vomit repeatedly before starting to drink again. Without alcohol, she could not function. 

So, when I met Jules at 9:30 on a summer morning, prepared with my recorder and questions, Jules was sipping vodka on a sidewalk surrounded by friends. We were both faced with decisions. Jules needed to decide if she wanted to speak with me, and where. I needed to decide if interviewing a client who was actively drinking alcohol was ethical. 

I’ve shared this story with participants in project ethics training courses, and at an evaluation conference. People feel very strongly about whether I made the right decision. In the moment, I was rapidly running through a mental decision tree.

Perhaps the most obvious question: can an intoxicated person give informed consent? In general, I would say no. If I were interviewing middle managers about their experiences of a workplace mentoring program and one of them was drinking or using other drugs, I would very likely reschedule. But what about when a person’s most functional state requires alcohol? Without alcohol, Jules was violently ill and could not function.

How could I be certain that Jules was participating voluntarily? She had previously told her social worker that she wanted to participate and was telling me that she still wanted to. But how confident could I be that she knew what she wanted? We were providing gift cards as honoraria – perhaps Jules was so desperate for that money that she would participate even though she didn’t really want to. 

If I chose not to interview Jules, who would tell her story?  

As an evaluator, I feel that one of our responsibilities is to give voice to people who may not have the opportunity or ability to share their stories.

You may have already guessed which path I chose. And you may be thinking that I am a clearly unethical evaluator and that no researcher worth their degrees would be so cavalier. If that’s the case, I can’t blame you. Nowhere in my early-2000s research training did anyone suggest that I should pursue data collection with an intoxicated person; that particular scenario was never addressed. My training had, however, certainly provided me with examples of how researchers have exploited vulnerable people and done irreparable harm to them.

But I would like to defend my decision to proceed with my interview. I did consider several ethical points: 

  • Voluntary participation: I truly believed that Jules wanted to talk to me. She had told her social worker so in previous days and repeated that desire when we met. At the end of the interview, she thanked me for speaking with her. The gift card, though helpful, was of a value less than what she could make through prostitution in the same time – I don’t think it was unduly coercive. 

  • Informed consent: Again, Jules heard about the intention of our evaluation project and how the results would be used on more than one occasion. I do believe that though she was drinking, she did provide informed consent. I confirmed this more than once verbally before proceeding. 

  • Respect for vulnerable persons: All the clients served in this initiative were highly vulnerable. By having Jules’ social worker at the interview, we ensured that any potential new disclosures could be followed up on, and that appropriate mental health supports could be immediately provided. This interview was one more opportunity for Jules to continue her relationship with her social worker, for whom she expressed gratitude many times during our conversation.   

  • Giving voice: This point, of course, was what tipped the scales. If I chose not to interview Jules that morning, her voice and her journey would not have been included in our evaluation. As I’ve stated above, there simply was not a point during the day when Jules was not either violently ill or drinking; these were her two states of being at the time. To say that her voice was not valid, that it was less valuable than a sober voice, would have been unethical. You may disagree with me, but I stand firm that Jules’ voice needed to be amplified.  

Had I not proceeded with the interview, I would not have heard her perspective on how much the program team had helped her. I would not have seen how dedicated her social worker was – our interview ended with Jules vomiting and her social worker calmly and compassionately cleaning up both her and the floor around her. I would not have heard about the severe trauma Jules experienced throughout her life. Understanding the deeply disturbing past that led to Jules, and other clients, being served by this initiative was fundamental to telling the story of its impact.  

“I’m a hooker. I’m a drunk and a junkie.”

That’s how Jules described herself to me. She was physically and sexually abused for as long as she could remember. She feared seeking police support when beaten because her street friends who hurt her were who she considered her family. She did not want permanent supportive housing because she was terrified of being alone, even though when she was with others she was regularly beaten, robbed – even urinated on in the days before our interview. She drank until she blacked out, and often became violent during those blackouts. She desperately missed her incarcerated boyfriend and hoped that when he was released, he would love her the way she loved him. She enjoyed pet therapy and was effusive with praise and love for her workers.   

Without that story, Jules’ collection of statistics looks frankly unimpressive. While her interactions with police services in the six months she had been receiving supports from this initiative decreased by 26%, her visits to the emergency department increased by 71% and her use of ambulance services more than doubled. Sharing Jules’ story puts context to those results. 
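As a side note on reading such figures: a count that “more than doubled” is simply a percent change above 100. Here is a minimal sketch of that calculation; the before/after counts below are hypothetical illustrations, not Jules’ actual service-use data.

```python
def percent_change(before: float, after: float) -> float:
    """Relative change from a baseline period to a follow-up period, in percent."""
    if before == 0:
        raise ValueError("baseline count must be non-zero")
    return (after - before) / before * 100.0

# Hypothetical counts, chosen only to illustrate the arithmetic:
print(percent_change(50, 37))  # negative value: interactions decreased
print(percent_change(31, 53))  # positive value: visits increased
print(percent_change(20, 45))  # above 100: use more than doubled
```

The same baseline matters in both directions: a 26% drop and a 71% rise are not comparable absolute numbers unless the underlying counts are reported alongside them.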

I haven’t been faced with such a challenging decision since this project. But when it happens again, I am confident that I will rely on both my training and inherent respect for human dignity to guide my choice. 


For more on the ethics that guide evaluators, here are a few resources: 

American Evaluation Association Guiding Principles

Australian Evaluation Society Code of Ethics and Professional Practice 

Canadian Evaluation Society Ethics 

United Nations Evaluation Group Norms and Standards  




 

Written by cplysy · Categorized: evalacademy

Aug 07 2020

Evaluation Roundup – July 2020

 

Welcome to our July roundup of new and noteworthy evaluation news and resources – here is the latest.

Have something you’d like to see here? Tweet us @EvalAcademy or connect on LinkedIn!


New and Noteworthy — Reads


International Program for Development Evaluation Training – Evaluation Hackathon

For those of you on Twitter, you likely have been following IPDET’s Evaluation Hackathon. The Evaluation Hackathon took place from July 7-13 and was “a playground for creative individuals from around the world to unite their skills, knowledge and inspirations to find creative solutions to challenges of our times.” These solutions are ones that might help to empower the field of evaluation. Check out all the cool ideas on the project page.   

Capacity4dev – Evaluation in Crisis

Capacity4dev is the European Commission’s platform for sharing information related to International Cooperation and Development. Its Evaluation Support Service team created the DEVCO/ESS Evaluation in Crisis Initiative. This initiative curates resources (documents, webinars, videos, blogs and podcasts) to help evaluators evaluate in crisis. Some of the topics covered include: How do we need to adapt our processes to move quickly? What data collection techniques are best suited in a crisis situation? Do we need to review our evaluation ethics? How do we check facts when using remote techniques? Can we still contribute to sustainability and if so, how?

Eval Forward – Evaluation in Times of COVID-19

If you are looking for more insights about evaluation during times of crisis, check out Eval Forward’s three-part blog series describing reflections from leaders and managers currently engaged in humanitarian-development evaluations. Evaluation leaders from Action Against Hunger and the World Food Programme were interviewed and asked to reflect on how the pandemic is affecting the practice of evaluation and what they think it will mean for evaluation going forward. Interestingly, some speculated that COVID-19 will lead to a greater mix of national and international evaluators on evaluation teams (check out Engage R+D’s report mentioned below for why this is so important in our field).

Engage R + D – Listening for Change: Evaluators of Color Speak Out About Experiences with Foundations & Evaluation Firms

In Engage R+D’s Listening for Change learning brief, they state that “foundation staff and evaluators tasked with planning and assessing social change efforts do not reflect the demographics and cultures of the communities they serve,” and call for more attention to diversity, equity and inclusion (DEI). To make progress on DEI, we need to start by listening to the ideas, insights and experiences of professionals of color. The learning brief reports four key themes on what it will take to support leaders of color in philanthropic evaluation: 1) outreach is key to opening a career pathway; 2) attitudes and dynamics in the workplace affect retention of evaluators of color; 3) demonstrated commitment to DEI attracts evaluators of color to evaluation firms and clients; and 4) employers have an active role to play in retaining staff.

Stanford Social Innovation Review – Ten Reasons Not to Measure Impact and What to do Instead

While we’re talking about ways to transform the evaluation field, let’s talk more about impact evaluation. There is a continued push for more and more impact measurement; however, this is not always appropriate and can even be problematic in many circumstances. In this article, Mary Kay Gugerty and Dean Karlan outline ten reasons or circumstances not to measure impact and the alternatives that can be adopted instead. Ultimately these reasons fall into four categories: 1) not the right tool; 2) not now; 3) not feasible; and 4) not worth it.


New and Noteworthy — Tools


EvaluATE – Key Resource by Evaluation Topic

EvaluATE has many resources on its site; however, we all know that clicking through and navigating numerous resources can quickly leave you wondering where you are and how you got there. Instead, EvaluATE has compiled its resources into one PowerPoint file, organized by evaluation topic area, so you can quickly navigate to the resources you need. Topic areas include Finding and Selecting an Evaluator, Integrating Evaluation in Proposals, Getting Started with Evaluation, Evaluation Design, Data Collection and Analysis, and Reporting and Use.

Inspiring Impact – Review your existing data worksheet

Inspiring Impact created a worksheet that outlines a step-by-step process to help review data. The worksheet is helpful in determining what information you should continue to collect, what to stop collecting and what to start collecting. The worksheet is available in both Word and Excel formats.

Khulisa Management Services – Visual Methodologies in Evaluations

My favourite part of being an evaluator is when I can combine my analytical and creative sides (so much so that I coined the term Evalucreator!). For all you Evalucreators out there, check out this deck on visual methodologies and how to incorporate them into your evaluation practice.

DC Fiscal Policy Institute – Style Guide for Inclusive Language

We’ve mentioned DEI above in our New and Noteworthy reads. There are steps you can take toward greater DEI when you write. This style guide provides guidelines for employing inclusive language and integrating a racial equity lens in your writing. While the guide was written for the Washington, DC context, it offers terms and principles that can be applied in other settings.


New and Noteworthy — Courses, Events and Webinars


August 2020

Claremont Graduate University – The Evaluator’s Institute

A variety of courses conducted by various instructors, including some big names like Michael Quinn Patton, Ann K. Emery, and Ann Doucette.

Australian Evaluation Society – Fundamentals of Good Evaluation Reporting and Practice

Facilitator: Anne Markiewicz

Date and Time: August 17 & August 24; 9:30am – 11:00am AEST

Venue: Online 

September 2020

Evalpalooza I: Evaluation Failures with Kylie Hutchinson and Thought Leaders

Presenters: Kylie Hutchinson & Libby Smith

Date and Time: September 24; 12pm CDT

Venue: Online


We have a free guide:

Program Evaluation Scoping Guide

This is a free digital download. The guide outlines questions evaluators can ask program managers or other stakeholders to better understand the scope of the program and its evaluation. The questions in the guide are intended to help evaluators begin formulating a quote and/or an evaluation plan; however, the guide can also be used to identify disagreements or gaps in what is known about the program and/or the boundaries of the evaluation.



 

Written by cplysy · Categorized: evalacademy


Copyright © 2026 · The May 13 Group