
The May 13 Group




Jan 16 2022

Stop confusing being data driven with doing data driven

If I see one well-intended mistake my customers make over and over again, it’s that they confuse “doing data” — having a flurry of data-related activities — with BEING data driven. The organizations that first take the time to address the culture — or being — part of the process are the ones that ultimately see the biggest shift towards a culture that thrives with open, challenging conversations with data.

To be or to do?

First, let’s get clear on the difference between doing and being.

Doing involves actions: discrete, tangible, work. In a sense, it is product-focused.

Being involves thoughts and beliefs: everything underneath how we operate.

My clients find doing/actions easier to describe, measure, and achieve. Who doesn’t love a good checklist? And while they initially find being/beliefs harder to see and describe, they ultimately feel far more capable of navigating organizational and workflow shifts once they dig into the being.

Fellow coach Alex Carabi describes this dichotomy like an iceberg — often leaders/managers instinctively focus on the doing, the part they can easily see above the water; but those changes are only temporary. The real change, the kind of change that dictates whether an organization will fail or succeed, is far below the water line in the space where the collective beliefs of the organization are held.

Debunking your excuses for not being data driven

I hear a lot of excuses for why organizations struggle with being data driven. Let’s do a quick reality check so we can talk more about why being data driven is a matter of your organizational health.

· “We need better tools.” I know some amazing organizations that use Microsoft Excel or Google Sheets as their platform. They have cultivated a desire to create and use information from them, which is far more powerful than the tool itself.

· “I am not a data person.” We are all data people when the data is meaningful: when most of us prepare for a trip away from home, we check the weather report and pack accordingly. Weather reports are dashboards representing complex data and analysis, and all of us can use a dashboard to make strategic decisions. This mindset shift is crucial: we are all data people; your organization already “does” data. You have to decide collectively what data will be meaningful to your work and commit to using it as fundamental to your organization’s health.

· “We have too many competing priorities” or “We don’t have time for this.” What self-limiting narrative are you telling yourself to restrict information that would help your organization serve people better? What is getting in between you and greatness? What if data was a crucial mineral in the water that nourishes your organization?

· “We have a solid command of our data and who we serve.” Often this claim is a saboteur or guard rail against letting information into the system that would challenge preconceived notions; it can inhibit willingness to innovate and mask a reluctance to confront evidence that something you do isn’t working. Your experience of your clients may be supported by the data, but more often you are missing part of the story. And why wouldn’t you want the whole picture?

DOING data-driven relies on actions,

BEING data-driven requires culture change

My favorite poster, inherited from my grandfather… reinterpreted as the dance of doing & being: Do, Be, Do, Be, DO. Photo credit: me.

Being and doing are partners. If you want to create a more data-driven culture you won’t treat them as either/or. Many organizations focus on improvements at the doing part — such as building logic models and dashboards — but then relegate those tools to a specific person or department who may not have the influence to ensure that the data gets used when critical decisions are made.

BEING data driven requires that the constellation of people recognize that data… and work together to integrate it into the organization’s culture and all its decision-making.

If your organizational system were described like your health, you couldn’t define it by any one organ or health metric. Rather, health requires each of those parts to work both independently and as part of the greater whole.

And the hardest part: you already knew this

Systems-level work isn’t particularly new or even revolutionary. You know some flavor of it by another framework: team building has been around for ages, and efforts around diversity, equity, inclusion and belonging are deeply rooted in systemic efforts.

When we sit down and piece through where your organization struggles, maybe do an elucidating Five Whys exercise, you might exhale with a knowing sigh of someone who was trying to fix a symptom of a deeper underlying problem. But that’s also the good news. Once you bring these issues to light, you can do something about them. And get on with the power of BEING data driven.

Written by cplysy · Categorized: betsyblock

Jan 14 2022

Evaluation Facilitation Series: Facilitation Activity #1 (Making Metaphors)

Evaluators need to be many things; a good facilitator is high up on the list. We facilitate all sorts of conversations throughout the evaluation process (or at least we should!).

Sometimes just posing questions to a group and having a discussion works, but other times we may need to put in a little more effort to elicit the rich discussion and insights we are looking for.

After all, evaluation may get you jazzed up, but that probably is not the case for most people! To make sure stakeholders are engaged and get just as excited about evaluation as we are, we need to use all the tools we have to support evaluation use.

Earlier in my career, I found myself searching the Internet for facilitated activity ideas I could apply in my evaluations. In 2014, I came across Jean A. King and Laurie Stevahn’s interactive evaluation practice workshop that was being offered at the American Evaluation Association conference.

In this workshop, they introduced interactive strategies that could be applied across various evaluation aims, approaches, and contexts. Jean and Laurie based this workshop on their book called Interactive Evaluation Practice: Mastering the Interpersonal Dynamics of Program Evaluation. I have revisited this book many times since that workshop when I need some inspiration for facilitation activities.

This evaluation facilitation series will highlight some of these facilitation activities and how I have applied them over the years. This article focuses on their “Making Metaphors” activity, along with some step-by-step instructions you can use to try it in your own evaluations.


Making Metaphors

When to use

Metaphors and similes are effective at taking an abstract or hard-to-understand concept and comparing it to a simple, concrete concept. As a result, good metaphors are effective at helping people understand something they otherwise might not have (check out this article where we compared writing an evaluation report to cooking). I like using the Making Metaphors activity to: 

  • Assist people with important or difficult conversations 

  • Integrate views and develop a shared understanding on a topic 

  • Articulate what is known to the group 

  • Spark humour and playfulness 

In particular, I like to use this activity when I start an evaluation, because I want to understand the ideas, perceptions, and experiences those I am working with have around evaluation so I can adjust my approach accordingly. Sometimes I incorporate the Making Metaphors activity in the evaluation kick-off meeting when people introduce themselves (see our What conversations do you need to have at the start of an evaluation? article for more kick-off meeting ideas). As part of their introduction, I will ask them to think about a metaphor or simile stem (e.g., “Evaluation is like…”) and then ask them to complete it.

This activity is also good to use when completing a logic model or theory of change with a group and you need them to clearly articulate what success looks like.  

The following outlines directions for the activity, along with an example and tips to consider. 


Directions 

Group Size 

This activity can work with any group size, but works best if you break participants into groups of 2–5 people.

Time Required 

5–10 minutes per metaphor

 

Instructions 

  1. Present open-ended metaphor or simile stems (using the word “like”). For example: 

    • “My previous evaluation experiences have been like…” 

    • “Evaluation is like…” 

    • “A successful evaluation looks like…” 

    • “Success of the program looks like…” 

  2. Ask participants to complete the metaphor or simile. The quickest and simplest way is to ask participants to respond using words. However, you can also ask participants to create a response by selecting an object (see Tips below). If you have the time, I would suggest trying objects over words, because words can sometimes be constraining and using an object depends less on verbal acuity. 

  3. (If an object is selected) Give participants time to search until they find an object that relates to and completes the metaphor or simile. Connections to the object can be anything – literal, rational, emotional, intuitive, symbolic, etc. 

  4. (If an object is selected) Ask participants to take a few minutes to 1) write down the details of the object and then 2) describe the connection they made between the object and the metaphor. 

  5. Ask participants to share back the word or details of the object and how it relates to the metaphor. 

  6. In groups, identify patterns, similar qualities, and/or unique outliers across the responses. 

  7. Have each group share back their new insights with the larger group and what it means going forward. 

 

Example script for “Evaluation is like…” metaphor stem: 

  1. I want you to look around the room for an object or something you feel represents what you think of when you hear the term “evaluation.” 

  2. Write a few notes about the object. Describe it in detail first and then describe how it relates to evaluation.  

  3. In your group, share back why you chose that object. Take time to describe the object and its connection to the following simile: “Evaluation is like…” 

  4. After you have all shared, have a discussion in your group about any patterns, similarities, and differences in responses.  

  5. After your discussion, have your group identify one idea or suggestion for how to make this the best evaluation experience for the group. 


Tips 

Objects can be random items you collect from around your house. Store the collection of items in a plastic box, tote bag, or other easily transportable container. 

You can also ask participants to select items in their pockets, wallets, handbags, briefcases, or things they are wearing or from items they see in the room (selecting from a room works best with virtual meetings). 

Instead of objects, you can also use sets of visual images (e.g., pictures, art cards, photos, postcards, magazine illustrations, etc.). At My Best is a set of cards that contain words on one side and a picture on the other. I have used these in workshops, and they are more compact to carry than a box of random items! 

Display sets of visual images or tangible objects on a designated table in the room. Participants gather around the table to look at the items and select one to complete the metaphor. You can line up the images or scatter them – it doesn’t matter!

Each set of words, images, or objects should contain at least 10 to 15 items more than the total number of people participating (e.g., if 25 people participate, then the set should contain about 35–40 items).


If you need to infuse a bit more creativity into your evaluation practice, then try out the Making Metaphors activity.

If you haven’t already, subscribe to our newsletter for more facilitation inspiration!



Written by cplysy · Categorized: evalacademy

Jan 14 2022

Evaluation Question Examples by Type of Evaluation

We’ve defined and written about How to Write Good Evaluation Questions before. We’ve even shared some examples of evaluation questions based on the sector or content area. But there is another way to think about evaluation questions. Let’s look at how using different evaluation strategies or frameworks can help you to craft those perfect evaluation questions.

To describe how evaluation questions differ based on type of evaluation, let’s make up a hypothetical program to evaluate.

You’ve been hired to evaluate a program that has been operating for one year. The program aims to engage youth in an after-school program with the goal of keeping at-risk youth safe. They offer physical activity programs (e.g., basketball), tutoring, and life-skills programs (e.g., cooking classes), as well as lounge areas.


Scenario 1: Formative Evaluation 

The purpose of formative evaluation is to assess how the program is being implemented. Key evaluation questions should focus on enablers and barriers of implementation. For example: 

Awareness 

  • Are youth aware of the program?  

  • Are families aware of the program? 

Utilization 

  • What are program utilization rates? 

  • Why are youth attending, or not? 

  • What barriers exist in attending various programs? 

  • What parts of the program are most desirable? 

Implementation 

  • What is working well?  

  • What could be improved? 

I’ve found that looking to the field of implementation science can help to outline themes and even specific questions that might be relevant to formative evaluation. 


Scenario 2: Summative Evaluation 

After a year of operations, the program leaders believe it is time to assess the impact of the program. Your summative key evaluation questions should focus on outcomes: 

Utilization 

  • Who used the program? What is the profile of the youth who attended our program? 

Outcomes 

  • Did more youth graduate high school? Did fewer youth drop out of school? 

  • Did youth experience fewer disciplinary actions at school? In the community? 

  • Did youth learn new life skills? Do those life skills support them in gaining employment? 


Scenario 3: Using the RE-AIM framework

First a quick note: most of these scenarios are not mutually exclusive. RE-AIM is well suited to both formative and summative evaluation. For an overview of RE-AIM, read this. RE-AIM essentially comes with its own questions that only need to be adapted to your specific program. The key evaluation questions are based on the five focus areas of RE-AIM: 

  • Reach – Who is coming to the program? Who is not? 

  • Effectiveness – Is the program engaging at-risk youth? Are more youth finishing high school? 

  • Adoption – How does the program collaborate and integrate within the community? What partnerships enable the program’s success? How is awareness being spread? 

  • Implementation – How is the program being implemented? Is it implemented as intended? What barriers exist? 

  • Maintenance – Are the outcomes sustained over time? 


Scenario 4: Developmental Evaluation 

The program is interested in really understanding how it can adapt to the current context and continue to develop and grow to have a sustainable impact on the complex challenges facing at-risk youth. In the last year, the program hasn’t yet landed on a stable design; it is constantly adapting to new information and changing environments. The program leaders want to use Developmental Evaluation to support rapid growth and emergent innovations. Key evaluation questions should focus on emergent learning and opportunities for development: 

  • What are we learning that informs our development? 

  • How are we engaging the community? 

  • What evidence of effectiveness is useful to our development? 

  • What opportunities are emerging? 

For more great examples of DE evaluation questions, check out Developmental Evaluation Exemplars: Principles in Practice edited by Michael Quinn Patton, Kate McKegg and Nan Wehipeihana. 


Scenario 5: Utilization-focused evaluation 

Again, utilization-focused evaluation is not at all mutually exclusive with the previously described scenarios. In this scenario, you determine that the primary purpose of the evaluation is to give the staff something they can action (or use). In this case, the approach to developing the evaluation questions is key: engage your stakeholders early, often, and deliberately. What do they need to know? What decisions do they need to make? Key evaluation questions may include: 

  • How can the program attract more at-risk youth? 

  • How can the program retain/continue to engage at-risk youth? 

  • Which programs have the most demand? 

  • Which programs have the best return on investment? 


Scenario 6: Most Significant Change Evaluation 

Most Significant Change evaluation moves away from standard indicators and measurement. It uses storytelling to evaluate the success of a program through the lens of various stakeholders. For this evaluative approach, questions may be tailored to each audience: 

For program staff: 

  • What is the biggest success you have seen? 

For youth: 

  • What is the biggest change you have experienced since participating in this program? 


Scenario 7: Outcome Harvesting 

Outcome Harvesting is a participatory evaluation methodology. Instead of key evaluation questions, Outcome Harvesting focuses on Outcome Descriptions. You may ask questions of the stakeholders like: 

  • What have you worked on? (or participated in?) 

  • What was the significance of the work you did? 

  • What impact do you think it had? 


Scenario 8: Principles-Focused Evaluation 

Principles-focused evaluation aims to assess how an applied set of principles translates into action or behaviour. Your methodologies may be very similar to other evaluation approaches, but the questions may centre on a pre-defined list of principles. Let’s say the program had a list of core values that included: 

  • Person-centred supports. The program aims to offer support to each youth, based on individual, specific needs. 

  • Welcoming. The program has no exclusionary criteria. 

  • Fun! The program encourages staff and youth to have fun and try new things. 

Your evaluation questions may then look like: 

  • Do youth feel supported? 

  • What barriers exist to accessing the program? 

  • What is the experience of participating? 

Again, Michael Quinn Patton has an entire guide to Principles-Focused Evaluation that can help you develop the right evaluation questions. 


Hopefully, this has helped to showcase how some evaluation approaches and frameworks can shape your evaluation questions.

Program evaluation is certainly not limited to one dominant question. Most evaluations can assess several domains. The rule of thumb is 3–5 key evaluation questions, with nuanced sub-questions embedded if needed.

Check out our checklist that will help to ensure you’ve considered some important details in crafting your evaluation questions.



Written by cplysy · Categorized: evalacademy

Jan 14 2022

Book Review: Developmental Evaluation by Michael Quinn Patton

I recently finished reading Michael Quinn Patton’s Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (2011) cover-to-cover. I am working on a project using Developmental Evaluation (DE) and felt like I wanted to take a deeper dive into my understanding.

I decided to read it slowly, taking lots of notes, using my highlighter (until it ran out of fluid!), and writing notes to myself in the margins. I read only a handful of pages a day, and sometimes only a handful a week. It took me just over 3 months to finish, but I wasn’t in any rush. 

Patton has a conversational style of writing. It’s casual, full of puns, jokes, tangents, and many, many stories. It’s no surprise, then, that his teaching style is by example. He doesn’t so much tell you how to do DE as he shares real-life stories and examples. He’s offering you the privilege of experiencing his experiences through his storytelling. The book is not light on examples to draw from.

Though he tries his best to lay out how the narrative showcases DE, it’s tricky. DE is many different things depending on your context. In fact, I struggled to even find a concise definition of DE! Briefly, DE is about “bringing evaluative thinking and data to bear, as a project is conceptualized and developed” (page 3); “DE is designed… to nurture developmental, emergent, innovative, and transformative processes” (page 7). 

If he’s not teaching through example, he’s teaching through repetition. Which, let’s admit it, is annoying when you’re in it, but super helpful in the long run. It makes those key messages sticky! He has dedicated ample pages to helping the reader identify situations in which DE may apply and explaining what about those situations make them a good fit for DE. 

A quick rundown of the book goes like this: early chapters are dedicated to “What is DE?” and the role of the evaluator. It then moves into identifying situations to use DE – it doesn’t apply to every evaluation, so how does one know?

The book takes a bit of a detour (in my opinion) in Chapter 5 to talk at length about systems thinking, before going back to how DE applies to the lifecycle of complex programs. It’s not until Chapter 8 that we start to get a little more practical about how to do DE. But note that I say “a little.” One of the key points of the book is that there is no one way or one right way to do DE. It’s not about the methods or the approach so much as the principles and desire to develop that are foundational to DE. However, in Chapter 8, he does offer a practical list of inquiry frameworks that can be used in DE.

Chapter 9: “Bricolage” (meaning “something constructed or created from a diverse range of available things”) is like the junk drawer of DE information. It’s full of “other things to know about DE” before the book is wrapped up in Chapter 10: an excellent conclusion, summarizing places to use DE, core characteristics of DE and necessary characteristics of the evaluator. 

If you don’t have the time to read this text cover-to-cover, I might suggest Chapters 1 and 2 for a comprehensive overview, and Chapters 8, maybe 9, and 10 for more practical application tips. 

The tricky thing about DE is that one can’t really explain how to do it. It is not, itself, a methodology. Reading this book and hoping to gain a step-by-step how-to is a misstep. This book explains why DE is a necessary tool in the evaluator tool kit – why formative and summative evaluation are sometimes insufficient. The storytelling throughout the book is the point. DE is about going on a journey to arrive in a place that maybe you’d thought of, but likely didn’t predict. I found reading the book slowly to be helpful. Instead of blazing past the stories, I allowed myself time to think about them, to reflect, and I think that helped me to understand DE just a little bit more. 

It was great to have a project that I’m actively working on to apply some of the strategies, to shape my thinking, and to reinforce key messaging to my clients using Patton’s own words. “I need to be included in strategic planning sessions and discussions” and “my job is to provide you with useful information in as-close-to-possible real-time, and for you to use that information. Let me facilitate a discussion with the team about what this means for you” have become my regular catchphrases. 

I think the intended audience for the book is the evaluator, the user of DE, but it could also be a valuable read for those interested in social change, and those who don’t know how evaluation can help in their innovative, unplanned, complex situations. 


Have you read it? Let me know your thoughts! If you haven’t read it and you may use DE, I think this book is worth your time. 

For more on DE, check out our Six Lessons from Practicing DE.



Written by cplysy · Categorized: evalacademy

Jan 14 2022

Grab the cake, it’s time for a data party! Benefits of and how to run your own

So you’ve successfully gathered the data you need to evaluate your program. But how do you engage stakeholders and partners to ensure a thorough understanding of the results? A data party could be part of the answer!


What is a data party?

A data party is a gathering for participatory data analysis with program stakeholders. During a data party, stakeholders come together to interact with and interpret the data, increase their understanding of the findings, and provide input into the final conclusions and recommendations. This process often surfaces different views and perspectives on the results for discussion.  


Why host a data party?

A data party promotes a culture of participation and collaborative data interpretation. Often, evaluators collect, analyze, and interpret evaluation data with minimal involvement from program staff and service recipients, which can lead to gaps in interpretation and a missed opportunity to gain their insights on the main findings. A data party addresses this gap by involving program staff and service recipients in interpretation and sense-making.  

One way of increasing stakeholder and community participation is through collaborative data interpretation. In program evaluation, it is difficult to implement a truly participatory method, as most projects have a limited timeline and budget for evaluation. A data party provides an opportunity for engagement for groups that are often left out of discussions.  

Engaging stakeholders in data interpretation enhances the acceptance of the evaluation findings and recommendations. It also provides context and expertise that the evaluator may be missing and helps to ensure the evaluator’s interpretation and resulting recommendations are appropriate and feasible. A data party creates a platform to combine specific data points with personal experience and helps to better explain challenges in programs (e.g., where and why programs are falling short). The discussion during the event empowers stakeholders, provides a learning opportunity, and enhances engagement.  


How to successfully throw a data party

Like all other parties, each data party is unique. There are many ways to organize a successful data party depending on the project context, the type of available data and the stakeholder groups. We’ve included a few points below to get you started.

1. Purpose

Clearly stating the objectives of the data party will shape the event and will make planning easier. Identifying the purpose will determine the content of data presented, and the discussion questions.  

2. Invitation

Identify the different stakeholder groups you want to involve and the number of participants from each group. Having a clearly stated purpose can support this. Offer support so that all stakeholder groups, including program service recipients, can attend (e.g., cover travel costs and, if necessary, offer translation services).  

3. Venue

Depending on what’s convenient within the project context, a data party can be organized in person or online. If it is virtual, provide the dataset or summaries in advance to ensure all participants have access.  

4. Timing

The ideal time to organize a data party is after you have collected and analyzed all the evaluation data and before you draft the final report. When scheduling, consider the availability of all participants.  

A data party can take several hours depending on the complexity and size of the evaluation. Since understanding and sense making of the data takes a bit of time, it is important to allocate sufficient time to give participants a chance to review the data completely and get the dialogue flowing across diverse perspectives. In a small project, a data party may be just 1-2 hours. For more complex projects with lots of data, your party may take 3 hours or more. 

5. Mix it up

If you have a large group of participants (more than 8), use break-out rooms to organize them into smaller sub-groups (4-5). Mix stakeholder types in each sub-group to promote the exchange of different perspectives. Have an evaluation team member facilitate the discussion and take notes for each sub-group.    

6. Data

Including the right data is critical for the success of a data party, so select your content carefully while considering the purpose of the event (e.g., data that needs verifying, outliers, etc.). Provide accessible information and prepare the findings in a way that is easily understood by all stakeholder groups for meaningful participation. Use various approaches to share the main findings to keep participants engaged (e.g., posters – where participants walk around the room in groups and look at data; data placemat – a document showing quantitative and qualitative data using visuals, graphs, word clouds etc.).  

7. Probe

If stakeholders disagree, probe and inquire to gather as much context to clarify how they have understood the data and where they are coming from. The purpose is to co-create meaning and explore new ways of looking at things, not to gather support for existing interpretations.  

8. Discussion Questions

Prepare questions in advance and facilitate the discussion within each sub-group. Sample discussion questions include:  

  • What is the data telling you about (insert topic)? 

  • What stands out for you? Are there any surprises? 

  • What would you be interested to explore and/or discuss further?  

  • What is missing in the data that you thought you would see?  

  • What actions would you take as a result of these findings? 

If you have draft recommendations as a result of your analysis you would like to discuss with stakeholders, consider the following questions:  

  • What response do you think is required here?  

  • How viable are these recommendations? 

  • Which feel most doable? 

  • How might we best communicate these findings to decision-makers? 

9. Reporting

Don’t forget to write about your data party in your report – highlight your approach in the methods section, and in the results and recommendation sections don’t forget to credit ideas to stakeholders (you can use call-out boxes to distinguish findings).   

10. Fun

Try to make your data party fun and engaging. Some ideas include offering food (can we suggest cake?), setting an energetic tone by designing a cool invitation, starting the event with a short but fun icebreaker, and sharing the evaluation findings in a creative way (also maybe with cake!). 


Have you organized a data party? How did it go?  Let us know your experience in the comments.  



Written by cplysy · Categorized: evalacademy


Footer

Follow our Work

The easiest way to stay connected to our work is to join our newsletter. You’ll get updates on projects, learn about new events, and hear stories from those evaluators whom the field continues to actively exclude and erase.

Get Updates

Want to take further action or join a pod? Click here to learn more.

Copyright © 2026 · The May 13 Group · Log in
