
The May 13 Group



Oct 31 2022

Incentives for participation

I’ve been thinking recently about bias in evaluation methodology. There are so many different types of bias that it’s hard to keep them all straight, never mind mitigate them all!

One particular area has me interested: 

What bias does offering an incentive to participate introduce?

So, let’s broaden the question and explore incentives for participation generally (don’t worry, we’ll cover bias too).

As evaluators, one of our primary activities is data collection. Some of the most common methods include asking other people to share their thoughts or experiences in surveys, interviews, or focus groups. Sometimes it can be difficult to get people to agree to participate. It’s very enticing to offer an incentive to boost that response rate or sample size. 


Why offer an incentive?

Many argue, myself included, that those with lived experience are experts and knowledge keepers, and should be compensated for sharing their expertise. So, incentives can serve a moral purpose, offering compensation in exchange for expertise.

Incentives can also serve the function of attempting to mitigate non-response bias. Theoretically, by increasing response rates, your data will be that much more robust and representative of the population you are hoping to report on.


Is cash king?

The two most common forms of incentive are cash and gift cards. So, which should you use, when, and why?

When I was a graduate student, I ran a study that collected data from individuals experiencing challenges with substance use. The Research Ethics Board (REB) at the university would not let me offer cash as an incentive to participate. It never sat well with me. Ethically, who am I, as a researcher or evaluator, to decide whether a participant “deserves” cash or not? Who am I to declare that they can only spend their earnings wherever a gift card dictates? Besides, there’s nothing stopping anyone from re-selling gift cards for cash. It seemed like a misstep to me.

More recently I offered a cash incentive to participants for an interview. Many of those participants voluntarily expressed appreciation for the cash support. That project felt better for me.

So, morally, I am a believer that cash is appropriate. Functionally, research says that money is more motivating than other gifts. So, yes, cash may actually be king.


Low budget options

What happens if you don’t have the budget for cash or gift cards? There are still ways to offer incentives.  Alternatives could be:

  • Access to products or services (e.g., 1 month of free access to the services of the organization)

  • Company swag or merchandise 

  • Discounts or coupons (e.g., a 50% discount on your next program registration)

Or perhaps you don’t have the budget to offer everyone an incentive, but you can afford something. In these cases, I’ve used the “chance to win” approach. This is more common in surveys: by completing the survey, the participant is entered into a draw to win something, usually a gift, but there’s no reason this couldn’t also be cash.
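If you do run a draw, it helps to make the selection transparent and reproducible. Here is a minimal sketch of a fair, auditable draw (my own illustration, not from any particular survey platform; the respondent IDs and seed are hypothetical):

```python
import random

# Hypothetical respondent IDs collected for the prize draw
respondents = ["R-001", "R-002", "R-003", "R-004", "R-005"]

# Recording the seed makes the draw auditable: anyone with the same
# respondent list and seed can reproduce the same winner.
rng = random.Random(20221031)

winner = rng.choice(respondents)
print(f"Prize draw winner: {winner}")
```

Documenting how and when the winner will be selected also fits naturally with the informed consent practices discussed below.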

CAUTION: Make sure that your cash-alternative prizes (e.g., gift cards or products) don’t introduce new forms of bias! If you advertise that participants get gift cards to a specific store or restaurant, your sample may be biased toward people who frequent those businesses. If you advertise that participants get a discount on products or services, your sample may be biased toward people interested in those products or services.


What about coercion?

I said we’d talk about bias. When offering an incentive, the risk is that it introduces coercion: pressuring someone into doing something they would not otherwise choose to do. It’s a fine balance between motivating someone to spend their time sharing their experience with you in exchange for a token of appreciation, and inducing a feeling of being compelled because the incentive is so enticing. Coercion exists when a participant accepts otherwise intolerable risk in order to receive the incentive. I think this risk is real; however, some research has shown that the risk associated with participation is not influenced by incentives. This study found no relationship between the size of the risk and the size of the incentive.

It may be impossible to prove that an incentive isn’t coercive. The best way to protect participants from risk isn’t by making changes to your incentive, but by examining policy, protocol, and consent within your project or program. Probably the best mitigation strategy is informed consent, letting participants know about their rights to refuse, clearly outlining any risk to participation, and describing how participation does or does not affect their access to services.  Related to this, refusing access or service based on participation is definitely coercive!


What impact does offering an incentive have on your data?

One reason to offer an incentive is to increase the response rate. Increasing response rates should improve your data quality through more valid, representative data. However, there are negative impacts as well, including introducing profile bias based on the specific incentive (as cautioned above). Incentives could also change the responses themselves, either because participants feel compelled to respond positively to receive the incentive, or through more subtle effects, like improved mood. However, some research suggests that these influences are unlikely.

Ideally, we want people to participate because they care about the outcome, or they have a vested interest in the program or organization. In practice, motivations to participate fall into three primary categories: 1) altruism; 2) topical reasons, e.g., the desire to share a specific positive or negative experience, or interest in the topic; or 3) egoistic reasons, i.e., participating for the incentive. Most evaluations probably have a mix of these motivations. It can be difficult or impossible, and probably unnecessary, to tease them apart. Ensuring strength in your data collection tools and processes will help to mitigate any risk associated with egoistic participation.


What is participation worth?

A colleague of mine recently had to counsel a client on whether $100 for survey participation was coercive or not. While it’s not common to have to debate whether an incentive is too much, it’s still a good lesson in ethical practice. And it raises the question: what is participation worth?

There’s no one answer. In fact, there isn’t even a “right” answer. It depends on several factors including:

  • time burden: is it a 5-minute survey, or a 3-hour focus group?

  • how intense is the participation process?

  • how specific or niche, and therefore possibly difficult to reach, is your target participant group?

  • how specialized is the knowledge or experience the participant group brings? 

  • your project budget!

The higher the burden or intensity, or the more specialized the nature of participation, the higher the perceived value.

Wellesley Institute, out of Toronto, did some exploration in 2018 and found that in research (i.e., not evaluation), participants were paid, on average, $30 for an interview, $25 for a focus group, and $20 for a survey. In my own experience, this is pretty close in evaluation as well. Depending on budget and level of burden (i.e., time commitment) I’ve offered up to $50 for interviews. Wellesley suggests that a good rule of thumb is $25 per hour of participation.
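To make that rule of thumb concrete, here is a minimal sketch of how the $25-per-hour guideline plays out across a few hypothetical time burdens (the specific activities, the $20 minimum, and the rounding are my own illustrative assumptions, not from the Wellesley report):

```python
import math

RATE_PER_HOUR = 25   # Wellesley Institute's suggested rule of thumb (dollars per hour)
MINIMUM = 20         # hypothetical floor so very short activities still feel worthwhile

# Hypothetical time burdens, in hours, for some common data collection methods
burdens_hours = {
    "15-minute survey": 0.25,
    "45-minute interview": 0.75,
    "2-hour focus group": 2.0,
}

for method, hours in burdens_hours.items():
    suggested = hours * RATE_PER_HOUR
    rounded = max(MINIMUM, math.ceil(suggested / 5) * 5)  # round up to the nearest $5
    print(f"{method}: ${rounded}")
```

The point isn’t the code, of course, but the habit: start from the time you’re asking of participants, apply a defensible hourly value, and adjust for the other factors listed above (intensity, how hard the group is to reach, how specialized their knowledge is, and your budget).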


Appropriateness of incentives

While cash may be king, there are other considerations when deciding on incentives. I’ve recently been working on a project where paying for services may introduce trauma for some participants. I’ve worked closely with this project team to develop a plan where the participants are compensated for sharing their experiences, but without the transactional nature of giving an incentive after participation. Specifically, persons with lived experience are invited to participate in an interview; regardless of participation, they are also offered extra support to meet any emergent needs they may have. That extra support is not based on their participation, but on the invitation.

Vulnerable populations may respond to incentives differently, by being more likely to accept risk. Vulnerable populations are also frequent target populations for research and evaluations.  This is a great resource that offers more guidance on compensation for persons with lived experience, including: discussing compensation clearly and upfront, offering options, paying in cash, and offering to pay for additional costs (like child care).

Depending on your participant population there are groups and cultures where exchanging gifts is an act of respect. Consider reviewing your incentive practice from a lens of diversity and intersectionality.

Finally, there is some grey area about who is offered an incentive. For example, suppose you are evaluating a program, commissioned by the organization that runs the program. Part of your evaluation plan is to interview the program participants. I think we can safely agree that offering some form of incentive is justified. Part of the same evaluation plan is to interview the organization leaders and the facilitators of the program. Does it make sense to offer them an incentive? Or is it “part of their job” to participate? To pay them for participation is essentially to pay them out of their own budget. In my experience, I have differentiated between those participating in a professional capacity (i.e., not incentivizing staff) and those volunteering to participate (i.e., incentivizing program participants and persons with lived experience).

A key to ensuring your incentives are appropriate is an open dialogue with your project team, which hopefully includes a representative from your target population. Engage in your own reflective practice as an evaluator and seek input from your colleagues to identify risks you may have missed.


Other Considerations

  1. To offer an incentive, you may need to collect some contact information. With so much data collection being done virtually these days, many incentives are offered electronically, which requires email addresses. This has at least two implications:

    • Is your participant population likely to have an email address and the capacity to retrieve and access an e-transfer or electronic gift card?

    • Does collecting contact information introduce any ethical risk around identification or breach of confidentiality?

  2. In my recent experience, where most data collection happens virtually, bringing food and/or beverages is a moot point; but in the past, I have offered focus group participants snacks and drinks. These “perks” of participating are also an incentive and should be considered thoughtfully.

  3. Arguably outside the discussion of incentives specifically is reimbursement. Some participants may need to travel or enlist child or elder care to accommodate their participation. It is common practice to offer reimbursement for such expenses, which can take many forms, including cash, bus tickets, cab fare, mileage payment, etc.

  4. I’ve had discussions about whether an incentive is really an incentive, per se, or simply a “thank-you”. The latter can be true if the thank-you is given after participation without prior knowledge, or perhaps if the value is so low that it would in no way incentivize anyone. Similarly, some participants have regular hourly rates and can be offered an honorarium to compensate them directly for their time. The distinctions between a thank-you, an honorarium, and an incentive are very blurred. As always, sound data collection processes and informed consent are the best ways to mitigate risks!


There’s a lot of research about incentives, much more than I’ve touched on here. Hopefully, I’ve covered some of the important ground to guide you in deciding whether to offer an incentive and what form and value that incentive should be.

Remember to include the use of incentives in your evaluation contracts. The expense can be significant!




Written by cplysy · Categorized: evalacademy

Oct 31 2022

Using log frames: why they’re useful and how to make one

A Log Frame is a tool that has mainly been used for designing, monitoring, and evaluating international development projects. Using this tool is a way of structuring the main elements in a project or program, and highlighting the logical connections between them. 

In this article, we explain what a Log Frame is (spoiler: it’s not a logic model), why they can be useful tools for program planning and evaluation outside of international development, and how to make one of your own.


What is a Log Frame?

The Log Frame was developed back in 1969 for the U.S. Agency for International Development (USAID). It was developed as a planning tool for international development programs in the form of a matrix which presents a program’s main goal, activities, and what these activities are expected to lead to.

This visual approach helps you to think about the relationships between available resources, planned activities, and the desired changes or results. This structure also helps to explain the linkages between a program’s components.

It follows this main idea:

  1. The successful completion of these activities = the production of these outputs

  2. The production of these outputs = these changes through outcomes

  3. These changes through outcomes = the achievement of these objectives

  4. The achievement of these objectives = contributions to the larger goal

Therefore, this tool can help program planners to explain how they believe change will be realized. For evaluation purposes, the Log Frame also identifies the measures and indicators that will help to monitor the program’s anticipated results. 

Although there are a few models that focus on logical links between a program and its contribution to success, be careful not to confuse Log Frames with the following:

  • Results Frameworks: Log Frames are focused more on how you will get to your program’s goal. Results Frameworks focus more on explaining the program’s results.

  • Logic Models: although the two are very similar, a Log Frame is depicted using a matrix or table, whereas a Logic Model is shown using a flow chart.

  • Theory of Change (ToC): a ToC is also used for program design, whereas Log Frames are used mainly for evaluation. A ToC is more explanatory and suited to more complex initiatives, whereas Log Frames are descriptive and better placed for small to medium-sized projects. Log Frames don’t easily capture the how and why in the same way a ToC does.


When and why are Log Frames used?

Since their development for USAID, Log Frames have become a standard approach required by donors for grant applications. For international development programs, they have become a staple of project planning and evaluation. Due to their utility and accessibility, they’re now also being used for programs outside of the international development realm.

Most Log Frames are developed during program design and are updated throughout the program’s lifespan. Log Frames are not set in stone and should be flexible to the program’s needs and any changes happening on the ground. Developing a Log Frame at the program planning stages helps to involve the whole team and allow key stakeholders to provide ideas about how they see the program operating to reach its goals.

Log Frames can be used for the following:

  • To confirm the theory of why your program will result in the desired change. Log Frames can help you to see whether your program will really work and identify any flaws in the theory. It’s all about being logical!

  • To help you make strategic decisions that align with the logic of how your program will contribute to change and reach its goal. It can also help you to allocate resources to where they will be used most effectively.

  • To support transparency by clearly describing what your program aims to achieve and how. This can help your program to be more attractive to donors by clearly and logically explaining your ideas. It can also help you to make sure that all staff are on the same page.

  • To plan how you will monitor and evaluate your program, by including indicators of how you will measure change. This provides the groundwork for how an evaluation can measure the impact of your program. 


What are the key components of a Log Frame?

A Log Frame is commonly presented as a 4×4 table with 16 cells, though this can be modified if needed. Each row represents types of events that take place to help the program achieve its goals. Although the wording may differ slightly, these include:

  • Main Goal

  • Outputs

  • Outcomes

  • Activities

The first column describes how the program will reach its objective. The second and third columns summarise how the program’s achievements will be tracked through indicators (i.e., measurement of change) and sources of information (i.e., information needed that will allow indicators to be measured). The last column lists the assumptions or risk analysis. These are the factors outside of the program’s control that are necessary to ensure the program’s success.
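To make that layout concrete, a bare skeleton might look something like this (my own generic sketch, not the Tools4Dev example referenced below; the row order and exact labels vary between organizations):

```text
             | Description                       | Indicators | Sources of information | Assumptions / risks
Goal         | the broader change contributed to | ...        | ...                    | ...
Outcomes     | the changes expected from outputs | ...        | ...                    | ...
Outputs      | the products/services delivered   | ...        | ...                    | ...
Activities   | the work undertaken               | ...        | ...                    | ...
```

Read top to bottom, the first column should trace the chain described earlier: activities produce outputs, outputs lead to outcomes, and outcomes contribute to the goal.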


How do I develop a Log Frame?

It is important to develop a Log Frame with key stakeholders included in the program. Collaborating helps to make sure that it is not developed through a “top-down” approach and leads to better program planning by ensuring that everyone is on the same page in terms of shared objectives. Developing a Log Frame often works best through a mind-mapping session.

Firstly, it is important that there is a clearly established problem your program aims to address. You should also be very aware of the context within which your program will operate. It helps to think about setting your goals first and then working backwards to the grassroots to see what you need to do to reach these goals.

The main questions to consider in the development of your Log Frame include:

  • What will our program achieve?

  • What activities will we complete?

  • What resources will we need to do this?

  • What are the problems and challenges we might face along the way?

  • How will we measure the progress and outcomes of the program?

Example of a Log Frame (Adapted from Tools4Dev)

There can be more than one indicator and assumption listed for each level, but it is good to keep these manageable by using tools such as SWOT analysis (see our free template here), Stakeholder Analyses, and a Risk Matrix.

A Log Frame should also be tested to check the logic. A simple test is to ask the following:

  1. IF these activities are undertaken AND the assumptions hold true, THEN the intended outputs will be created

  2. IF these outputs are delivered AND the assumptions hold true, THEN the purpose will be achieved

  3. IF the purpose is achieved AND the assumptions hold true, THEN the intervention will have contributed to the overarching goal

(Adapted from DFID, 2011)

Once developed, make sure that the Log Frame doesn’t just sit on a shelf throughout program implementation. Review it frequently and use it to support decision-making, manage activities, assess progress, and keep stakeholders aware of plans to monitor progress.


Pros & Cons of using a Log Frame

Benefits of a Log Frame:

  • It ensures objectives are clear and measurable

  • It ensures concrete evidence for achievement is collected

  • Because risks and assumptions are made explicit, problems can be analyzed systematically

Weaknesses of a Log Frame:

  • It may cause rigidity in program management if it is not viewed as something that can be updated throughout program implementation

  • It is a “one size fits all approach” which does not always capture the complexity and context of a program


Have you worked with Log Frames before, or have questions? Comment on this article or connect with us on LinkedIn or Twitter!




Written by cplysy · Categorized: evalacademy

Oct 26 2022

Human Centered Report Design

Are you searching for a modern reporting strategy that leads to visual reports other human beings would actually appreciate? I wrote you an eBook.

Human Centered Report Design
A visual guide to the freshspectrum content & communications strategy.
You can download the new eBook here.

Here is what you’ll find inside the 35-page visual eBook. It’s a super easy picture book that should only take you a few minutes to get through.

The Why Behind it All

This is my digital audience theory. I walk through the usual stages of modern development based on my experiences with how most organizations currently approach this work. It’s a pretty inefficient process.

What to Prioritize

In this chapter I talk about the freshspectrum Content Design Strategy. The goal is to be more efficient and focus first on the reporting pieces that provide you with the most value for your time.

Feed the Media Monster

Knowing what to create is one thing. The next thing you need to know is how to share your reporting. In this chapter I walk through my basic communications strategy.

Personal Next Steps

Finally I close with a few suggested next steps if you’re looking to take your work further.

Grab the eBook

Download Here

Written by cplysy · Categorized: freshspectrum

Oct 25 2022

AEA Coffee Break: Five Core Processes for Enhancing the Quality of Qualitative Evaluation 

Presenter: Jennifer Jewiss

Date: 25 October 2022

The presenter had reflective questions for the audience, so I figured I’d put mine here, along with my notes from the webinar.

Reflective question 1: When I think of qualitative approaches to evaluation, the following words come to mind:

  • open
  • emergent
  • unexpected
  • nuance
  • deep
  • devalued by some
  • harder than people think

They put together a book on qualitative methods in evaluation with chapters authored by many evaluators, then identified themes of what makes up quality in qualitative inquiry:

  1. acknowledging who you are and what you bring to the work (you)
    • positionality
    • how do facets of your identity, history, etc. intersect
    • how does it enrich and limit your work as an evaluator?
      • what blind spots do you have? what learning do you need to do?
  2. building and maintaining trusting relationships (us)
    • throughout the entire evaluation
  3. employing sound & explicit methodology (process)
    • a wide array of things that can be done in qualitative inquiry
  4. staying true to the data (what we find)
    • hearing and representing the voices and perspectives of participants
    • be really conscious of what you might be bringing to bear on the data (our own priorities, biases, desires) – monitor that to “keep it in its proper place”
  5. fostering learning (what we learn)
    • helping everyone involved learn, including ourselves
    • open-ended learning helps people to surface that tacit knowledge
  • these things are not unique to qualitative
  • a cycle, not linear. They wanted a spiral/dynamic diagram, but the publisher suggested a cycle would be clearer

Reflective question 2: how might one use this model to inform qualitative evaluation practice?

  • presenter suggested that each of these elements could be a prompt for reflective writing or reflective art (drawing, collages, etc.)

Written by cplysy · Categorized: drbethsnow

Oct 25 2022

Insights on participant ownership in evaluation and learning

A starting list

Earlier this year we were asking the representative of a grassroots organization whether and how they would be interested in being involved in the learning and evaluation work of one of our foundation clients. They told us that what is truly real is land, water, nature, and animals, not professional titles. Their words stayed with me, reminding me how my evaluation and learning practice truly matters when it serves the communities closest to the issues and solutions that it seeks to understand, as it is these very communities who know what is “truly real” about their experiences, relationships, and lands.

In the past, this was not what I understood my practice was about, or how the field of evaluation conceived its mission. As many outstanding practitioners like Jara Dean-Coffey, Dr. Nicole Bowman, Dr. Geri L. Peak, Dr. Bagele Chilisa, and Dr. Donna M. Mertens have shown, overtly or under the cloak of pursuing the western-colonial construct of objectivity, evaluation has historically centered the learning needs and interests of those groups — usually public administrations and philanthropic funders — that held the most structural and positional power in the ecosystem of those involved in the process. I, too, for many years, understood the primary beneficiary of my work to be whoever had commissioned it. Bringing the lived experience and voices of those most impacted into learning and evaluation was a “nice-to-have”, as opposed to a “must-have”.

As the Equitable Evaluation Initiative has taught our field, reframing who should be the ultimate beneficiaries of our learning and evaluation work makes sense for everyone: for administrations and foundations that want to ensure the impact of their investment in non-profits, advocacy efforts, and communities, but even more so for those engaged in the evaluation, who are the very same communities and organizations that funders and their investments seek to support. An evaluation approach that defers to participants’ or grantees’ learning needs and ways of knowing provides communities with the power, tools, and resources they need to self-determine, tackle both internal and external challenges, and create stronger solutions for themselves. In doing so, it also truly fulfills the ultimate scope of public or philanthropic endeavors aimed at furthering equity and social justice by effectively supporting the communities and issues they seek to elevate. Understanding and truly believing in this now, as an evaluation and learning practitioner, my purpose for practicing participant ownership in projects is also driven by a desire to advocate for power to be in the hands of communities who have the most experience and knowledge about the very issues that affect them directly.

This may sound wonderful but, in actuality, how do we do this? It is hard to make learning and evaluation valuable and engaging to communities because for so long our practice has been far removed from their ways of working, learning, and knowing, taking away from their power to create the change they envisioned. Also, while evaluators and funders are increasingly seeking to engage and center partners, grantees, and clients in their evaluation processes, there is no wholesale approach to participant ownership. The characteristics and interests of the communities we wish to defer to, the nature and quality of relationships between parties involved, and the degree to which evaluators and funders are willing to cede power are among the many factors that influence the design and implementation of projects that authentically center the communities closest to the issues of focus.

So, while it would be disingenuous to say that we are not implementing and learning at the same time, I think it is crucial that we share what we are learning about what it takes to practice participant ownership, not only to further this practice but also to seek the feedback and invite the accountability of those organizations and communities whose learning we seek to facilitate. It is this desire to be in community on this journey that leads me to share and invite your feedback on a few lessons we, at Innovation Network, have learned across the pilot projects we are working on:

1. Participant ownership, from start to finish, necessitates that we ask and remind ourselves of why we are doing this and why this work is important. So, at the start of each project, we have a discussion among ourselves and with our clients about the purpose and values that we want to underpin it (thank you, Heather Krause and Katie Fox, for teaching us the importance of doing so). We use the principles distilled in these conversations as a compass throughout a process that may often get complicated as we challenge preconceived notions of power within evaluation and learning. Further, remembering that we do this to shift power to communities to drive involvement that is meaningful to them safeguards us from perpetuating cosmetic or performative approaches and processes.

2. Also, at the outset and throughout an engagement, it is crucial to set clear expectations for those engaged in the learning or evaluation project. Most often our client is a foundation, while the groups running the organization or implementing the program or activities that we seek to evaluate and learn from are the evaluation participants. To ensure expectations are clear and transparent from the beginning, we have an open conversation with the foundation client about the importance of participant ownership and the need to promote equity within these processes. We also establish to what extent the client is willing to engage with participants, how, and why. From the participants’ end, this also requires doing outreach to understand from them what type of engagement would be meaningful and how barriers to participation can be mitigated. To facilitate these conversations we often use Rosa Gonzales’ Spectrum of Community Engagement to Ownership, which helps to create clarity about what different levels of participant ownership entail.

3. Once expectations have been set, we make conscious efforts to create spaces conducive to participation. To foster participation we must lean into discomforts that, while difficult, often result in the most powerful insights. These include acknowledging that deferring to participants in decision-making for the project entails relinquishing control and that we are no longer the owner but rather the facilitator of the project. We are working to become comfortable with not having a concrete path laid out from the start and not being the ones setting project plans or agendas. Rather, whether it’s in a meeting or the overall design and implementation of a project, we hold containers and listen deeply, so that we can truly shape our work according to the needs and interests of participants. When needed, we also help the client buy into this by providing more education and encouraging them to be excited about the actions they are taking to promote social justice and equity in the process. Oftentimes this process also requires some empathic negotiation skills and the willingness to enter a space of discomfort in the evaluator-client relationship, one in which we ourselves experience, and are often challenged by, the power dynamics.

We are, once again, conscious that we are learning and growing in our methods continuously as we strive to align our practices with our values. You can learn more about how we at Innovation Network are powered by our values, here. We know that we don’t have all the answers and what we have learned builds on the efforts of those that came before us. So please share with us, what is your team learning about what it takes to authentically foster participant ownership?


Insights on participant ownership in evaluation and learning was originally published in InnovationNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.

Written by cplysy · Categorized: innovationnet

