
The May 13 Group

the next day for evaluation


cplysy

May 12 2020

Resources for working and facilitating remotely

 

A highly relevant post on his blog “Agilefacile” from Ewen LeBorgne, a guru of knowledge management and facilitation and, above all, an excellent person: “A meta look at resources to work and facilitate online more effectively”.

In short, he points out that online facilitation actually follows many of the same principles as face-to-face facilitation. However, we need to keep some practical, logistical, design, and emotional elements in mind:

  • Whether we need synchronous conversations (at the same time) or asynchronous ones (at different moments), depending on geographic distribution.
  • How best to divide the time.
  • How to break the ice or “read people’s emotions.”

He cites several sources, but his star recommendation is the “Online meeting resources toolkit for facilitators”, designed during the coronavirus pandemic thanks to the “Facilitators for pandemic response” group, where we can dig into related concepts both basic and advanced: moving entirely to online work, virtual teams, and even facilitators’ artifacts for Zoom meetings during the COVID-19 response. Let’s facilitate remote facilitation in these uncertain times.

Written by cplysy · Categorized: TripleAD

May 12 2020

Visualizing COVID-19 Data Responsibly: An Interview with Amanda Makulec

In April, I sat down with Amanda Makulec, one of my longtime data and evaluation friends, to learn about visualizing COVID-19 responsibly.

Amanda is the Data Visualization Capability Lead at Excella; a co-organizer for Dataviz DC; and the Operations Director for the Data Visualization Society (DVS).

She’s also one of the most knowledgeable people around when it comes to visualizing COVID-19 data.

Listen to Our Conversation Here

What’s Inside: Amanda’s Career Path

“Ten years ago, when I finished graduate school, I couldn’t have guessed that this would be my full-time job and I could wear as many hats as I do in the data viz world. Now I think it’s really exciting because there are a lot of paths into data viz,” Amanda told me.

Excella

Amanda is currently the Data Visualization Capability Lead at Excella, a technology consulting firm. She works with organizations from the CDC to Fortune 500 companies.

“It’s been a really transformational learning experience for me to see not just different ways data and tech get used, but the different ways projects get managed. I’ve learned a lot more about Agile processes and software development and thinking about how some of those same practices actually apply when we’re building different analytical applications like dashboards,” Amanda said.


Previously, she worked in global public health.

DataViz DC

Amanda is also involved with DataViz DC. They focus on bringing people together from various disciplines, from graphic designers to software developers. They host monthly meet-ups, which have included hands-on workshops, guest speakers, and career panels. It’s a great way to connect in the DC area: there are over 8,000 members!


Data Visualization Society

Amanda is the Operations Director for the Data Visualization Society.

They are focused on bringing people together across the world and serve as a global, professional organization for dataviz professionals at any level. They have over 13,000 members from more than 130 countries around the world. They communicate through Slack, email and Fireside Chats with panels of dataviz experts.

Amanda told me that the Data Visualization Society is “a great space for actually bringing together different disciplines. Instead of focusing on one tool or tech stack, we instead said, ‘How do we bring together the people that are individually engaged in Tableau groups? Power BI groups? R groups? Graphic designers? How do we bring all those people together and create a space for people early in their career or looking to change careers and do data visualization as their full-time job and create a space for them to grow and learn and share best practices?’”

She continued, “While we use different tools and technologies in different data viz disciplines and roles, I think there are so many cross-cutting best practices that once you learn and master them in one tool, it’s really easy to think about how they are used and applied in other spaces. DVS tries to create that central space to bring people together and really advance the data viz discipline as a practice profession.”


Visualizing COVID-19 Data Responsibly

Next, I asked Amanda to share tips for visualizing COVID-19 data responsibly.

She wrote “Ten Considerations Before You Create Another Chart About COVID-19” on the Data Visualization Society’s blog in March 2020, and I also wanted to know whether her guidance had evolved or shifted since writing the article.


Amanda said, “There are different points at which we make decisions about how and what we visualize, and then how we publish and share. What are we creating and doing more for our own exploration and understanding? And what are we doing so that we can share it with the public to help others make sense of information?”

Amanda went through some of the top considerations, from data quality, to data collection, to remembering the people behind the data, to color choices.

COVID-19 Data Quality Issues

Amanda said, “Consider that even though the datasets are very accessible right now, that does not mean they are high-quality data.”

“There are so many issues and challenges with the different ways that COVID-19 cases are counted in different states or countries. Are we including only cases that have been lab confirmed with a swab test that came back positive? Or are we also including probable cases or diagnostically confirmed cases?

How those cases are counted is very different in some states and countries. It’s really hard to make these apples-to-apples comparisons, as easy as it might seem since the data is so accessible.”

Understanding COVID-19 Data Collection

Amanda said, “If you’re going to dabble in COVID-19 data in some way, try to really understand how the data gets collected so that you have a firm understanding of what that process looks like and why there might be issues with the accuracy, timeliness or completeness of those datasets.”

She continued, “Make sure you understand how the data got collected. Just because it’s there as a nice shiny, analyzable table doesn’t mean it’s not something you should try to understand the underbelly of.

In your choices around how you analyze and design certain visualizations, be mindful that the data that we have is really incomplete. The case data is really a function of how many tests are being done. And while we have more certainty about the death counts, deaths are also a function of cases.

So we have to think about the fact that we have a lot of cases not represented in that data. We’re seeing that come out more and more in some of the retroactive reporting being done in countries that are farther along in their epidemic curves.

Remember that there’s a lot of uncertainty in the data and it’s not uncertainty that we can represent visually well. We can’t always quantify that uncertainty.

Make sure that you’re considering the ways in which your visualizations could be misinterpreted or misused.”
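Amanda’s point that case counts track testing volume lends itself to a toy calculation. The sketch below uses invented numbers (the regions, test volumes, and prevalence are all hypothetical) to show how two places with the same underlying prevalence can report very different case counts simply because one runs more tests.

```python
# Toy illustration with invented numbers: reported cases depend on testing
# volume. Two hypothetical regions share the same true prevalence, but
# Region B runs four times as many tests and so "finds" four times as
# many cases.

TRUE_PREVALENCE = 0.05  # assume 5% of each population is currently infected

regions = {
    "Region A": {"tests_run": 10_000},
    "Region B": {"tests_run": 40_000},
}

for name, region in regions.items():
    # Simplification: tests are random draws from the population, so the
    # expected number of positives is tests * prevalence.
    reported_cases = int(region["tests_run"] * TRUE_PREVALENCE)
    positivity = reported_cases / region["tests_run"]
    print(f"{name}: {reported_cases:,} reported cases "
          f"({positivity:.1%} positivity from {region['tests_run']:,} tests)")
```

Both regions print the same 5.0% positivity, yet Region B reports four times as many cases; the fourfold gap reflects testing effort, not disease burden.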

Be Careful with Comparisons and Reference Points

Amanda said, “One of the common comparisons I’ve seen is comparing COVID-19 to the flu.

We’ve seen that in the media and early on even public health folks were trying to make sense of this disease by comparing it to the flu.

When we look at how we collect data on the flu in the US, we have routine, structured reporting systems for that data with better quality data. We have a disease that comes in seasonally and we understand what that seasonality looks like. We don’t know that about COVID-19.

So comparing cases in March for COVID-19 and the flu [doesn’t make sense]. We’re in a very different point of that epidemic curve for COVID-19. It really isn’t an apples to apples comparison. We’d need a whole year of COVID-19 data to start to make that comparison.

So be cautious in how you start to try to create those reference points which can help us enable understanding, but also can mislead.”

Remembering the People Inside the COVID-19 Datasets

Amanda said, “Remember that every single case and every single death represents a person. As we visualize and think about health-related data, the fact that each of those cases and each of those deaths represents a person and their story makes it really important to be thoughtful and mindful about how we’re presenting that information.”

COVID-19 Data Visualization Color Choices

Amanda said, “Those red, big bubble maps can be hard to interpret, but the color choice also creates such a visceral, angry, sad response. I hope we can be thoughtful in the ways our visualizations can create emotional responses especially when visualizing such sensitive data.”
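As a small illustration of that advice (a sketch only, using invented coordinates and counts rather than real COVID-19 data), the snippet below renders the same bubble map twice in matplotlib: once as the familiar saturated red bubbles and once with a calmer sequential palette.

```python
# Sketch with invented data: the same bubble map in two color treatments.
# A muted sequential colormap conveys magnitude without the visceral alarm
# of large, saturated red bubbles.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(42)
lon = rng.uniform(-120, -75, 30)       # hypothetical longitudes
lat = rng.uniform(26, 48, 30)          # hypothetical latitudes
cases = rng.integers(100, 50_000, 30)  # hypothetical case counts

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharex=True, sharey=True)

# Left: the alarming treatment. Bubble area scales linearly with the count.
axes[0].scatter(lon, lat, s=cases / 200, color="red", alpha=0.6)
axes[0].set_title("Saturated red bubbles")

# Right: the same sizes, colored with a sequential palette instead.
sc = axes[1].scatter(lon, lat, s=cases / 200, c=cases, cmap="Blues", alpha=0.8)
axes[1].set_title("Sequential palette")
fig.colorbar(sc, ax=axes[1], label="cases (hypothetical)")

plt.tight_layout()
plt.show()
```

Whatever palette you choose, keeping the size encoding honest (area proportional to the count) matters as much as the hue.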

Connect with Amanda Makulec

  • Data Visualization Society: DataVisualizationSociety.com/Join
  • Slack Workspace: DataVizSociety.Slack.com
  • Twitter: @ABMakulec

Written by cplysy · Categorized: depictdatastudio

May 12 2020

Sampling: What does “representative” mean during and after coronavirus?

Since the reality of coronavirus set in back in March, our RK&A team has been having a lot of conversations about study design.  Museum closures and social distancing have greatly impacted the way we do our work as evaluators.  They have affected our clients, project timelines, data collection methods, and access to study respondents in one of our most frequent settings—the museum floor.  Sampling has always been one of the top questions we are asked about, and it is something we very carefully consider when designing our studies, no matter if the study is small or large (see, for example, our previous posts on sampling transparency and sample sizes for qualitative and quantitative studies). One question I have been wrestling with lately in light of coronavirus is the idea of capturing a “representative sample”—that is, a sample that shares the same characteristics of the museum’s visiting population (or whatever population we are seeking for a particular study).

Often, when we recruit visitors for a study at a museum, we use a random sampling approach.  The data collector imagines an invisible line on the floor, intercepts the first visitor to cross over that line, and asks them to participate in the study.  After completing the interview or questionnaire with the visitor, the data collector returns to their recruitment location and selects the very next person to cross their imaginary line.  The rationale for random sampling is that it is more likely to result in a sample that mirrors the museum’s visiting population (for more on sampling protocols, see Amanda’s post here). We use additional measures like comparing observable characteristics (i.e., estimated age and group composition) of visitors who decline to participate (our refusal sample) in the study with the sample characteristics to understand potential gaps in our sample.  All of this information can be placed within the context of a museum’s known visitor demographics (from audience research or other sources) to understand whether a study sample is representative of the museum’s visiting population.
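As a rough sketch of that gap check (the category counts and benchmark shares below are invented for illustration), one might tabulate an observable characteristic across the achieved sample, the refusal sample, and the museum’s known demographics:

```python
# Sketch of a refusal-gap check with invented counts: compare the share of
# each (estimated) age group in the achieved sample, the refusal sample,
# and the museum's known visitor demographics.

def shares(counts):
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

sample   = {"under 18": 12, "18-39": 45, "40-59": 30, "60+": 13}  # completed
refusals = {"under 18": 5,  "18-39": 20, "40-59": 25, "60+": 30}  # declined
known    = {"under 18": 0.15, "18-39": 0.40, "40-59": 0.28, "60+": 0.17}

s, r = shares(sample), shares(refusals)
print(f"{'group':<10}{'sample':>9}{'refused':>9}{'known':>9}{'gap':>8}")
for group in known:
    gap = s[group] - known[group]
    print(f"{group:<10}{s[group]:>9.1%}{r[group]:>9.1%}{known[group]:>9.1%}{gap:>+8.1%}")
```

Where refusals cluster (here, the older groups) and the sample share drifts from the known benchmark, the analysis can flag that segment as potentially under-represented.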

Under pre-pandemic circumstances, this is all well and good. But now, with the uncertainty of what visitation will look like over the coming months and potentially years as museums phase into reopening with limits on visitor capacity and new social distancing measures, I wonder what a “representative sample” means now. Are we aiming for our study samples to be representative of the visitor population before the coronavirus? I’m not sure how useful that is, considering visitation will probably not return to what it was pre-pandemic, at least not for quite a while. In addition to reduced numbers, it would not be surprising to see demographic shifts in visitation in response to the pandemic (e.g., fewer vulnerable groups, like adults over 60).

Figure: two circles compare a museum’s visiting population before and after coronavirus, with many people in the pre-coronavirus circle and far fewer in the post-coronavirus one.

We always strive for rigor in our evaluations, and responsiveness and transparency in study design are equally important as we learn to adapt to our ever-changing world.  As Heather Krause of Towards Data Science wrote in a recent blog post, “The goal is to retain as much value in the data you currently have and analyze and understand it in ways that make sense now.”  I don’t yet have an answer for what a “representative sample” will mean for our upcoming studies, and I think the answer may vary based on the museum, exhibition, or program.  Still, I can be responsive to both the circumstances of the pandemic and the needs of our clients by having frank conversations about sampling and what information will be most meaningful and actionable.  And, I can make decisions and approaches clear in our evaluation plan and reporting so that we are all on the same page about what the data does and does not represent.  I look forward to working toward a clearer understanding of what “representative” means for sampling in the coming months.

The post Sampling: What does “representative” mean during and after coronavirus? appeared first on RK&A.

Written by cplysy · Categorized: rka

May 11 2020

How to Write Good Evaluation Questions

 

Evaluators ask questions. All the time. We ask questions in focus groups, we write questions in surveys, we pose questions to our datasets. But the questions that really drive our work are evaluation questions.

What are evaluation questions?


Evaluation questions focus data collection. They are what our stakeholders need to answer. When they have the answer to these questions, they can tell their stories. As we’ve written, evaluation questions are the high-level questions an evaluation is designed to answer.

Knowing the definition of “evaluation question” is one thing; writing them is another. It can be challenging to write questions at just the right level: questions that will guide the choice of methods and the development of data collection tools, and that will actually yield the information to satisfy stakeholders.

Keep these points in mind, and you’ll be off to a good start.

Evaluation questions are informed by the evaluation purpose

Why are you doing this evaluation? Is it to support new policy development? Is it to inform a decision about spreading or contracting a program? Whatever the reason, that purpose will guide the evaluation question development. For example, an evaluation that is intended to demonstrate accountability will likely have an evaluation question around meeting the funder’s requirements.

Write evaluation questions with your stakeholders

Stakeholder engagement is key throughout evaluation projects. Working closely with program leaders and operational staff will ensure that the questions you develop together are the right questions. There is no point in writing what you think are great questions if they don’t meet stakeholders’ needs. Group writing is hard—in your evaluation planning session, don’t worry about getting every word perfect. Make sure you understand the concept that is important, then finesse the language on your own.

Stay open

Evaluation questions should be open-ended (except when they don’t need to be… see our post on why the answer to so many evaluation methodology questions is “it depends”). Open-ended questions give room for a range of possible answers.

  • Close-ended question: Did participants enjoy the program?

  • Open-ended question: How do participants characterize their experience?

See how that second question gives room for a range of responses beyond “yes” and “no”? This second question brings the opportunity for nuanced data that yields deeper insights; that depth is what makes a good evaluation question.

Evaluation questions are not survey questions

Survey questions are very focused, while evaluation questions are broader. Multiple survey questions may be used to answer an evaluation question. If the question you write feels like something you’ve answered before in a survey, you haven’t written an evaluation question. Climb up a level and rewrite.

  • Survey question: How satisfied are you with the timeliness of the email from your support worker?

  • Evaluation question: To what extent are services delivered in a timely fashion?

The data from that survey question can be one of the indicators you use to answer the evaluation question.

Evaluation questions may have multiple indicators

Strong evaluations employ triangulation; that is, multiple views on the same question. One evaluation question may be answered by a combination of two, three or more indicators, relying on multiple methods of data collection.

  • Evaluation question: To what extent is the program having a positive impact on families?

  • Indicators:

    • Parents’ self-reported ability to attend training classes

    • Youth mental health scores

    • Changes in number of hours spent together each week

Together, this suite of indicators provides more reliable insight into the program’s impact than one indicator alone.
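One lightweight way to keep this question-to-indicator structure explicit is to record it as plain data in the evaluation plan. The sketch below is hypothetical; the field names and the methods attached to each indicator are illustrative assumptions, not part of the original example.

```python
# Hypothetical sketch: an evaluation plan captured as plain data, pairing a
# high-level evaluation question with the indicators (and assumed data
# collection methods) that will triangulate its answer.

evaluation_plan = [
    {
        "question": ("To what extent is the program having a positive "
                     "impact on families?"),
        "indicators": [
            {"indicator": "Parents' self-reported ability to attend "
                          "training classes",
             "method": "survey"},
            {"indicator": "Youth mental health scores",
             "method": "standardized assessment"},
            {"indicator": "Change in weekly hours spent together",
             "method": "activity log"},
        ],
    },
]

for eq in evaluation_plan:
    print(eq["question"])
    for ind in eq["indicators"]:
        print(f"  - {ind['indicator']} ({ind['method']})")
```

Keeping the plan in this shape makes it easy to spot questions that rest on a single indicator, and it can later double as the skeleton of a reporting template.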

How many evaluation questions?

Well, it depends. For a very comprehensive evaluation of a major initiative, more evaluation questions may be required. You may need fewer questions for a simpler project. A general guideline is between five and seven evaluation questions, but it’s not uncommon to see between three and ten. Remember, every evaluation project is different—the main goal is to ensure that stakeholders’ information needs are met, but we must also consider feasibility. If your capacity to collect data is limited, whether you rely on existing resources or hire external help, you will likely need to stick with fewer evaluation questions.

Themes can help

Evaluation questions can be clustered in themes that are relevant to the purpose of the evaluation and the nature of the initiative. For example, my evaluation firm has worked on several healthcare projects that rely on a quality matrix for health. That matrix provides a common language and shared concepts across healthcare partners, so we use themes like accessibility and appropriateness to guide our evaluation questions. If your organization has a strategic plan or shared goals, those may be key to guiding your evaluation question development. Or look to other frameworks, like the RE-AIM framework, for inspiration on evaluation question themes.

Edit, edit, edit, then step back

Language matters when writing good evaluation questions. Changing just one word can mean the difference between clarity and ambiguity. Use the writing process that works for you, whether that’s working on paper, consulting with a colleague, or staring each word down until you find the absolute perfect alternative. If you’re a true evaluation nerd, there is immense satisfaction in writing the very best question you can. But remember, perfection is not always possible or practical, and just like that last literature review you wrote, sometimes you just need to call it done and move on.

How do you know when you have it right?

You’ll know you have near-perfect evaluation questions when:

  • Together, their answers will tell a high-level story of the initiative being evaluated

  • You have between three and ten questions

  • The questions cannot be answered by a simple yes/no, or by a number

  • Indicators and methods are already suggesting themselves

  • Your stakeholders (and you!) breathe a sigh of relief when they read them

  • For an extra round of review, try this checklist from the CDC 

What comes next?

After you’ve crafted fantastic evaluation questions, you’ll move on to selecting indicators and data collection methods. In doing so, you may need to revisit your evaluation questions and make minor modifications, or even add or remove questions altogether.




 

Written by cplysy · Categorized: evalacademy

May 08 2020

Arts-Based Data Collection Techniques

 

Recently, Jennica Nichols and Maya Lefkowich (of AND Implementation) hosted a Canadian Evaluation Society (CES) webinar about using art as a data collection method. The webinar was fun and interactive and included (you guessed it) hands-on examples of how to use arts-based techniques and how to modify them for an online audience. Without rehashing the entire webinar (CES members can re-watch it here: Using Art in Creative Data Collection and Evaluation), I wanted to share the most salient points and how we, here at Eval Academy and Three Hive Consulting, have put them to use and will continue to do so.

Why

Arts-based techniques can be used to get audiences to open up or explore topics that can be hard to put into words. Jennica and Maya suggested using arts-based methods for exploring relational meaning. In other words, they are important when: a) exploring concepts in context is important; b) connections need to be made between two distinct ideas (e.g., how the social determinants of health may mediate a program’s impacts); or c) the emotions or experiences being explored are hard to put into words. They also noted that arts-based methods allow for many ways of knowing, moving beyond text and words to think about how things are connected in space or time or can be represented in a tactile manner.

Arts-based methods also allow participants to make more spontaneous or out-of-the-box associations between ideas. They push us out of our comfort zones and encourage different forms of expression.

What

Arts-based data collection techniques are inherently participatory methods, involving the artist in the creation and interpretation of data. They are inductive techniques, meaning that they are meant to be used for exploring ideas or describing concepts. These techniques start with observations (the art!) then work with the participants to understand the meanings and conclusions that can be drawn from the art.

There are 5 main arts-based data collection techniques:

  1. Literary (e.g. poetry)

  2. Performative (e.g. interpretive dance, theatre)

  3. Visual (e.g. pictures, collage)

  4. Audiovisual (e.g. film, video)

  5. Multimedia (e.g. graphic novel, art installation)

Multimethod techniques make use of two or more arts-based methods.

In the webinar, Maya and Jennica stressed that arts-based data collection techniques are neither art nor art therapy, as they aim to answer specific questions and take the information outside of the data collection space to inform decisions. In arts-based data collection techniques, the description or explanation of the art is used as data, rather than the art itself.

How

Like other data collection techniques, arts-based methods require consent from participants. Because participants won’t know what they will create, or how they will feel about it being used, until they’ve made it, Maya and Jennica suggest obtaining consent before the data collection begins and again once it is completed. They also suggest creating a clear consent checklist that gives participants options for how their art is used, including a discussion of if/how the participant wants to be credited for what they’ve created (authorship). Check out Eval Academy’s information sheets and consent forms in our tools section – they can be downloaded and modified for this! Because the narrative behind the art is what is being evaluated, it is important to present the description alongside the art.

Before diving into creating art, it is important to develop a solid foundation with the participants. Give participants permission to be silly and creative. Maya and Jennica suggested setting the tone from the beginning of the session: tell a joke and set appropriate boundaries. Discuss the purpose of the session, let participants know what is expected of them, and explain how the art they create will be used. Before starting the activity, provide participants with clear prompts or questions to focus on when creating their art, and set appropriate amounts of time for each of the activities. Too much time can cause participants to become stressed about adding more details or filling the space. Consider providing visual or auditory reminders of the prompt or question during the session to re-focus participants.

Once participants have created their artwork, the important work begins. Remember, when using arts-based data collection, the narrative or description behind the art is the data we are seeking to collect. Follow up with interviews or focus groups to understand the meaning or outcomes that came from the process of creating. Ask questions to illuminate underlying connections, assumptions, values, or ideas. 

After the process is complete, revisit consent with each of the participants. Check if and how they are ok with you sharing their art and the narrative that goes along with it. Be clear about how and where the information will be shared.

Organization Tips:

  • Make sure you have all the tools you need before your session

  • Don’t assume that participants have access to items such as cameras, markers, glue, or other supplies

  • Prepare your questions and test the timing of your activities in advance

Tips for conducting online sessions:

  • Consider sending the questions and supplies in advance of the session. Mail participants packages or provide the log-in information for online platforms so that people can become familiar with them in advance

  • Build in extra time to orient people to using the online software.

  • Use Zoom polls or break out rooms to encourage reflection

  • Consider whether to follow up in groups or one-on-one; much like deciding between a focus group and an interview, the nature of the data you wish to collect should drive your decision


How We’ve Used Art-Based Techniques

Here at Eval Academy and Three Hive Consulting, we are big fans of using creative approaches in our evaluation practice. Our core values include being creative in our work, both to engage our clients and evaluation participants, and as a way to generate new ideas.

To get un-stuck and re-imagine the evaluation experience for our clients.

Recently, we opened our annual team retreat with an activity designed to help us channel our inner four-year-old to get silly and lower our creative inhibitions. Next, we doodled our way through a visioning exercise to help us re-imagine the evaluation experience for our clients. While a small flood prevented us from completing the second half of the exercise, we gained a pretty clear picture of the barriers to evaluation our clients might face.

As part of focus-groups and workshops.

We’re also big fans of using the At My Best strengths cards, which have pictures on the front and a single word on the back, for photo-elicitation techniques. We’ve used them in workshops to get participants to open up, to help jumpstart the outcome mapping process with program funders, and with health partners to develop an approach for complex patients. Interestingly, in our experience with these cards, in every session at least one person can’t work from the photo alone and must flip the card over to use the words.

To understand the impacts of a program on children.

We used visual data collection methods in a feedback session with children and youth, allowing participants to give visual and verbal feedback. Children rotated through a series of flip charts with a question posed at the top. Facilitators helped the children interpret the questions and clarified the meanings of the images. One big thing we learned at these sessions was to use washable markers with children.

How we might be using these methods in the future

In a previous article, we explored using virtual reality tools to augment evaluation, including in data collection (check out: Visual Storytelling Through Augmented and Virtual Reality), and I know we are just waiting for the right project to try this out.

As we may not be meeting in person any time soon, we can use arts-based data collection techniques to better understand our participants’ experiences. Literary, visual, and audiovisual methods can create a starting point to capture and understand participants’ stories in today’s virtual world. Because we can’t be in person to build rapport, arts-based techniques can create a common and safe starting point to explore ideas with participants.

And finally, in a recent team-building effort, we took a few hours off to play Pictionary and some other online drawing games as a team. After the fun and games, we noticed a few of us were in creative, out-of-the-box mindsets and had a pile of new ideas. Using arts-based techniques to break out of our routines and explore new ways to approach our evaluation practice is a trick we will continue to use with our team.


Connect with AND Implementation on social media:

Instagram: @andimplementation | Twitter: @and_implement | Website: andimplementation.ca




 

Written by cplysy · Categorized: evalacademy


