
The May 13 Group



Aug 02 2021

Defining evidence use

This blog post is a modified segment of my dissertation, done under the supervision of Dr. Tiffany Berry at Claremont Graduate University. You can read the full dissertation on the Open Science Framework here. The rest of the blog posts in this series on my dissertation are linked below:

  1. Factors that promote use: A conceptual framework
  2. Defining evidence use
  3. Overview of my dissertation study: sample, recruitment, & measures
  4. Question 1: To what extent are interpersonal and research factors related to use? 
  5. Question 2: To what extent do interpersonal factors relate to use beyond research factors?
  6. Question 3: How do researchers and evaluators differ in use, interpersonal factors, and research factors? 

Broadly speaking, researchers and evaluators are all interested in promoting evidence use. However, use is a multifaceted concept, and there are a multitude of frameworks defining different types of use (Nutley et al., 2007). Researchers are often most interested in instrumental use, or using evidence directly to make changes in programming or in decision-making (Nutley et al., 2007). However, Weiss (1979) recognized early on that instrumental use did not occur frequently and instead proposed conceptual use, or using evidence to influence one’s thinking or attitudes about the problem (Weiss & Bucuvalas, 1980). Some researchers have promoted the idea of a continuum of use that runs from conceptual uses (e.g., increased awareness, knowledge, and understanding) to more instrumental uses (e.g., shifts in attitudes, perceptions, and ideas; changes in practice and policy; Nutley et al., 2007).

With the realization that use is not limited to the findings of research or evaluation, process use—the behavioral (e.g., instrumental) or cognitive (e.g., conceptual) changes that result from participating in a research or evaluation endeavor (Patton, 1997)—was promoted as another type of use, one that occurs through participation in the study itself rather than through its findings. Both findings use and process use can lead to instrumental and conceptual use (Alkin & King, 2016, 2017).

Other types of use also exist. For instance, there are longer-term, more incremental uses such as influence (i.e., research or evaluation producing effects through intangible or indirect means; Kirkhart, 2000) and enlightenment (i.e., the gradual percolation of ideas into the organization over a longer time period than conceptual use; Weiss, 1977). There are primarily political uses such as symbolic use (i.e., commissioning research or evaluation with no intent to apply the results; Leviton & Hughes, 1981), legitimative use (i.e., using evidence to justify decisions already made about the organization; Patton, 2008), persuasive use (i.e., using evidence to support one’s position; Leviton & Hughes, 1981), and imposed use (i.e., a higher-level authority mandates some form of evidence use; Weiss, Murphy-Graham, & Birkeland, 2005). Evidence can also go unused (i.e., nonuse), be used inappropriately (i.e., misuse), or be used in unintended ways (i.e., unintended use) (Patton, 2008).

Rather than focusing on “promoting evidence use,” I encourage you to think critically about what type of use you want to happen and how you will promote that particular type of use. For example, in my dissertation study I found that relationship quality was related to instrumental, conceptual, and process use, while relevance was related only to instrumental and conceptual use, and commitment to use was related only to process use. This suggests that specific factors relate to specific types of use.

Written by Dana Wanzer · Categorized: danawanzer

Jul 30 2021

How to Deliver Bad Results

You’ve just designed, implemented, and analyzed a client satisfaction survey. Trouble is: clients are not satisfied. Uh oh. No one likes to deliver bad news.

However, there are some strategies that will help not only to soften the blow, but to make this a rewarding experience.


1. Don’t try to sugarcoat it.

In many cases, organizations hire evaluators so that they can improve, or so they can learn whether what they are doing is hitting the mark. If there’s room for improvement, this is exactly what they’ve hired you for. Hiding bad results is not helping your client. While they may not be thrilled to learn about low satisfaction scores, or that their outcomes are not being achieved, they do need to know so that they can make changes to their services. It may be tempting to frame data in a positive light:

“27% of clients thought your services were the best!”

But think about what would be most helpful to your client; being more direct is usually the better strategy:

“73% of clients saw room for improvement.”
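The reframed figure is simply the complement of the original. A trivial check in Python, using the hypothetical 27% from the example above (the numbers are illustrative, not real survey data):

```python
# Reframe a "top box" satisfaction score as its complement.
top_box_pct = 27  # hypothetical: % of clients who rated services "the best"
room_for_improvement_pct = 100 - top_box_pct

print(f"{top_box_pct}% of clients thought your services were the best!")
print(f"{room_for_improvement_pct}% of clients saw room for improvement.")
```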

2. Don’t wait.

No one wants to be blindsided. Chances are, as you begin data analysis you will start to see clues of negative results. By the time you are writing the report and working on data visualizations, you certainly know what the results of the evaluation are. Don’t wait to share them!

Reading about poor outcomes for the first time in a final PDF report may make your client feel defensive. Take away the element of surprise by dropping some hints in your regular meetings; allow them to get past an emotional response.

This will also give you an opportunity to assess their early reactions and think about how to frame the findings when you deliver your report. Better yet, engage your client in the report-writing process – the stakeholders likely have insights to share. Take a participatory approach to the data analysis and writing to maximize engagement and learning.

3. Offer some action-oriented steps to help your client improve. 

So, the clientele said they see some gaps in services, or perhaps the service delivery model is not achieving the desired outcomes. Hopefully, your evaluation has uncovered some of the “why,” which can help you to frame some recommendations or lessons learned. What has your evaluation uncovered that will help your client to address those gaps? This is where the value of your evaluation really comes through – the “so what?”. Better Evaluation offers some great tips for drafting recommendations.

4. Supplement with other results.

Perhaps clients weren’t loving the services, and there is room for improvement, but you likely have some positive results as well. Perhaps staff are really satisfied in their work, or part of the process is going really well. This approach is a nice balance to the bad news, without sugarcoating. Just because you’ve uncovered some bad results doesn’t mean that’s all you need to focus on. Highlight the great outcomes as well.

5. Prepare for lots of questions, and allow time for discussion.

Poor evaluation results, while disappointing, are also interesting. Your client will likely want to know more. Their questions may lead to follow-up evaluations; at the very least, they will likely spark some rich discussion.

Be prepared to facilitate these discussions when you present the findings. Consider adding some additional time to your presentation, and think about whether other individuals could or should be invited. Hearing from everyone may uncover new insights or point to some next steps for the organization.

Clients may ask a lot of questions. Be careful to answer according to the stories told by the data without bringing in your own biases. You may even want to plan for a follow-up discussion.


One of our primary roles as evaluators is to help programs and organizations understand whether they are effective. Are they achieving the results they intended? If they already knew the answer with certainty, they likely wouldn’t have hired you.

So, chances are, from time to time you will come across findings that clients are not satisfied, or a program is not performing well. As an evaluator, you can view those findings as a real opportunity.

While delivering the not-so-great news may not be the best part of your job, helping that group to uncover new insights and make changes based on your findings is very rewarding.

To learn more about applying evaluation in practice, check out more of our articles, or connect with us over on Twitter (@EvalAcademy) or LinkedIn.



Written by cplysy · Categorized: evalacademy

Jul 30 2021

Preparing for Another Uncertain School Year

I think we had all hoped that after two school years disrupted by COVID-19, the 2021-2022 school year could be a return to “normal.” 

Yet with the Delta variant surging throughout the country and children under 12 still unvaccinated, it’s becoming clear that educators and families are in for another year of uncertainty.

The theme of this summer’s blog posts has been to figure out the meaning behind the data we collect and track – what’s the “so what?”

We’re all wishing that the taste of “normalcy” that we’ve gotten in early summer could stick around –  we’re exhausted from the constant worry and grief that the pandemic has wrought on us as individuals, families, and communities. 

But if there’s one good thing to come from how long we’ve been dealing with the pandemic, it’s this: it’s not an unknown anymore. We have data to help us navigate it.

This is true on the medical and public health sides of the issue, and it’s also true for education. 

Let’s think about all of the information we already have about how to handle whatever comes our way this school year:



  • We know how to keep our schools clean and help students follow good hygiene practices.
  • We know what challenges our families have been facing and how we can engage and support them, even remotely.
  • If we need to, we know how to provide high-quality remote and hybrid learning opportunities for students to keep them engaged.

Plus, we’ve got a ton of documentation to remind us of what we know if we feel overwhelmed. 

We’ve got logs of when and how the building was cleaned, we’ve got past family surveys and chat transcripts/recordings from family Zoom meetings to tell us what their needs and concerns were, and we’ve got lesson plans and other records of the teaching and learning strategies we tried and found successful.

… and many other sources of data and information too!

As we turn our calendars from July to August and get ready to embrace the new year, we can look back on our data from the past two years and figure out what the “so what” was. 

Now is the time to dig deep with the information we do have – even as we await more information about what this year will hold. 

As we look back at the “what” from the past two years, think about the impact that each of those things had:

Which routines and practices helped students comply with safety measures and made the community feel safer? Which really didn’t work?

What were our families’ greatest needs, and how did we work to meet them? Did our supports and referrals help mitigate the challenges they faced?

What did our students really enjoy about remote or hybrid learning? How can we replicate those practices? What strategies really did not work and should be avoided this year?

What did we do in a remote or hybrid environment that was really effective for family engagement and that we might want to keep when we’re in person full-time (e.g., Zoom PTO meetings and conferences)?

With some review of your data and reflection on what those data points actually meant for students, families, and staff, I hope that you’ll have a sense of reassurance.

Even though we are about to enter another uncertain year, our data can teach us a lot about how prepared we actually are.

Written by cplysy · Categorized: engagewithdata


Jul 29 2021

IRB 101: Risks to Research Participants

In my first post in this IRB 101 series, I described what IRBs are and why they exist.  IRBs exist to protect research participants.  In this second post, I focus on risks to research participants. 

[Image: risk meter pointing to minimal risk]

What are risks to research participants?

Risk is the probability that harm will occur.  All research involves some level of risk to research participants (never say a study has no risk to research participants!).  Most visitor studies research and evaluation can be classified as minimal risk.  Minimal risk is defined in the Common Rule as: “probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests.” Assessing whether your study is of minimal risk requires you to: (a) think about both the probability as well as the magnitude of harm; and (b) consider the probability and magnitude of harm against what a research participant may encounter in everyday life.

What types of risks might research participants face?

The Belmont Report defines types of potential risks to help researchers assess the risk level of their proposed study. Potential risk types include physical, psychological, social, legal, and economic. Descriptions of these types of risks are below, along with a small code sketch after the list. For visitor studies research and evaluation, risks typically fall within psychological and social risks, but it is important to be aware of all types of risks.

  • Psychological risks can include anxiety, sadness, regret and emotional distress, among others. Psychological risks exist in many different types of research in addition to behavioral studies.
  • Social risks exist whenever there is the possibility that participating in research or the revelation of data collected by investigators in the course of the research, if disclosed to individuals or entities outside of the research, could negatively impact others’ perceptions of the participant. Social risks can range from jeopardizing the individual’s reputation and social standing, to placing the individual at risk of political or social reprisals.
  • Physical risks may include pain, injury, and impairment of a sense such as touch or sight. These risks may be brief or extended, temporary or permanent, occur during participation in the research or arise after.
  • Legal risks include the exposure of activities of a research subject that could reasonably place the subjects at risk of criminal or civil liability.
  • Economic risks may exist if, for example, knowledge of one’s participation in research could make it difficult for a participant to retain or find a job, or if disclosure of research data leads to increased insurance premiums or loss of insurance.
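If your team screens several studies a season, it can help to write this taxonomy down somewhere structured. Below is a minimal Python sketch, entirely my own illustration: the `RiskType` enum mirrors the Belmont categories above, while `StudyRiskProfile` and the example survey are hypothetical, not anything defined by the Belmont Report or the Common Rule.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class RiskType(Enum):
    """The five potential risk types named in the Belmont Report."""
    PHYSICAL = auto()
    PSYCHOLOGICAL = auto()
    SOCIAL = auto()
    LEGAL = auto()
    ECONOMIC = auto()

@dataclass
class StudyRiskProfile:
    """Hypothetical record for screening a study's risks before IRB review."""
    title: str
    risks: set[RiskType] = field(default_factory=set)

    def typical_for_visitor_studies(self) -> bool:
        # Visitor studies risks typically fall within psychological and social.
        return self.risks <= {RiskType.PSYCHOLOGICAL, RiskType.SOCIAL}

# Example: a hypothetical exit survey touching on emotionally heavy content.
survey = StudyRiskProfile(
    title="Exit survey on a memorial exhibition",
    risks={RiskType.PSYCHOLOGICAL},
)
print(survey.typical_for_visitor_studies())  # True: psychological/social only
```

Encoding the categories once means the same vocabulary gets reused consistently across protocols and reports, which is useful when more than one person drafts IRB submissions.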

How do you weigh risks to research participants against study benefits?

The IRB’s official function is to weigh the risks to research participants against the benefits of the study. There is no clear formula to do so.  Risk assessment requires multiple perspectives and interpretations. That is why IRBs include multiple people with different expertise on a review panel. 

As researchers and evaluators, we are always aiming to minimize risks to research participants.  It is our duty under the principles of the Belmont Report.  From my perspective, visitor studies research and evaluation should always be of minimal risk to participants.  I don’t mean to diminish the importance of museum work. But, I cannot envision a study benefit that would rationalize researchers proposing a study of more than minimal risk.


Written by cplysy · Categorized: rka
