
evalacademy

Mar 27 2024

Evaluation Sustainability Plans: Why you need one for your next evaluation project.


Evaluations are important tools for assessing the effectiveness, efficiency, and impact of programs and initiatives. They provide valuable insights that inform decision-making, resource allocation, and strategic planning. You can discover more reasons why you should evaluate by exploring our infographic here.


However, as organizations increasingly recognize the value of evaluations, ensuring the sustainability of these efforts becomes essential for maintaining long-term impact and continuous improvement. Sustainability matters for evaluations led by both internal and external evaluators because it ensures that the knowledge gained and the processes developed can be integrated into the organization’s culture and operations. This is especially true for long-term initiatives, where outcomes, and the measurement of those outcomes, need to persist beyond the initial evaluation. This is where an Evaluation Sustainability Plan steps in.

What is an Evaluation Sustainability Plan?

An Evaluation Sustainability Plan is a framework created by evaluators to maintain the effectiveness and relevance of evaluation efforts beyond evaluation project completion. It consists of strategies and processes to preserve the evaluation over time, acting as a roadmap for organizations to sustain and benefit from the evaluation for ongoing learning, improvement, and accountability. It aligns continued evaluation efforts with organizational goals, fostering a culture of learning and adaptation. An Evaluation Sustainability Plan enables organizations to integrate evaluation insights into their operations, facilitating continuous growth.


Why Should You Have an Evaluation Sustainability Plan?

1. It Can Support Continuous Improvement

By facilitating ongoing assessment and refinement of programs and initiatives, an Evaluation Sustainability Plan helps organizations adapt to changing needs, optimize program delivery, and achieve better outcomes over time.

  • A key component of continuous improvement outlined in an Evaluation Sustainability Plan is the commitment to ongoing evaluation activities. By continuing to conduct evaluations at regular intervals, organizations can systematically assess the effectiveness and impact of their programs and initiatives.

  • By outlining post-evaluation activities and metrics for success in an Evaluation Sustainability Plan, organizations can track progress over time and assess the sustainability of outcomes.

  • An Evaluation Sustainability Plan emphasizes the importance of using evidence to guide organizational strategy, program planning, and resource allocation. By synthesizing evaluation findings into actionable recommendations, organizations can make informed decisions.

2. It Highlights Strategies for Internal Capacity Building

An Evaluation Sustainability Plan highlights the importance of building internal evaluation capacity within organizations to ensure the ongoing effectiveness and sustainability of evaluation efforts. To enhance that capacity, it outlines ways to integrate evaluation practices into organizational processes and decision-making frameworks.

  • An Evaluation Sustainability Plan presents a unique opportunity for organizations to leverage the expertise of the evaluator. Evaluators bring a wealth of knowledge, experience, and best practices to the table, gained from working on various evaluation projects across different contexts. Through collaboration and knowledge sharing, organizations can tap into this expertise to enhance their own evaluation capacity. Additionally, evaluators may develop customized tools and processes during the evaluation that can be valuable assets for the organization moving forward. By incorporating these tools into their own evaluation practices, organizations can streamline their evaluation processes and ensure consistency and rigour in their approach.

  • An Evaluation Sustainability Plan advocates for allocating staff time and resources to prioritize evaluation tasks. By designating individuals or teams responsible for leading and coordinating evaluation efforts, organizations can ensure that evaluation activities receive the support needed to be conducted effectively.

  • An Evaluation Sustainability Plan emphasizes the importance of providing training and skill development to staff involved in evaluation activities. By offering ongoing training and skill development opportunities on topics such as evaluation planning, methods, ethics, data collection and analysis, and reporting and utilization of evaluation findings, organizations can support the competencies and confidence of staff conducting evaluations internally. Take a look at our latest article on 12 Training Ideas Beyond Conventional Evaluation.

 

3. It Can Support Transparent Communication and Accountability

In any evaluation, transparent communication and accountability are key elements that drive trust, facilitate learning, and ensure the utilization of evaluation findings. An Evaluation Sustainability Plan underscores the importance of these principles to guide organizations in fostering a culture of transparency, accountability, and continuous learning.

  • An Evaluation Sustainability Plan highlights the necessity of sharing evaluation findings regularly with partners. Transparent communication of evaluation results ensures that partners and other relevant parties are kept informed of program progress, outcomes, and any emerging insights.

  • Engaging partners throughout the evaluation process is crucial for promoting buy-in, transparency, and accountability. An Evaluation Sustainability Plan outlines strategies for soliciting feedback, analyzing data, and incorporating lessons learned into program planning. This includes mechanisms such as establishing advisory committees and conducting partner consultations to gather diverse perspectives. By actively involving partners, organizations can demonstrate a commitment to responsiveness and ensure evaluation efforts remain relevant and credible.

  • An Evaluation Sustainability Plan guides communication and dissemination strategies to ensure that evaluation findings reach and resonate with intended audiences. By employing targeted and tailored communication strategies, organizations can maximize the impact and reach of evaluation findings, fostering a culture of transparency, accountability, and continuous learning.

 

4. It Presents a Plan for Adapting the Evaluation to Changing Contexts

In today’s rapidly evolving landscape, organizations must remain agile and responsive to shifting needs, priorities, and contextual factors. An Evaluation Sustainability Plan can serve as a strategic tool to equip organizations with the flexibility to adapt their evaluation approaches, methodologies, and indicators to effectively address emerging challenges and opportunities.

  • An Evaluation Sustainability Plan ensures that evaluation efforts remain aligned with organizational goals and priorities, even as contexts evolve. By encouraging updates to logic models, the evaluation purpose, scope, and questions, and refining data collection processes, an Evaluation Sustainability Plan can ensure that evaluation activities continue to generate actionable insights that contribute to achieving organizational objectives.


Key Components of an Evaluation Sustainability Plan:

An Evaluation Sustainability Plan should be tailored to each organization and program, meaning its components may vary depending on the specific evaluation it is designed to support. However, we have identified several key components below that we have found useful:

  • State Clear Objectives and Outcomes:

    • Define the purpose, scope, and intended outcomes of the sustainability plan, aligning them with the organization’s mission, goals, and priorities.

  • Include a Partner Engagement Strategy:

    • Identify key partners, their roles, and responsibilities in sustaining evaluation efforts.

    • Develop suggested strategies for ongoing engagement, communication, and collaboration throughout the evaluation lifecycle.

  • Suggest Capacity Building Initiatives:

    • Assess and address organizational capacity gaps related to evaluation planning, implementation, and utilization.

    • If possible, provide training, resources, and support to enhance staff skills and competencies in evaluation methodologies and techniques.

    • Suggest the allocation of dedicated staff time and resources for evaluation tasks.

  • Present a Knowledge Management Framework:

    • Establish suggested mechanisms for capturing, documenting, and disseminating evaluation findings, lessons learned, and best practices.

  • Establish Monitoring and Evaluation Standards:

    • Define indicators, benchmarks, and monitoring mechanisms to track the sustainability of outcomes over time.

    • Suggest periodic reviews and assessments of these measurements to ensure they remain aligned with organizational goals, adjusting strategies as needed.

  • Consider Resource Allocation and Sustainability Financing:

    • Suggest the allocation of financial and staff resources to support evaluation activities, including investing in necessary tools, software, and professional development opportunities for staff involved in implementing the evaluation.


Key Considerations when Developing an Evaluation Sustainability Plan:

  • Bias:

    • Bias may arise if the individuals responsible for implementing the program are also involved in collecting, analyzing, or interpreting the evaluation data. This can skew the results and compromise the integrity of the evaluation. To minimize the likelihood of bias, programs can promote transparency throughout the evaluation process by clearly documenting roles and decision-making procedures, involve partners from diverse backgrounds and perspectives, and provide bias awareness training to evaluation team members.

  • Data Quality and Integrity:

    • Poor data quality, such as incomplete or inaccurate data, can undermine the credibility and reliability of evaluation findings. Ensuring all staff involved in evaluation processes are trained in the principles of data quality is important to maintain the integrity of the evaluation. Implementing quality assurance measures, such as regular data audits and validation checks to assess data accuracy, completeness, and consistency, can also help to identify and correct any errors or discrepancies in the data.
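To make these validation checks concrete, here is a minimal sketch of what automated data-quality checks might look like in Python. The file name and columns (respondent_id, satisfaction_score, response_date) are hypothetical stand-ins used only for illustration, not part of any particular evaluation toolkit.

```python
# A minimal sketch of automated data-quality checks for evaluation data.
# The CSV file and its columns are hypothetical examples.
import pandas as pd

def run_data_quality_checks(path: str) -> list[str]:
    """Return a list of human-readable data-quality issues found in a survey file."""
    df = pd.read_csv(path, parse_dates=["response_date"])
    issues = []

    # Completeness: flag columns with missing values.
    for column, missing in df.isna().sum().items():
        if missing > 0:
            issues.append(f"{column}: {missing} missing value(s)")

    # Accuracy: flag satisfaction scores outside the expected 1-5 range.
    scores = df["satisfaction_score"].dropna()
    out_of_range = (~scores.between(1, 5)).sum()
    if out_of_range > 0:
        issues.append(f"satisfaction_score: {out_of_range} value(s) outside 1-5")

    # Consistency: flag duplicate respondent IDs.
    duplicates = df["respondent_id"].duplicated().sum()
    if duplicates > 0:
        issues.append(f"respondent_id: {duplicates} duplicate ID(s)")

    return issues

if __name__ == "__main__":
    for issue in run_data_quality_checks("survey_responses.csv"):
        print(issue)
```

Running checks like these on a schedule, rather than once at the end of data collection, is one way to operationalize the "regular data audits" described above.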


I believe that, where possible, an Evaluation Sustainability Plan should be a critical component of any evaluation project, ensuring that the investments made in evaluations yield lasting benefits and impact. By strategically planning for sustainability from the outset, organizations can maximize the value of evaluations, enhance organizational learning, and drive continuous improvement.


Have you created an Evaluation Sustainability Plan before? Let us know the key components you’ve included in the comments below!

Written by cplysy · Categorized: evalacademy

Mar 25 2024

(Mostly Free) Resources for Learning How to Code Qualitative Data


What is coding for qualitative data?

If you’ve found your way to this article, you probably have an idea of what coding for qualitative data looks like. Hint: It doesn’t require knowing Python, C++ or any other programming language.

Qualitative coding is a systematic process of labelling and organizing qualitative data. It is a way to analyze non-numerical data like interview and focus group transcripts, photographs, and field notes. 

In my current role as an evaluator, I usually use coding as a way to identify common and interesting themes from interviews I’ve conducted. These themes are then examined as a whole, to see what kind of narrative insights they can provide about the program that is being evaluated. If you want to know more about how to analyze qualitative data thematically, check out our Eval Academy article: Interpreting themes from qualitative data: thematic analysis.


How do you do it?

There are a lot of different ways to code qualitative data. In the past, I’ve used paper and a pen, sticky notes, MS Word, Excel, and software programs like NVivo and Dedoose. I usually code my data thematically and inductively, creating codes as I go through the data, because it helps me uncover unexpected insights. It forces me to keep an open mind about what the data could be saying – even if it differs from my preconceptions. I’ve also done deductive coding, starting with a set of agreed-upon codes. I find that this method helps me look for the answers to my evaluation questions more efficiently and can be useful when I don’t have a lot of time for analysis.
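And if you do happen to know a little Python (despite my hint above that you don’t need it), the bookkeeping behind coding can be pictured as a simple data structure. The sketch below is a toy illustration with invented excerpts, codes, and themes – not a prescribed method.

```python
# A toy illustration of the bookkeeping behind qualitative coding.
# All excerpts, codes, and themes here are invented for illustration.
from collections import defaultdict

# Hypothetical excerpts from an interview transcript.
excerpts = [
    "I never know who to call when the system goes down.",
    "The training session finally made the referral process click for me.",
    "Honestly, I just ask my coworker because she's been here longer.",
]

# Inductive coding: the codebook grows as you read the data.
# (In deductive coding, this dictionary would be fixed up front.)
codes = defaultdict(list)
codes["unclear support channels"].append(excerpts[0])
codes["training as turning point"].append(excerpts[1])
codes["informal peer support"].append(excerpts[2])

# Later, related codes are grouped into broader themes.
themes = {
    "reliance on informal knowledge": ["unclear support channels", "informal peer support"],
    "value of formal training": ["training as turning point"],
}

# Review each theme alongside its supporting excerpts.
for theme, grouped_codes in themes.items():
    print(theme)
    for code in grouped_codes:
        for excerpt in codes[code]:
            print(f"  [{code}] {excerpt}")
```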

When I first started learning how to code qualitatively, it was a bit overwhelming because there are so many ways to do it. I really struggled to understand that there isn’t necessarily one “right way” of coding in evaluation. There are some general rules for rigour, but beyond that, everyone seems to have their own preferred style. This flexibility differs somewhat from academic settings, where you are often required to pick an established qualitative method with a specific underpinning theory, stick with it, and document your steps for review.


In my struggle to find answers about how to code qualitative data, I came across some resources that helped me learn a bit more about the theories behind qualitative coding and how others do it. These resources continue to help me to refine my coding processes. I hope you find them useful as well!

And if you still have questions after exploring my resource list, I recommend asking other evaluators and researchers about their methods or taking a course with a practical component.

You’re also welcome to leave a question or a comment on our Eval Academy LinkedIn!


(Mostly Free) Resources List for Learning Qualitative Coding

Most of these resources include some kind of step-by-step process for coding qualitative data. Some of them also include information on the different types of qualitative coding and when to use them.


Courses:

Delve’s Free Qualitative Data Analysis Course (mostly free)

Delve has created a free course on qualitative coding to promote their paid coding platform. This is a short, self-paced course suitable for beginners. I like that it guides you through coding for the first time with short, practical assignments. You don’t need to use their software to complete the course, but you can trial it for free if you want to use it for your learning.  

Qualitative Research Methods: Data coding and Analysis (mostly free)

MITx Online offers free access to this self-paced course, which is a shortened version of a semester-long course taught by Professor Susan Silbey of MIT. The paid version allows you to participate in the assignments and receive a certificate upon completion. The free version still gives you access to all the content, as long as you sign in with an MITx Online account. I found this course to be a really good study of step-by-step qualitative coding within an academic setting. Professor Silbey does a great job of explaining and demonstrating things like how to do line-by-line coding, create a codebook, and refine your codes.


Videos:

Qualitative Data Analysis 101 Tutorial: 6 Analysis Methods + Examples (free)

This is a 25-minute informational video by Grad Coach on YouTube about the different types of qualitative analysis. This is NOT a step-by-step guide to coding, but it does explain 6 different types of qualitative methods and when to use them. This is a good video to watch to learn about what kinds of qualitative methods exist outside of the ever-popular thematic analysis.

Qualitative Coding Tutorial: How to Code Qualitative Data for Analysis (4 Steps + Examples) (free)

This 27-minute YouTube video by Grad Coach explains the finer details of coding qualitatively. It walks through the steps of coding, discusses different methods you can use at each stage, and ends with some practical tips for coding your data.

Qualitative data analysis – Coding Tutorial – Initial Codes | “From Codes to Themes” episode 1 (free)

This video is the first part of a YouTube series on how to code by Dr. Kriukow. He’s sort of a qualitative data analysis influencer – if that is a thing. In this 23-minute video, he explains his thought process while he demonstrates how to code a transcript. If you ever wanted to know how other people code, this one is a good demonstration to watch. If you like the way that he codes, I think he has a paid course on Udemy. I’ve never taken it before, so I didn’t include it in this list.

Qualitative coding and thematic analysis in Microsoft Word (free)

MS Word is probably one of the most accessible ways to code because it is an app that most people already have on their computers. Dr. Kriukow shows you how to use MS Word to code and thematically analyze your data in this 28-minute YouTube video.

Ten Top Tips in Qualitative Data Analysis for New Researchers – Jude Spiers (free)

The International Institute for Qualitative Methodology hosted a master class webinar series, and this 1-hour lecture was part of it. This video is less practical than the other resources listed here, but it does offer some useful tips and tricks for how to code qualitative data in an academic setting.


Articles:

Interpreting themes from qualitative data: thematic analysis (free)

This Eval Academy article is one of our most popular. It’s a thorough guide on how to do thematic analysis, including a useful illustration on interpreting themes. Most of the other resources in this list focus on coding, but this one focuses on what you do AFTER coding all your data.

Using thematic analysis in psychology (mostly free)

The authors of this academic article on thematic analysis are well-known researchers of qualitative data methods. If you’re looking for some peer-reviewed literature on how to conduct thematic analysis, you should definitely read this one. It includes step-by-step explanations on how to conduct thematic analysis. This article may be paywalled on some sites.

The Essential Guide to Coding Qualitative Data (free)

Alongside their free course, Delve also has a free guide to coding qualitative data. It discusses a range of useful topics such as how to transcribe interviews, tools for coding qualitative data, and a step-by-step process for coding.

Analyzing Qualitative Data (free)

Learning for Action wrote this step-by-step article with tips for how to analyze qualitative data. Their example uses Excel to code the qualitative data, so it is a useful guide for that specific type of coding method.


What are your favourite resources for learning how to code qualitative data? Let us know in the comments below!

Written by cplysy · Categorized: evalacademy

Mar 25 2024

The Frustration of Searching for Evaluation Content


If you are an evaluator, or someone interested in learning more about evaluation, you might have experienced the frustration of searching for evaluation-related content online. The word evaluation is used in so many different contexts and industries, that it can be hard to filter out the noise and find the information you’re looking for.

The term ‘evaluation’ is a common denominator in numerous fields. From education to healthcare, from technology to arts, ‘evaluation’ is a universal process of assessing, measuring, and judging. It’s a critical component of decision-making, improvement, and advancement in any industry. In education, we evaluate students’ performance. In healthcare, we evaluate patients’ health conditions. In technology, we evaluate software performance. In arts, we evaluate the aesthetic appeal of a piece. The list goes on. The omnipresence of ‘evaluation’ in our vocabulary is a testament to its importance, but it also creates a significant challenge when searching for specific ‘evaluation’ content online.

I recently started following #evaluation on some social media accounts. Here’s some of the content I get that’s not at all about the professional development or learning opportunities I’d hoped for:

[screenshots of unrelated #evaluation posts]

The problem gets even worse if you try to do some job searching. Lots of people have “evaluation” of something in their job description.

And if you happen to be an evaluation consultant looking for RFPs for evaluation contracts it can be nearly impossible. Nearly every RFP in every field makes mention of the “RFP evaluation process”, thus wiping out your keyword search term in one fell swoop!


How can we improve the searchability of evaluation content?

As evaluators, we can do our part to improve the searchability of evaluation content online. Here are some suggestions:

  • Use specific and descriptive keywords when creating or sharing evaluation content, such as “program evaluation” or “outcome evaluation”

  • Use hashtags, tags, or categories to label and organize evaluation content on social media platforms, blogs, or websites

  • Join and follow online communities, networks, or groups that are dedicated to evaluation, such as your local Evaluation Association

  • Subscribe to newsletters, podcasts, or blogs that feature evaluation content (hint: have you signed up for our Newsletter? Scroll to the bottom of this page to sign up!)

  • Attend webinars and workshops by evaluation experts

  • Share evaluation content that you find useful, interesting, or relevant with your colleagues, friends, or followers

  • Follow your favourite evaluators on social media!


I’ve spent a bit of time in my career explaining to clients that evaluation is not the same as research, and yet sometimes searching for “research” content may generate better results than “evaluation” content. It seems unfair!

As more evaluation associations offer credentialing and add to the professionalization of our field, it may become easier to find evaluation content. More universities and colleges are offering programs directly in evaluation, which further adds credibility to the role and the field. I heard somewhere that evaluation is the fastest-growing field that no one has heard of. Perhaps as evaluation moves more into the spotlight, searching for content will get easier.


What is your solution for finding quality evaluation content? Let us know in the comments below!

Written by cplysy · Categorized: evalacademy

Mar 25 2024

Playing the Fool: Why Asking a Few Silly Questions Makes You a Better Evaluator

As evaluators, it’s our job to ask questions. When I tell people about what my company does, I tell them we help organizations that do good, to do better, by asking the right questions and answering them. Through asking and answering questions, we frame our evaluation projects, unearth data, and share helpful insights and recommendations.

Many of us arrive in evaluation because of our penchant for asking questions. And even though we’ve all been told “there are no dumb questions,” sometimes it feels like we really should know the answer already. It can be intimidating to speak up and admit your lack of knowledge or understanding.

But evaluators need to get comfortable asking questions, silly or not. Ever heard the term “playing the fool”? The idea behind that phrase is that if someone behaves in a silly way, or repeatedly asks silly questions, they’ll be seen as, well, silly. They won’t be taken seriously. What evaluator, whether an internal employee or contracted consultant, wants to look silly? Well, there are some important reasons to consider asking a few silly questions at your next evaluation meeting.


Michael Quinn Patton has likened the role of the evaluator to that of the court jester, also called a fool. The jester’s role in English courts was to entertain, and they had the special privilege of being immune from punishment for what they said. Their unique role enabled them to question, to bear bad news, or to present new or unwelcome perspectives without fear of reprisal. Personally, I’ve latched on to this metaphor and love the idea that evaluators can occupy a special position, speaking truth to power without (too much) worry for their position.


Here are a few situations where thinking of yourself as a court jester, playing the fool, can help you to be a great evaluator.

1. When understanding how an initiative operates

As an outsider, you probably don’t have all the details of the initiative you’re evaluating. Even if you are an internal evaluator, there may be some aspects of a program that you’re not familiar with, or don’t make sense. By positioning yourself as a person in need of educating, you create an environment in which those important details can be shared. In a project kick-off meeting, I like to say very directly that I will probably ask some silly questions and ask the team to help me learn.

2. When certain questions aren’t being asked, but should be

In setting the context where I’m seen very clearly as a non-expert, I can also query the “why” behind puzzling aspects that other team members can’t safely ask about – but may very well also be questioning. After all, I, this silly outsider, can’t be expected to understand why a process was set up the way it was, or how a decision was made. But an internal team member who may be wondering the same thing doesn’t necessarily have the psychological safety to ask that very question. By playing the fool here, I can ask about the “elephant in the room” that nobody seems to be addressing.

3. When the group needs to know it’s safe to query

Not every workplace operates with psychological safety. In some settings, the organizational culture is such that people are afraid to fail, or they have a very realistic concern that their job would be at risk if they asked too many questions. An evaluation project requires trust and honesty; when those qualities are absent, the project risks being unable to fulfill its purpose. As an evaluator, you can be very intentional about modelling question-asking and encourage the team you’re working with to also speak up when something doesn’t make sense.

4. When important but unwelcome insights need to be shared

We always hope that evaluation projects are driven by a true desire to learn and improve. That’s not always the case, though. As much as we prepare clients for the possibility of both positive and negative outcomes, and potentially scary recommendations, they’re not always ready to hear those findings. I note that with great empathy – as a business owner myself, I wouldn’t be super excited to hear that things aren’t going well and major adjustments are needed, either. It’s tough to hear that you may have been misdirecting your efforts. But in the best interests of all, those insights do need to be addressed. You can lead with your inner fool by being curious, vulnerable, and creating that safe space for receiving unwelcome news. A bit of well-placed humour can help to reduce defensiveness and bring a bit of joy to an otherwise dismal event.


Now don’t take this imagery too far! You don’t need to intentionally appear less intelligent than you are, and you don’t need to wear a goofy hat with bells on it. I’m not suggesting that you ask outright stupid questions (not that you would anyway). Nobody said that the court jester was dumb – far from it, that royal fool was full of crafty insights and clever language (just like you!). You know that your “silly questions” are actually a very intentional way to get people talking about the important things. As a skilled court jester, you’re carefully navigating humility and professionalism to create a safe environment for constructive conversation. And your evaluation project will be better for it.

Written by cplysy · Categorized: evalacademy

Mar 04 2024

Evaluating for Spread and Scale


These two words go together so nicely, “spread and scale”; they kind of roll off the tongue. I wonder how many of us would struggle to define or, perhaps more importantly, distinguish the two.

Earlier in my career, I was part of a team looking to publish an article about a quality improvement initiative. Throughout the article, I discussed measurement and evaluation for “spread and scale”. One of the journal reviewers challenged me, “Do you actually mean both spread and scale? How are you measuring for each of these?” It was then that I realized I hadn’t put much thought into what this almost-one-word phrase “spreadandscale” actually meant!

I think this is relevant to evaluation because many clients are interested in evaluating for spread OR scale, OR both. It’s important that we understand the differences so that our evaluations can guide clients’ decision-making and actions.

So, let’s start with definitions.


Spread is to replicate in a different location. Think horizontal flow. You may pilot a new program at one site and, upon its success, spread it to your other sites, but the implementation of that program is essentially the same.

Example: One ward in a hospital trials a new process for patient care. After its success, another ward in the hospital implements the new process. The process has spread to another department; it is being implemented in a new location, and likely with a new population, but the implementation is the same.


Scale is to build the infrastructure for implementation at a new higher level. Think vertical flow. If you want to implement a new process across an entire system, you would need to embed new policies, build training opportunities, set accountabilities, etc.

Example: A healthcare system wants to use a new record system; they need to determine what hardware and software procurement is required, how to train the staff (possibly via a train-the-trainer model), and how to update all policies related to record keeping.


It’s easy to get confused. It’s possible for a program to employ both spread and scale.

Example: A healthcare system pilots a new system in one hospital. After its success, they spread it to all hospitals and develop accompanying policies and protocols to embed the new system across the entire health system, thereby scaling the pilot.

To further add to our confusion, spread and scale DO have similarities:

  • both are expansions

  • both often follow a pilot or trial

  • both are important for quality improvement efforts

  • both can be the reason for an evaluation!

The key, for me, is to look for that policy change or system-level change that is foundational to scale. Is the program intending to embed the program into its new way of working across all sites, or are they spreading to a few sites where they think this might also be a value-add?

So, what does all this mean for evaluation?


If you’re evaluating a program that intends to spread:

  • Focus on fidelity. Review the implementation plan and then evaluate what actually happened. Understanding the variance will help this program spread successfully. Questions to ask may include:

    • Was the program implemented as intended? (Pro tip: the RE-AIM framework might come in handy here!)

    • What worked well, and what didn’t?

    • How did the context/environment play a role?

  • Identify what changes or adaptations were required for implementation. Understanding necessary changes develops a sort of pre-requisite list that can be used to determine if implementation in other sites or with other populations is likely to be successful. Implementation Science is likely your friend here, identifying key domains for implementation, including intervention characteristics, communication processes, readiness for change, planning and execution. Some questions to ask may include:

    • What staff/human resources are required?

    • How did responsibilities/accountabilities change?

    • What are the key barriers or enablers for implementation?

  • Of course, program effectiveness is still important. There’s a good chance an evaluation is being completed to determine if the program should spread. In that case, key outcome evaluation questions are very relevant:

    • To what extent is the program achieving what was intended?


If you’re evaluating a program that intends to scale:

  • Identify policy and system-level changes. Scaling a program requires that changes be embedded at the system level. These could be fiscal/financial enablers, network/relationship enablers, or environmental enablers. Identifying these changes, or at least the plan for them, will help to determine if scale is likely to be effective.

  • Identify accountability structures and staff capacity. Scaling a program may require new roles, new training, new management structures, or even entirely new teams to oversee the program. Identifying these changes, or at least the plan for them, will help to determine if scale is likely to be effective.

  • Effectiveness: is the program achieving what it intended? As with spread, determining program effectiveness is still a foundation for deciding whether the program should be scaled in the first place.


In my experience, most program evaluations or pilots are looking for spread; they want to make a small investment to test a new program or process with a smaller group to determine if it should eventually be spread to other sites or departments. An evaluation for spread may involve a formative assessment (how is the pilot implementation going?), an outcome evaluation of the pilot (what was achieved?), and maybe even an evaluation of the spread itself. There are lots of opportunities for an evaluation!


Many validated tools can help you assess readiness for spread. It’s not a literature base I’m very familiar with. If you have a favourite, share it with me in the comments below!

 

Written by cplysy · Categorized: evalacademy
