
The May 13 Group

the next day for evaluation


allblogs

Jul 24 2022

Let’s improve the use and influence of evaluations

We revisit BetterEvaluation’s «Seven Strategies for Improving Evaluation Use and Influence», where (the peerless) Patricia Rogers asks: What can we do to support the use of evaluation? How can we support the constructive use of findings and of evaluation processes?

This is a long-standing challenge: evaluation use has been the focus of discussion for more than 40 years, and it is still not fully solved. It will remain so for a few more years unless we become aware of the barriers to use and adopt the right mitigation strategies.

That is what Patricia Rogers’ list of strategies for improving evaluation use addresses:

1. Let’s identify the intended users and intended uses of the evaluation from the start.

When we identify the intended users, let’s be as specific as possible, and let’s be clear about who the main intended evaluation users are.

2. Let’s anticipate barriers to use. Some examples:

  • The credibility and perceived relevance of evaluation reports,
  • The resources and authority to make changes in response to the findings, and
  • The openness to receiving negative findings (that a program is not working or is not being implemented as intended).

3. Let’s identify the key processes and times when findings are needed (including the series of analysis and reporting cycles).

Use is easier if we identify the key decision points and processes; the timing of evaluation reports and activities then needs to be organized around them.

4. Let’s select appropriate evaluation report formats, adjusted to each audience, and guarantee their accessibility. There is no excuse not to: many innovative and effective ways of reporting results are now well documented.

5. Let’s actively and visibly monitor what happens after the evaluation.

  • Management response to the findings, which can then be included in an evaluation report.
  • Tracking responses to recommendations, including whether or not they have been implemented (and how) if they have been accepted.
  • A transition process from an external evaluation that produces findings to internal processes that support change.

6. Let’s ensure that there are adequate resources to support follow-up activities and the development of additional knowledge products.

  • Incorporate a notional number of days for the evaluator to continue contributing after the final report.
  • Fund a subsequent project that produces additional knowledge products or works with people to think about the specific implications of the findings for their practice.
  • Allocate the time for internal staff to carry out these activities as part of their role in the evaluation.

7. Let’s document these evaluation-use strategies in a formal communication and dissemination plan, and update it as needed.

Let us promote the use of our evaluations, being aware of the challenges and adopting explicit strategies to face them.

Let’s: (1) clarify the uses/users (the audience), (2) know the barriers to use, (3) know the key moments for use, (4) explore different communication formats, (5) follow up actively after the evaluation, (6) secure resources for that follow-up, and (7) design an evaluation communication plan.

We need to be aware of the barriers to evaluation use in order to mitigate them.

Written by cplysy · Categorized: TripleAD

Jul 23 2022

Aware of the barriers to evaluation use


We revisit “Strategies for improving the use and influence of evaluations” (I and II), on what BetterEvaluation presented in «Seven Strategies for Improving Evaluation Use and Influence», where (the peerless) Patricia Rogers asks: What can we do to support the use of evaluation? How can we support the constructive use of findings and of evaluation processes?

This has been a challenge for a long, too long, time: evaluation use has been the center of discussion for more than 40 years. It will no doubt continue for a few more if we do not become aware of the barriers to use and adopt mitigation strategies.

That is what this list of strategies for improving the use of evaluations is about:

1. Let’s identify the intended users and the intended uses of the evaluation from the start

When we identify the intended users, let’s be as specific as possible and be clear about who the main intended users are.

2. Let’s anticipate barriers to use. Some examples:

  • The credibility and perceived relevance of evaluation reports,
  • The resources and authority to make changes in response to the findings, and
  • The openness to receiving negative findings (that a program is not working or is not being implemented as intended).

3. Let’s identify the key processes and the moments when findings are needed (including the series of analysis and reporting cycles).

We must identify the key decision points and processes; the timing of evaluation reports and activities then needs to be organized around them.

4. Let’s select appropriate evaluation report formats, adjusted to each audience, and guarantee their accessibility

There is no excuse not to: many innovative and effective ways of reporting results are already well documented.

5. Let’s actively and visibly monitor what happens after the evaluation.

  • Management’s response to the findings, which can then be included in an evaluation report.
  • Tracking responses to recommendations, including whether or not they have been implemented (and how) if they have been accepted.
  • A transition process from an external evaluation that produces findings to internal processes that support change.

6. Let’s ensure that there are adequate resources to support follow-up activities and the development of additional knowledge products.

  • Let’s incorporate a notional number of days for the evaluator to stay involved after the final report.
  • Let’s fund a follow-on project that produces additional knowledge products or works with people to think through the specific implications of the findings for their practice.
  • Let’s give internal staff the time to carry out these activities as part of their role in the evaluation.

7. Let’s document these evaluation-use strategies in a formal communication and dissemination plan, and update it as needed.

Let us therefore promote the use of our evaluations, being aware of the challenges and adopting explicit strategies to face them: (1) clarify the uses/users (the audience), (2) know the barriers to use, (3) know the key moments for use, (4) explore different communication formats, (5) follow up actively after the evaluation, (6) secure resources for that follow-up, and (7) design an evaluation communication plan.

This is the challenge of this long-distance journey: let’s be aware of the barriers to evaluation use, in order to mitigate them.

Yes, we know it is not easy, but we are here precisely because it is not easy. Otherwise, we would be doing other things; I can picture myself writing other kinds of reports, ones that also tell stories: novels and short stories.

Written by cplysy · Categorized: TripleAD

Jul 20 2022

The Reporting Revolution – Finding Your Audience

Two weeks ago I shared the first 20 pages of the book I’m writing. Now it’s time to share Chapter 2.

In case you missed the last post, the book I’m writing is called The Reporting Revolution: A little book for researchers and evaluators who give a sh*t.

Chapter 2 was an accident.

Originally I thought I was writing what will eventually be Chapter 3 (re: modern reporting strategy). But the more I wrote, the more I felt that finding an audience really deserved its own space.

The download below will give you both Chapter 1 & Chapter 2.

I’m offering the ebook for free as I write it because I want it to be really good. But for that to happen, I need feedback. So if you download it, please read it and let me know what you think.

Here is what’s inside as of July 20:

  • Chapter 1. The Big Why
    • Why are we still reporting like it’s 1999?
    • Our reports tell everyone else a story about our profession.
    • Seeing our work through our audience’s eyes.
    • Unintentional gatekeepers.
    • Mindset change – Noun report to verb report.
    • Not just better, faster too.
    • Make it easy.
  • Chapter 2. Finding Your Audience
    • Who is in your audience?
    • Activity: Naming your Audience
    • Your Big 3 Audiences
    • Activity: Three Bucket Audience
    • The Audience Growth Saturation Point
    • Audience Reach Splash Model
    • Measuring your Audience
    • Audience Building or Serving?
[Screenshot of the eBook landing page.]
Click here to go to the download page.

Written by cplysy · Categorized: freshspectrum

Jul 17 2022

How do I use the Kirkpatrick Model in Evaluation?

Kirkpatrick is probably one of those names/methods you’ve heard about in your evaluation career, but have you ever used it? I’m surprised how many evaluators I talk to haven’t, because I find it pretty useful and straightforward, with tonnes of resources to support you.

I love experiential descriptions, that is, reading about how someone else applied a method in a real-world scenario: the ups and downs, the backtracking and lessons learned.  

So, having used Kirkpatrick a handful of times on a few initiatives, here is my account of how to use the Kirkpatrick model in your evaluation planning, implementation, and reporting.  


What is the Kirkpatrick Model?

The Kirkpatrick model was originally developed in the 1950s but gained popularity in the 1970s as a way to evaluate training programs. Donald Kirkpatrick proposed 4 levels:

  • Level 1: Reaction
  • Level 2: Learning
  • Level 3: Behavior
  • Level 4: Results

Kirkpatrick can apply to evaluating any type of educational endeavour where participants or attendees are intended to learn something and implement those learnings.


How do I use the Kirkpatrick Model in Evaluation?

Step 1: Do some basic research. 

The model has spawned the Kirkpatrick Partners website. They host training and events; they have a newsletter, a blog, and resources to support your use of Kirkpatrick. You can even get certified in using Kirkpatrick. (Disclaimer: I am not certified.) My goal for the rest of this article is to show you how I’ve actually used Kirkpatrick, along with some of my thoughts along the way.

Step 2: Incorporate Kirkpatrick in your evaluation plan. 

Unless you are exclusively evaluating a training program, I’ve found that Kirkpatrick is often a part of the evaluation plan, but not the only part. 

I’ve used Kirkpatrick on an initiative that trained people to facilitate quality improvement in primary care practices. I used Kirkpatrick for the training but had lots of other evaluation questions and data sources for the quality improvement efforts and outcomes in the primary care clinics. 

Like RE-AIM, the good news is that Kirkpatrick gives you a solid head start on your training-related evaluation questions. 

Your overarching evaluation question might be something like: 

To what extent did training prepare people to [make the intended change]? 

Or 

How effective was the training at improving [desired behaviour]? 

From there, use the four levels of the model to ask specific questions or craft outcome statements (See below for a detailed explanation of each level):

  • Level 1: Reaction

  • Level 2: Learning

  • Level 3: Behavior

  • Level 4: Results

Step 3: Reporting  

I don’t think I’ve ever used the Kirkpatrick levels explicitly in my reporting. I think most audiences are not interested in the theory of evaluating a training program but are more interested in answering “did the training work?” As I mentioned, Kirkpatrick has usually been only a part of my evaluation planning, so, similarly, reporting on the effectiveness of training is usually only one part of my reporting.  

Because Level 1 of Kirkpatrick assesses formative questions about training (things that you could change or adapt before running the training session again), I have often produced formative reports or briefs that summarize just Levels 1 and 2 of Kirkpatrick. This promotes utilization of evaluation results!

Using surveys (or a test!) likely gives you some quantitative data for you to employ your data viz skills on. But keep in mind that it’s not necessary for you to report all of the data you gather. You likely don’t need a graph showing your reader that the participants thought the room was the right temperature. Sometimes less is more and a couple of statements like “Participants found the training environment to be conducive to learning and found the training to be engaging. However, the days were long, and they recommended more breaks.” can cover a lot of your Reaction results. We’ve got lots of resources to help you with your report design. 
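If you do want a quick visual, here is a minimal sketch of how that summary might be produced. The column names and the 1-5 Likert coding are assumptions for illustration, not anything from a real survey:

```python
# A minimal sketch: summarize Likert-style Reaction items as the share
# of respondents who agree or strongly agree. Column names and the
# 1-5 coding are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt

responses = pd.DataFrame({
    "venue_appropriate":   [4, 5, 3, 4, 5, 2],
    "presenter_engaging":  [5, 5, 4, 4, 3, 4],
    "relevant_to_my_work": [3, 4, 4, 5, 5, 4],
})  # 1 = strongly disagree ... 5 = strongly agree

# Share of respondents choosing "agree" (4) or "strongly agree" (5).
pct_agree = responses.apply(lambda col: col.isin([4, 5]).mean() * 100)

pct_agree.sort_values().plot(kind="barh", xlim=(0, 100))
plt.xlabel("% agree or strongly agree")
plt.title("Reaction survey summary")
plt.tight_layout()
plt.show()
```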

Sometimes reports, particularly interim reports, are due before you can get to behaviour change or results. This is actually one of the criticisms of Kirkpatrick – that many evaluations will cover Levels 1 and 2 thinking them sufficient but fail to invest the time and resources into ensuring the behaviour changes and outcomes are captured. This is one of the reasons that planning an evaluation concurrently with project design is helpful and can prevent these shortcuts. 


How to use the 4 levels of the Kirkpatrick Model

LEVEL 1: REACTION 

Reaction is about all those things you think immediately after you’ve attended a session. These are less about what you learned, and more about: Was the trainer effective? Was the environment supportive to learning? Was the day interactive and fun, or didactic and tiring? 

Reaction is all of those things that influence learning. They may seem less important, but they contribute a lot to how much a person learns, retains, and acts on.

Let’s use a scenario where you are evaluating a training program designed to teach participants how to implement COVID-19 safety protocols in the workplace. 

For Reaction, the evaluation is almost entirely content agnostic, so it matters less what the training program is about, and more about the delivery, for example: 

The venue was appropriate. 

The presenter was engaging. 

The training was relevant to my work. 

This is almost always captured in a post-training survey, which of course could be paper for in-person events, or QR codes/links for virtual events. It could be emailed out to participants after the session, but we all know that response rates are much better when you carve out 5 minutes at the end of the last session to complete the evaluation.  

The learning here is about how you can tweak the delivery of the training. Was the room too cold? Was the presenter about as engaging as Ferris Bueller’s homeroom teacher? 

LEVEL 2 – LEARNING 

As the name says, level 2 is about assessing what the participant learned. I like to think: Knowledge, Skills, and Attitudes. Assessing learning can (and arguably should) touch on each of these. So, your questions might be: 

I can explain our COVID-19 safety policies. [knowledge] 

I understand why our COVID-19 safety policies are important. [knowledge] 

I am confident that I can enforce our COVID-19 safety policies. [attitude] 

I learned 3 ways to build buy-in about our COVID-19 safety policies. [skills] 

As these examples imply, it’s very common for these questions to be included on a post-workshop survey. I usually embed them with the Reaction survey, so participants fill out one survey after training. I try to keep the total question count under 20, usually Likert Scale, with some opportunity for qualitative feedback. 

TIP: If it’s important to you to be able to say that the training was the reason for the results you get, you’ll want to consider a baseline survey – that is, a survey with the same learning questions as the post-training survey, but it’s completed before training. That way, any change that you see can be more strongly linked to the training that they received, as opposed to pre-existing knowledge, skills, or attitudes. 
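To make that concrete, here is a minimal sketch of the baseline/post comparison. The participant IDs, question names, and scores are all invented for illustration:

```python
# A minimal sketch: mean change per learning question between a baseline
# survey and the same questions asked post-training. All names and
# numbers are illustrative assumptions.
import pandas as pd

baseline = pd.DataFrame({
    "participant": [1, 2, 3, 4],
    "can_explain_policies": [2, 3, 2, 3],
    "confident_enforcing":  [2, 2, 3, 2],
})
post = pd.DataFrame({
    "participant": [1, 2, 3, 4],
    "can_explain_policies": [4, 4, 3, 5],
    "confident_enforcing":  [4, 3, 4, 4],
})

# Pair each participant's pre and post answers, then average the change.
merged = baseline.merge(post, on="participant", suffixes=("_pre", "_post"))
for q in ["can_explain_policies", "confident_enforcing"]:
    change = (merged[f"{q}_post"] - merged[f"{q}_pre"]).mean()
    print(f"{q}: mean change of {change:+.2f} points on a 1-5 scale")
```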

If your training has learning objectives, this is a good place to look for what knowledge, skills or attitudes the program is intended to impact. 

Personally, I’ve always lacked confidence in results that come from self-assessed ratings of knowledge, skills, and attitude. We know there are several biases in play – including a tendency to not use the full range of a scale, and to rate yourself positively. One way to mitigate these confounders is to test the learning. Instead of asking for an opinion about what they learned, the post-training survey could be formatted to actually test the learning: 

List three COVID-19 safety protocols implemented at your workplace. 

Which of the following is a reason why these COVID-19 protocols were selected? [multiple choice] 

Describe one way to build buy-in with your staff around COVID-19 policies. 

The downside is that not only are these potentially more resource-intense to analyze and report on, but most programs and organizations are worried about the impression it gives to test participants or workshop attendees. We all hate tests, right!? I haven’t had much luck convincing a program to use testing as opposed to a self-rated survey. Let me know if you’ve fared any better! 

LEVEL 3 – BEHAVIOUR 

Here is where it gets a bit tricky. To measure behaviour you need to have access to the participants after a set amount of time; your participants need an opportunity to put their learning into action.  

Assuming you have email addresses or perhaps a reason to bring this group together again, I’ve assessed behaviour change through surveys (yes, surveys again). These surveys are usually sent over email after a pre-determined amount of time. The time interval depends on the learning you are hoping they achieved and the opportunities to implement it. I’ve used anywhere from 4 weeks to 6 months. 

In the last [time interval] I have: 

Explained why COVID-19 safety protocols are important. 

Used techniques learned at training to deal with an individual not wishing to follow COVID-19 safety protocols. 

It’s possible that you could re-administer the baseline/post-training survey as a [time interval] follow-up. This would help you to assess retention of learning and may get at some behaviour change too. The difference though is that the assessment of learning was likely opinion-based or in-the-moment, while assessment of behaviour change is retrospective – it’s not about what they have knowledge, attitude or skills to do, or what they intend to do, it’s about what they did do. 

Another option is to change your data collection strategy: 

  • Observation: can you watch attendees implementing the training? 

  • Interviews: can you gather data from people who didn’t attend the training to understand what changes they have seen? Or interview the training attendees to understand what they’ve put into practice (and in what context) 

  • Role playing: If you don’t anticipate being able to reach the attendees after a time period, perhaps role playing (and observation) could be a part of the training curriculum. Can attendees demonstrate what they have learned? 

  • Evidence of action: perhaps you can access evidence that proves the training resulted in action – maybe participants were asked to write business plans (how many were written?) or were asked to design and implement a communication strategy about COVID-19 policies (was it done?) 

In our scenario, observation may look like observing building entrance processes and observing, counting or noting the number of times a COVID-19 policy is explained or enforced. Or, you could survey staff to see how well they understand the COVID-19 policies (assuming their knowledge is a result of the training delivered to select staff). The key to Level 3 is the demonstration of the behaviour. 

LEVEL 4: RESULTS 

Results brings everything full circle. It drives at the purpose of the training in the first place: what is the impact of the training? In our scenario, the ultimate goal may have been to have staff and patrons compliant with COVID-19 safety protocols. Assessment of Results will be a reflection of this. 

# of encounters with non-compliant staff 

# of encounters with non-compliant patrons 

# of COVID-19 outbreaks at the workplace 

Assessment of Results may start to blur the lines with the rest of your evaluation plan. Perhaps the training is part of a larger program that is designed to create a safe environment for staff and patrons, part of which was implementing COVID-19 policies.  

Assessment of Results likely requires an even longer time frame than Level 3, Behaviour.  


Drawbacks of the Kirkpatrick Model

I’ve found Kirkpatrick useful for ensuring that evaluation of a training program goes beyond simply measuring reactions; however, the temptation to cut the method short and measure only reaction and learning is common practice. 

In its simplest application, Kirkpatrick is a survey-heavy method, which relies on adequate response rates and is littered with biases. More rigorous methods to measure the four levels (observation, interviews) are likely more time consuming and resource intensive. It’s a tradeoff. If you go the survey route, here are some tips on how to use Likert scales effectively. 

Another common criticism of Kirkpatrick is the assumption of causality. The model takes the stance that good, effective training is a positive experience (Level 1), results in new learning (Level 2), and drives behaviour change (Level 3), which leads to achievement of your desired outcomes (Level 4). It fails to account for the environmental, organizational, and personal contexts that play a role. Whether or not an organization supports the behaviour change or empowers attendees to make change matters, regardless of how fantastic the training was. The further you move through the four levels of Kirkpatrick, the looser the link to causality. 


Kirkpatrick Model Return on Investment 

At some point, an unofficial 5th level was added to Kirkpatrick: the Phillips Return on Investment (ROI) model (sometimes treated not as a new level but tacked onto Level 4 – Results). The idea here is that the cost of running the training should be outweighed by the positive financial impact of the organizational improvements that come from the training and subsequent behaviour change. 
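For concreteness, the standard Phillips calculation is net program benefits divided by program costs, expressed as a percentage. A quick sketch (the dollar figures are made up):

```python
# Phillips ROI: net program benefits as a percentage of program costs.
# The example figures are invented for illustration.
def phillips_roi(monetary_benefits: float, program_costs: float) -> float:
    return (monetary_benefits - program_costs) / program_costs * 100

# e.g., training that cost $20,000 and produced improvements valued at $50,000:
print(f"ROI: {phillips_roi(50_000, 20_000):.0f}%")  # ROI: 150%
```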

I’ve never actually used the ROI, but there are lots of resources out there to help you. 


Other Uses of the Kirkpatrick Model

Here at Eval Academy we often talk about the importance of planning your evaluation right along with program design. Kirkpatrick is no exception. The four levels can offer guidance and perspective on how to actually design a training program from the start. By working backwards, you can ask questions that ensure your training program has the right curriculum and approach to achieve your goals: 

  1. What are we trying to achieve? (Level 4 – Results) 

  2. What behaviours need to happen to realize that goal? (Level 3 – Behaviour) 

  3. What do attendees need to know or learn to implement those behaviours? (Level 2 – Learning) 

  4. How can we package all of this into a high-quality, engaging workshop? (Level 1 – Reaction). 

I’ve only ever used Kirkpatrick as a planned-out evaluation approach. It may be trickier to use this model for training that has already happened: you’ll have missed the opportunity for baseline data collection and, importantly, depending on how much time has passed, the Reaction results may be less reliable. 


I like Kirkpatrick because it’s simple and straightforward. This simplicity, along with its failure to recognize context, is actually one of the main criticisms of the model. However, in my experience, the guidance offered by the four levels helps to shape my thinking about how to evaluate a training program. Because a training program is often only part of a larger initiative, perhaps the lack of context hasn’t been as blatant for me. 

I’d love to know your experiences with Kirkpatrick. Have you evaluated all four levels? Have you assessed ROI? 

For other real-life accounts of how to use an evaluation methodology, check out our articles on RE-AIM, Developmental Evaluation and Outcome Harvesting. 





Written by cplysy · Categorized: evalacademy

Jul 17 2022

How to complete an environmental scan: avoiding the rabbit holes

Whether starting a new program or making changes to an existing one, you’re going to be faced with numerous questions about how best to move forward. And some of those questions can feel pretty daunting! Many organizations use environmental scanning as a method of strategic planning to gain insights and gather information on how to put their best foot forward. 

This article is aimed at those who are new to environmental scanning and are looking for new ways to support program planning and improvement. 


What is an environmental scan?

Environmental scans started out as a tool for businesses to find and organize information that could be used for decision-making.

The process of environmental scanning includes finding, gathering, interpreting, and using information from the internal and external environments of an organization to help direct future action.

This method uses multiple strategies for information collection such as focus groups, in-depth interviews, surveys, literature reviews, and reviewing personal communications, policy analyses, and internal documents.

The results of an environmental scan can be very useful in helping an organization or program shape its goals and strategies.


What’s the difference between an environmental scan and a literature review?

Unlike a literature review that searches for published, peer-reviewed articles, an environmental scan also examines unpublished literature and publicly available information.

An environmental scan also incorporates methods such as interviews and focus groups not used within literature reviews.


Why should I complete an environmental scan?

Scanning the environment is an important part of strategic planning and has been linked to improved organizational performance.

Environmental scans can help plan for the future, provide evidence about potential directions for an organization or program, raise awareness of risks or issues, or help to initiate a new program.

The diversity of information sources and types of information gathered through environmental scans have helped organizations to effectively plan and implement programs across a variety of sectors.


When should I do an environmental scan?

You can take a reactive approach (e.g., a challenge has arisen that needs to be addressed) or a proactive approach (e.g., a new program is being implemented and you want to ensure its success) to scanning.

Scanning is typically more frequent when an organization or program has higher levels of perceived uncertainty, such as when taking a new direction or during its start-up.


How do I complete an environmental scan?

To complete an environmental scan, we typically follow six main steps: 

  1. Identify the purpose of the environmental scan and your topics of interest 

  2. Identify the research question(s) 

  3. Identify the activities you will complete and where you will look for information 

  4. Create a list of keywords and search terms 

  5. Catalogue the information systematically  

  6. Present the information in a way that is useful for your organization 

 

Step 1: Identify the purpose of the environmental scan and your topics of interest 

Before jumping onto the internet to search for information, it is important to first specify the purpose of the environmental scan and identify your topics of interest. This will help to anchor the process, focus your time and resources, and avoid those rabbit holes! Although an environmental scan should remain flexible to allow for new questions that might arise, having a clear scope will help you stay focused. 

For example, let’s say we’re looking to create a new program in the Vancouver region that aims to provide mental health support to isolated and elderly individuals, but we’re not sure what similar programs are already out there. To avoid duplicating services and to make sure we reach a portion of the target population that isn’t already being served, the purpose of our environmental scan would be to: 

“Support the development of a program that aims to improve the mental health of the isolated and elderly by learning about available programs and identifying their strengths and weaknesses.” 

Our topics of interest would then be: 

  • Examples of programs that are currently available in the Vancouver region which aim to improve the mental health of people aged 75+ who are isolated (e.g., live alone) 

  • Identifying what the programs currently do well including effective modalities, treatment options, and who they are reaching 

  • Identifying gaps in the programs such as treatment options that are not provided and target populations that are missed 

Step 2: Identify the research question(s) 

Now that you’re clear on your environmental scan purpose, you can narrow this even further by identifying your research questions. They are your broad rule for knowing when to stop your search, and they will help you decide whether an article or website is worth continuing to explore. I tend to recommend having between 1 and 3 research questions for an environmental scan that dig a little deeper into your topics of interest to really pull out all the juicy details you’re looking for! 

The research questions for our example environmental scan could be: 

  1. What programs are currently implemented in the Vancouver region for the isolated and elderly (75+ years of age) which focus on improving their mental health? 

  2. How do these programs operate and what makes them effective? 

    • Who is the audience that they’re reaching and how do they reach them? 

    • What modalities do they use (e.g., group settings, individual therapy)? 

    • What treatment options do they implement? 

  3. Are there any gaps in these programs? 

    • Are there segments of the target population that are not effectively captured by these programs? 

Step 3: Identify what environmental scan activities you will complete and where you will look for the information  

Once you have your topics of interest, purpose, and research questions, it’s time to identify which activities you’ll complete and where you’ll look for the information to answer your questions. In an environmental scan, your activities can focus on understanding the environment internal (your organization’s) or external (other organizations’) to the particular topic. This will help to provide input into strategic thinking, decision making, and planning. 

For understanding the internal environment, appropriate activities could include: 

  • Reviewing organizational documents (e.g., organizational strategy, policies, or internal communications) 

  • Interviewing members of staff  

For understanding the external environment, appropriate activities could include: 

  • Grey literature searches on the internet (e.g., Google search) 

  • Interviews or surveys with organizations or individuals of interest identified through your online search 

Don’t forget that you’ll also need to present your findings in an appropriate format for your organization whether that’s a report, infographic, or a presentation! (See Step 6 for further info) 

Although looking online for information is important in an environmental scan, involving stakeholders can be the key to its success. Frequently grey literature searches like reviewing websites can leave you falling short on all the details needed to answer your research questions. Reaching out to key individuals at the organizations of interest with an invitation to take part in an interview or survey can help you to fill these gaps. For example, many Indigenous organizations will have reports and summaries or have participated in other scans and engagements, so having the opportunity for Indigenous organizations to recommend grey literature is important to help fill gaps in knowledge and understanding. 

Before reaching out to stakeholders, make sure you have a clear understanding of what information is needed from them. This helps to limit engagement fatigue and makes sure you’re asking the right questions. You should only ask them for information that you are not able to uncover on your own. Create a clear plan for conversations with participants, such as having a clear set of questions or requests for further information. Be prepared to answer further questions about the environmental scan process and why you’re interested in learning more about their work and organization. Lastly, be aware of who you are interviewing and of methods for appropriately and respectfully engaging them. For our example, if you are interviewing Indigenous Elders, be aware of the protocol for visiting Elders and what you need to bring to respect the knowledge and teachings you will receive from them. 

The type and volume of activities you’ll complete are often affected by the amount of time that you have to finish the environmental scan. If a timeline isn’t established for you by a funding agency or by your leadership, it is important that you create your own timeline from the outset to help you plan and stay on task. If interviews or surveys are part of your environmental scan, make sure you allocate plenty of time for creating the survey tool and interview guides, reaching out to your participants, collecting and analyzing the data, and interpreting your results. 

The activities for our example environmental scan could be: 

  • Environmental scan of grey literature looking for examples of similar programs in Vancouver. Best practice examples from other geographies may also be included if “gold standards” are discussed 

  • Contacting organizations who have implemented similar programs for follow-up questions through telephone interviews 

  • In-person interviews with members of the target audience to learn how future programs could be improved for them 

  • Presenting a selection of similar programs and treatment options through a final report 

Step 4: Create the list of keywords and search terms you will use for online searches 

To make sure you’re looking for information in the right places while searching online, it’s important that you create a list of keywords and search terms to help guide your search. 

To start this process, make a note of all the key terms that will help you to capture your topics of interest. Cast your net wide to make sure you don’t miss anything. Then break down these topics into clear concepts and keywords. 

For our example environmental scan, a list of key terms could be:  

  • Mental health programs 

  • Isolated 

  • Elderly 

  • Vancouver 

  • Effective modalities 

  • Treatment options 

Using key terms such as “Indigenous”, “Female”, “LGBTQ2S+”, “Newcomer”, and “Black”, amongst others, will help bring an intersectional lens to your search. 

The next step is to identify any of these concepts that could be expressed using related terms or synonyms. You can make a list of these under each of the key terms. 

Boolean searching, by creating search strings, is a common way to look for information in online databases and can also help to frame internet searches. Boolean searching uses connector words such as AND, OR, and NOT to create phrases based on rules and search logic. 
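For our example scan, a search string might look something like this (a hypothetical illustration; adapt the terms and operators to the syntax of the database or search engine you’re using):

("mental health" AND (elderly OR seniors OR "older adults") AND (isolated OR "living alone") AND Vancouver) NOT children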

We’ll dive deeper into how you can systematically search online databases in our upcoming article on literature reviews! 

Step 5: Catalogue the information systematically 

Once you’ve found your sources, it is important to catalogue the information in a systematic way that links back to your research questions. Using a table that breaks the sources down by your topics of interest can make things very clear and will help you when you have to present your findings; a minimal sketch of what that structure could look like follows. 
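Here is one way to set up such a catalogue. This is a sketch only: the column names and entries are invented for our example, not a template from the article, and a spreadsheet works just as well:

```python
# A sketch of a cataloguing table for the example scan: one row per
# source, one column per research question. Column names and entries
# are illustrative assumptions.
import pandas as pd

catalogue = pd.DataFrame([
    {
        "source": "Program A website (grey literature)",
        "RQ1: programs found": "Weekly seniors' support circle",
        "RQ2: operations/effectiveness": "Group sessions; reaches 75+ via community centres",
        "RQ3: gaps": "No individual therapy; long waitlist",
    },
    {
        "source": "Interview, Program B coordinator",
        "RQ1: programs found": "Phone-based check-in service",
        "RQ2: operations/effectiveness": "One-on-one calls; reaches homebound seniors",
        "RQ3: gaps": "English only; no clinical treatment options",
    },
])

# Print the catalogue for review.
print(catalogue.to_string(index=False))
```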

Once you’ve catalogued your information systematically, you can then reach out to organizations and key stakeholders to request their participation in an interview or a survey to help you fill in the gaps.

Step 6: Present the information in a way that is useful for your organization 

Once you’ve organized, analyzed, and interpreted your information, it is crucial that you present your findings in a way that is useful for your organization. If an appropriate means of presenting the findings is not given to you by leadership, consider your audience and how the information will be used. This will help you to decide the best way of presenting your results. Reflect again on your topics of interest and make sure you present your information in a way that answers the research questions whether that’s in a summary report, infographic, or presentation to your organization. Lastly, make sure your work is disseminated and shared with all relevant stakeholders! This includes disseminating your work to anyone who participated in the scan, including any Indigenous stakeholders to align with OCAP principles. 


What are your experiences with environmental scans? Comment on this article or connect with us on LinkedIn or Twitter! 



Sources: 

Graham, P., Evitts, T. & Thomas-MacLean, R. (2008) “Environmental Scans: How useful are they for primary care research?”. Can Fam Physician. 54(7): 1022-1023. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2464800/ 

Polanin, J.R., Pigott, T.D., Espelage, D.L. & Grotpeter, J.K. (2019) “Best practice guidelines for abstract screening large-evidence systematic reviews and meta-analyses”. Res Synth Methods. 10(3): 330-342. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6771536/ 


Written by cplysy · Categorized: evalacademy


