
The May 13 Group



allblogs

Nov 29 2022

Webinar: Introduction to Thematic Analysis: Understanding, conceptualising, and designing (reflexive) TA for quality research

Date: 29 November 2022

Offered by NVivo

Presenters: Virginia Braun and Victoria Clarke

Summary

“The process is not the purpose” – this quote really resonated with me, as did their note about “fitting method to purpose”. They aren’t trying to say that everyone should do reflexive TA. They are saying that you should know what type of TA you are using and choose it purposefully for what you are trying to achieve, and then do the analysis in a thoughtful way, one that aligns (your ontology/epistemology should be consistent with your methods). I quite enjoyed this webinar, and I think I’ll check out their book!

Detailed Notes

Resources:

  • Article “Using Thematic Analysis in Psychology” in Qualitative Research in Psychology – highly cited, with 145K+ citations on Google Scholar. They noted that some of those citations are critiques, which has helped to evolve their thinking.
  • Book Thematic Analysis: A Practical Guide (2022) (Sage)
  • Toward good practice in thematic analysis: Avoiding common problems and be(com)ing a knowing researcher.
  • thematic analysis is an approach, not a single method, more like a family of things
  • family differences:
    • paradigmatic differences – what are we (conceptually) doing here? (e.g., describing/uncovering a single reality, co-creating knowledge, etc.)
    • what paradigm are we operating in?
      • post-positivist – “small q” (not using numbers, but still using a post-positivist understanding of the world)
      • non-positivist – “big Q” or “fully qualitative”
      • view of subjectivity – a threat (as understood in post-positivism, where subjectivity is seen as leading to bias) or a resource (as understood in Big Q)?
    • research practice differences:
      • conceptual (discovery vs production)
      • practical (identifying themes vs. developing analysis; themes as inputs or outputs)
        • in reflexive TA they don’t talk about “emerging themes”, since they aren’t thinking that the knowledge is being discovered, it’s being produced
      • what is a theme?
        • united by focus/topic?
        • united by shared core concept?
  • Braun & Clarke’s way of clustering these approaches:
    • coding reliability
    • codebook versions (e.g., framework analysis)
    • reflexive versions (Braun & Clarke’s version is one of the most well-known of these)
    • other versions
  • Finlay’s differentiation
    • scientifically descriptive
    • artfully interpretative
  • TA is about developing, analyzing, and interpreting patterns across a qual dataset; it involves systematic processes of data coding to develop themes
    • methods, not methodology (but you do still have a worldview/paradigm you are operating in when you choose and use a method)
    • focus on patterns of meaning aka themes across a dataset (but what’s a pattern?)
    • processes of coding –> themes
    • reporting ‘themes’
  • Reflexive TA
    • conceptualizing of analysis: Research Question + Researcher + Data are embedded within our disciplinary training & scholarly knowledge, sociocultural meanings, and values
    • Big Q/artfully interpretative
    • researcher subjectivity is valued –> reflexivity is essential
    • coding is open and organic (codes as analytic ‘entity’)
    • themes as analytic ‘output’
    • multiple ways to do reflexive TA (theoretical alignments, etc.)
    • six phase process to do reflexive TA:
      • familiarization
      • coding
      • generating/constructing (initial) themes
      • theme development and review
      • refining, defining, and naming themes
      • writing/stopping (it’s never “complete”, so you need to pick a point to end)
      • NB. The process is not the purpose, nor a guarantor of quality.
      • NB. It’s not a linear process. Can go back to any phase at any time. Open & recursive.
  • Take home message: there is a diversity of TA; understand what type of TA you are using!
  • Common problems in published TA:
    • misunderstanding/misrepresenting (lack of diversity)
      • e.g., saying they are doing TA when they aren’t; not adequately rationalizing why TA is used; “swimming (unknowingly) into the waters of positivism”
      • e.g., saying there is no guidance for TA (when there’s lots in the literature)
      • e.g., a paper saying it’s reflexive TA but then says used reflexivity to “manage their bias”
      • inadequate description (e.g., just saying “followed the 6 phases of…” but not how you did it)
      • too many themes – thin/fragmented
      • deploying theoretically incoherent quality standards (e.g., using intercoder reliability, which is not a coherent strategy for reflexive TA, though it would be appropriate for a coding reliability version of TA)
    • mismatches:
      • conceptual
      • methodology (practice based)
      • reporting
      • quality criteria
    • Become a more knowing practitioner:
      • don’t treat TA as a single method
      • talk about what version of TA you used
      • make choices thoughtfully & appropriately and show you made choices
      • engage in conceptual and design thinking
    • conceptual thinking
      • research values (awareness)
        • ontological
        • epistemological assumptions
      • design thinking: design coherence/methodological integrity (Levitt et al., 2017)
    • 10 fundamentals of reflexive TA (for conceptualization & design coherence) (Braun & Clarke, 2022 paper – go read it!)
      • coding quality doesn’t depend on multiple coders
      • analysis can’t be accurate or objective, but can be weaker/stronger
      • good quality coding/themes come from depth of engagement and distancing (the value of time!)
      • assumptions underpinning analysis need to be acknowledged – they don’t like “saturation” (they wrote a paper on this – a lot of qualitative approaches use this concept, but their paper talks about underlying assumptions of it)
    • 5 key challenges
      • fitting method to purpose (claims and practice)
      • working in a team and using reflexive TA coherently
      • time (tensions and pressures)
      • reporting (challenges in style, length, and from reviewers & editors)
      • choosing appropriate quality criteria (e.g., in health, COREQ is often seen as the way to go, but it has assumptions embedded in it)
    • quality and being a reflexive (TA) practitioner:
      • you are not a robot or a mechanic
      • you are an adventurer
        • values-led
        • reflexive
        • active
        • positioned
        • thoughtFULL (aka, don’t just think of this as “rules to follow”)
    • Q&A:
      • content analysis vs. TA – there are different versions of content analysis, just as there are different versions of TA. They wrote a paper comparing TA to content analysis, grounded theory, and something else.
    • Twitter: @ginnybraun @drviciclark

Written by cplysy · Categorized: drbethsnow

Nov 29 2022

100% of Swag Proceeds Donated for Giving Tuesday

For Giving Tuesday, I’m donating 100% of swag proceeds to nonprofits.

  • Short-sleeve shirts
  • Long-sleeve shirts
  • Sweatshirts
  • Mugs
  • Totes
  • Pencil cases

Browse the shop: https://depictdatastudio.creator-spring.com/

Now through Tuesday, December 6, 2022.

– Ann K. Emery

Ann K. Emery of Depict Data Studio is wearing a shirt that says "Trend Setter" and is pictured alongside shirts, mugs, and pencil cases.

Written by cplysy · Categorized: depictdatastudio

Nov 28 2022

Implementation Science: The Best Thing You’ve Never Heard of as an Evaluator

This article is rated as: Eval is my main role · I do some eval


Much like you could argue that research and evaluation are related, perhaps members of the same family tree, I like to think of Implementation Science as a distant cousin of evaluation; one that comes for a really fantastic visit once in a while.

I was granted the opportunity to take a course a couple of years ago that opened my eyes to the world of Implementation Science. The course introduced me to new approaches, frameworks, and tools that can be used in evaluation.

So, what is this world? Let’s start with a definition and some background. I think you’ll begin to see why Implementation Science is a great relative of evaluation. 


What is Implementation Science? 

Implementation Science is a field that examines the methods and strategies that enable the successful implementation of evidence-based practices. It was established only in the early 2000s, in response to the all-too-common gap between best-practice research and behaviour change.

Do we all know the story of handwashing? You can read a summary of it here, but quickly: Ignaz Semmelweis made the connection between handwashing and deaths on a maternity ward in 1847, but Semmelweis was ineffective at communicating and spreading his learning (i.e., no behaviour change was implemented!). It wasn’t until the 1860s–1880s, when germ theory was established, that handwashing became more commonplace and mortality rates decreased. Twenty years of unnecessary death. So, what was the problem? Implementation Science tries to answer that.

While the handwashing story sounds like one that could only happen in the 19th century, the problem persists today. The research community and academia regularly determine the efficacy of treatments and interventions that never spread to common practice. It’s now a commonly cited fact that it takes 17–20 years for clinical innovations to become practice. This paper, written in the year 2000, shows how long it took for some effective therapies to reach even minimal rates of use.

Implementation Science was developed as a means to address this research/practice gap.

So, when a new, evidence-based program or intervention is designed and is ready to be operationalized, Implementation Science directs you to focus on how best to do that:

  • Which stakeholders should you engage? 

  • What barriers or obstacles can you anticipate and mitigate? 

  • What enablers can you put in place? 

  • How can you be sure the program is implemented with fidelity? 

  • How can you implement in a way that promotes sustainability, or can uncover lessons for spread and scale? 

Implementation Science aims to bridge the gap from what you know to what you do and offers frameworks and structure to do this.

Now, despite being a relatively young field, there’s still a lot to dig through in Implementation Science. There are whole courses on Implementation Science (like the one I took) and it even has its own journal. I’ll focus on how it relates to evaluation and why you might use it, after just a little more context.


A Snapshot of the Implementation Science Toolbox

Implementation Science has an overwhelmingly large toolbox. There are many, many frameworks, models, and tools that can be applied in various contexts. I’ll summarize just a few that are likely the most relevant to evaluation, sharing links that will lead you to more details if you want to dive in. After a brief description of a few models, I’ll follow with some real-world examples.

1. Knowledge to Action:

A process model used to describe and/or guide the process of translating research into practice. It has been adopted by the Canadian Institutes of Health Research (CIHR) as a core component of their approach to Knowledge Translation.

2. Determinant Frameworks:

Describe general types of determinants that are hypothesized to influence implementation outcomes (e.g., fidelity, skillset, reinforcement).

  • PARIHS (Promoting Action on Research Implementation in Health Services) uses the Organization Readiness Assessment to explore identified determinants.

  • Theoretical Domains Framework (TDF) integrates several theories into 14 core domains.

  • CFIR: (Consolidated Framework for Implementation Research) is a practical guide for assessing barriers and enablers during implementation. CFIR has a website dedicated to its use that includes guidance for use in evaluation, and a question bank.

3. Classic Theories

Rogers’ Diffusion of Innovation was one of the first theories to suggest that implementation, or diffusion of behaviour change, is a social process.

COM-B (Capability, Opportunity, Motivation, Behaviour) uses a behaviour change wheel to support the design of interventions.

NPT (Normalization Process Theory) assesses how behaviour change is embedded into regular routine. It includes a 16-item assessment scale centered on four core constructs.

Obviously, that was a pretty high-level run-through of just a few models, but all those links will give you more detail should you wish to learn more about any of those models.


Where does evaluation fit into all of this?

When you read those questions posed above about how to implement a new intervention, did it make you think of anything in evaluation? It did for me! FORMATIVE EVALUATION! When it comes time to ask your formative evaluation questions, Implementation Science can be a great guide.  

When you think about conducting formative evaluation, aside from “what’s working” and “what’s not working”, it may be difficult to ensure you are asking about the right factors (or determinants!) that may impact successful program implementation. Much like RE-AIM provides structured guidance to ensure you pay attention to five core domains of public health interventions, these Implementation Science models, frameworks, and tools offer tips about potentially overlooked factors that contribute to program success or failure – things that we can be evaluating.

Let’s use CFIR as an example because it’s my favourite. I have often navigated to their question bank and asked myself, “Do my evaluation plan and data collection tools collect enough information to be informative about….”

  • the intervention itself? 

    • How might the strength and quality of the evidence for the intervention impact the outcomes I am evaluating?

  • the external context (or outer setting)?

    • How might policies or incentives impact fidelity to the intervention?

  • the inner setting?

    • How might team cultures or readiness for change affect the speed at which we expect to see the outcomes met?

In a recent project, the CFIR guide helped me to think about the importance of having a champion for the initiative. Without a champion who is passionate about the work and promotes its importance, this side-of-desk, in-kind-funded program could easily lose momentum. So, in a pulse survey tool I had created for the operational team, I added a question asking whether the team felt they could identify a project champion. I’ve used the answers as indicators to discuss with the team whether the project is on track and adequately resourced.

Normalization Process Theory (NPT) is another one I’ve used. When an apparently great intervention isn’t getting any traction, or is failing to spread or scale, why is that? It wouldn’t be out of scope to ask an evaluator to help answer that question. NPT offers guidance about where to look. Can you assess or measure:

  1. Coherence: does the team understand the intervention? 

  2. Cognitive Participation: is there sufficient direction and messaging to support this intervention?

  3. Collective Action: is the team empowered to act? Do they have the right tools and resources?

  4. Reflexive Monitoring: what are the team’s reactions to doing things differently?

On a project evaluating the implementation of clinical pathways in primary care, NPT helped to guide evaluation of physicians’ mental models. That is, how do physicians normally implement pathways: what enables use and what poses a barrier? The four core constructs of NPT helped to ensure we were evaluating actual behaviour change.


The distinction between formative (or process) evaluation and Implementation Science is blurry, for me anyway, and I think there’s some evidence to say I’m not alone in this thinking: Implementation Science even claims RE-AIM as an implementation model in multiple publications. But formative evaluation and Implementation Science are different. Formative or process evaluation aims to determine whether an initiative is on track to meet its outcomes, whereas Implementation Science doesn’t look at overall effectiveness, only at the effectiveness of the implementation strategy. Implementation Science assumes the intervention is already evidence-based, proven best practice, whereas evaluation might be looking to build that evidence base.

I think being knowledgeable about Implementation Science can only make our evaluation work stronger. I don’t think we need to implement an entire framework with academic-level rigor for it to be useful. I like the “borrow and steal” approach, where I feel like Implementation Science is giving me insider information, pointing me to look at proven determinants of program success that might sometimes be overlooked in our traditional evaluation frameworks.

Do you have other fields that you like to borrow and steal from? Let me know what they are!

Written by cplysy · Categorized: evalacademy

Nov 28 2022

Developmental Evaluation: An overview

This article is rated as: I’m new to eval


As you might know, through our sister company, Three Hive Consulting, we support healthcare and non-profit organizations to achieve their mission by uncovering insights that drive impact through evaluation. 

Recently, we’ve had several clients ask us about Developmental Evaluation (DE) including what it is and how it’s done. So we’ve pulled together a few main points on DE below.


How does Developmental Evaluation work?

  • Developmental Evaluation (DE) provides close-to-real-time feedback about what is working well and what’s not, facilitating parallel streams of planning, acting, and doing

  • DE tailors your evaluation to your needs and the specific context

  • In contrast to formative or summative evaluation, DE supports the creation, development, or adaptation of a program as it is being implemented

  • As part of DE, the evaluator is embedded in the program as a member of the team


What Developmental Evaluation is NOT

  • Developmental Evaluation (DE) is not an evaluation method; there is no one right way of doing it

  • DE is not about product or service improvement


When should Developmental Evaluation be used?

  • Developmental Evaluation (DE) is for emergent, volatile, and exploratory programs – those whose teams don’t yet know what activities to do or steps to follow to meet their goals

  • DE is most appropriate when working in complex environments where the route to change is non-linear and cannot be easily predicted

  • DE is suited to socially complex situations, innovation, radical program re-design, and crises


“Developmental Evaluation (DE) is an investigative evaluation approach that supports the development of social change initiatives in complex or uncertain environments.”

— Michael Quinn Patton, 2008


Have you used Developmental Evaluation before? Let us know your thoughts in the comments below!

Written by cplysy · Categorized: evalacademy

Nov 27 2022

Five elements to include in your reporting style guide

This article is rated as: Eval is my main role · I do some eval · I’m new to eval


A style guide is a time-saving tool that helps you be consistent in your formatting when creating client and public-facing products, from evaluation plans to reports and presentations. When working with a team, style guides ensure that team members are working efficiently as they create evaluation products. 

Ideally, you start a style guide early in the project, well before the first deliverable is due, but it can be made at any point. Creating a style guide doesn’t have to be complicated; it can be laid out in whatever way makes the most sense for your work.

There are five elements I include in every style guide:

  1. Fonts

  2. Heading Levels

  3. Colours

  4. Icons, Logos, & Images

  5. Charts


1. Fonts

One of the simplest ways to make your project stand out is to move away from using default fonts. You don’t have to download or buy fonts, but if your team uses both PCs and Macs, be wary of fonts that aren’t available on both operating systems. Google Fonts offers free font downloads, but be aware that you need to embed the fonts in the document if you want your clients to be able to see them.

A few font rules to remember:

  • Serif fonts (the ones with little feet) are better for body text, for example

    • Garamond

    • Georgia

    • Palatino Linotype

  • Sans-serif fonts (the ones without little feet) are better for headings, for example

    • Arial

    • Calibri

    • Franklin Gothic Book


2. Heading Levels

If you aren’t using Microsoft Word’s built-in Heading Styles, let me take a second to convert you. Using its pre-set heading styles helps you build structure into your documents and allows you to use the navigation pane to quickly find pieces of information and build an automatic table of contents. Gone are the days of manually bolding and italicizing different headings to help break up your documents.

Even if you build reports in PowerPoint, creating different heading styles will help readers move through your document with ease. The defaults in Word are a great place to start. Under the Design tab, you can select different styles or create your own custom style sets. I suggest starting with 3–4 heading levels, a style for quotes, and a style for information you want to stand out (call-out boxes, figures, or captions).

Quick tricks for modifying heading styles:

  • Add extra space before and after level 1 and 2 headings by clicking Format > Paragraph in the Modify Style box. This will give your document much-needed white space (see our article on renovating your evaluation report for more on white space)

  • Indenting heading levels 3 and 4 can also help them stand out and add some white space

  • Adding borders (I’m partial to a wide left border) can help visually denote or colour-code headings


3. Colours

Using a unique colour palette is another way to make your report stand out. There’s a lot that goes into choosing colour-blind-accessible colour palettes with appropriate contrast, so I won’t get into that here. But if your client, company, or project has a brand colour scheme or logo, that’s a great place to start. Colour should be used intentionally, so your style guide should specify how you want each colour to be used. Check out our article on using colour consistently.

Quick tips:

  • Save your colour palette as a custom palette so you can transfer it easily across Microsoft programs.

  • Determine what colours will be used in your charts to make sure that the same hues and shades are used in the same ways. For example, your style guide should outline what shade will be used for each response option (strongly agree, strongly disagree, etc.) so that all graphs with a five-point scale are consistent.


4. Icons, logos, and images

Typically, I add icons at the end, but if you are writing a longer report and hope to use icons all the way through, a style guide is a tool to help organize them. Pick icons that are of a similar style (e.g., thick vs. thin lines, filled in or not). Add all company, client, or program logos you need in high-quality formats to the style guide, or save them in a folder together.

See our article on using images in your reports.


5. Charts

Determining the basics of how your charts will look saves you from spending hours at the end of a report changing charts to look the same. Determine the font size of your legend, axis titles, labels, and chart titles ahead of time to keep them consistent, especially if you’re working with a team.
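Most evaluation charts live in Excel or PowerPoint, where your style guide is applied by hand. But if your team happens to build any charts in code, the same decisions can be written down once and applied to every chart automatically. Here is a minimal sketch in Python with matplotlib, assuming that’s your charting tool; every font, size, and hex colour below is a placeholder to swap for your own style guide’s values, not a recommendation:

```python
import matplotlib.pyplot as plt
from cycler import cycler  # ships with matplotlib

# Illustrative placeholders only -- substitute the fonts, sizes,
# and colours your own style guide specifies.
STYLE_GUIDE = {
    "font.family": "sans-serif",
    "font.sans-serif": ["Arial"],  # label font from the guide
    "axes.titlesize": 14,          # chart titles
    "axes.labelsize": 11,          # axis titles
    "xtick.labelsize": 9,          # axis labels
    "ytick.labelsize": 9,
    "legend.fontsize": 9,          # legends
    # One colour per data series, in the order series are plotted.
    "axes.prop_cycle": cycler(color=["#1f3864", "#2e74b5", "#9dc3e6"]),
}
plt.rcParams.update(STYLE_GUIDE)

# Every chart built after this point picks up the same styling,
# so teammates don't have to restyle each figure by hand.
fig, ax = plt.subplots()
ax.bar(["Agree", "Neutral", "Disagree"], [30, 8, 12])
ax.set_title("Example survey item")
ax.set_ylabel("Responses")
fig.savefig("example_chart.png", dpi=200)
```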

We have an article to help you improve your charts if you need guidance. 


At the end of the day, your style guide is exactly that: a guide, or a starting place. Don’t worry if it needs to change slightly as you put your report or presentation together.

Written by cplysy · Categorized: evalacademy
