
The May 13 Group



Dana Wanzer

Aug 16 2021

Dissertation RQ1: To what extent are interpersonal and research factors related to use?

This blog post is a modified segment of my dissertation, done under the supervision of Dr. Tiffany Berry at Claremont Graduate University. You can read the full dissertation on the Open Science Framework here. The rest of the blog posts in this series on my dissertation are linked below:

  1. Factors that promote use: A conceptual framework
  2. Defining evidence use
  3. Overview of my dissertation study: sample, recruitment, & measures
  4. Question 1: To what extent are interpersonal and research factors related to use? 
  5. Question 2: To what extent do interpersonal factors relate to use beyond research factors?
  6. Question 3: How do researchers and evaluators differ in use, interpersonal factors, and research factors? 

To answer this research question, I examined correlations between self-reported interpersonal (i.e., relationships, communication, cooperative interdependence, commitment to use, stakeholder involvement) and research/evaluation factors (i.e., relevance, rigor) with use (i.e., instrumental, conceptual, and process use). These correlations are shown in the figure below.

Overall, all of the interpersonal factors except two items in the stakeholder involvement scale were moderately correlated with use, with correlations ranging from r = .19 to r = .38. Both research relevance and rigor were correlated with use, although relevance (rs between .24 and .34) was more strongly correlated with instrumental and conceptual use than rigor was (rs between .15 and .22).
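If you are curious how correlations like these can be computed, here is a minimal illustrative sketch in Python. The data frame, column names, and values are invented for the example; this is not the dissertation's actual data or analysis code.

```python
import pandas as pd

# Hypothetical respondent-level scale scores (each column is a scale mean).
# All names and numbers below are made up for illustration only.
df = pd.DataFrame({
    "relationships":    [4.2, 3.8, 4.5, 3.1, 4.0, 3.6],
    "communication":    [3.9, 3.5, 4.4, 2.8, 4.1, 3.3],
    "relevance":        [4.0, 3.2, 4.6, 3.0, 3.8, 3.5],
    "rigor":            [3.5, 3.9, 4.1, 3.3, 3.7, 3.4],
    "instrumental_use": [3.8, 3.0, 4.3, 2.9, 3.9, 3.2],
    "conceptual_use":   [4.1, 3.4, 4.5, 3.2, 4.0, 3.5],
    "process_use":      [3.6, 3.1, 4.2, 2.7, 3.8, 3.0],
})

factors = ["relationships", "communication", "relevance", "rigor"]
uses = ["instrumental_use", "conceptual_use", "process_use"]

# Pearson correlations between each factor and each type of use
corr_table = df.corr().loc[factors, uses]
print(corr_table.round(2))
```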

These findings support two prior hypotheses I had coming into this research study:

  1. Interpersonal factors are perhaps more important for process use than for instrumental and conceptual use.
  2. Relevance is more strongly correlated with use than rigor, although this was only found for instrumental and conceptual use.

I also examined some demographic factors to see how they related to use. Partnerships that had been together longer reported greater instrumental (r = .20), conceptual (r = .20), and process (r = .25) use, as well as greater rigor (r = .16) and less conflict among partners (r = -.20). Partnerships with more members reported greater instrumental (r = .14) and conceptual (r = .15) use, lower-quality relationships (r = -.18), lower commitment to use (r = -.17), and less conflict among partners (r = -.20). Interestingly, partnerships that reported using an RCT also reported higher levels of process use (d = .47), instrumental use (d = .39), and conceptual use (d = .45).
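Since the RCT comparison above is reported as a standardized mean difference, here is a small hedged sketch of a Cohen's d calculation using a pooled standard deviation. The scores and group labels are hypothetical, not the study's data.

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d using the pooled standard deviation of two groups."""
    g1, g2 = np.asarray(group1, dtype=float), np.asarray(group2, dtype=float)
    n1, n2 = len(g1), len(g2)
    pooled_sd = np.sqrt(
        ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
    )
    return (g1.mean() - g2.mean()) / pooled_sd

# Hypothetical process-use scores for partnerships that did vs. did not use an RCT
rct_scores     = [4.1, 3.8, 4.4, 3.9, 4.2, 3.7]
non_rct_scores = [3.5, 3.7, 3.2, 3.9, 3.4, 3.6]
print(round(cohens_d(rct_scores, non_rct_scores), 2))
```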

Lastly, I asked participants to respond to an open-ended question about the quality of relationships among members. These responses were coded into four levels of relationship quality, which you can see in the table below. Although not statistically significant, there was a slight increase in use across the four levels of relationship quality.

Written by Dana Wanzer

Aug 09 2021

Dissertation: Overview of the study

This blog post is a modified segment of my dissertation, done under the supervision of Dr. Tiffany Berry at Claremont Graduate University. You can read the full dissertation on the Open Science Framework here. The rest of the blog posts in this series on my dissertation are linked below:

  1. Factors that promote use: A conceptual framework
  2. Defining evidence use
  3. Overview of my dissertation study: sample, recruitment, & measures
  4. Question 1: To what extent are interpersonal and research factors related to use? 
  5. Question 2: To what extent do interpersonal factors relate to use beyond research factors?
  6. Question 3: How do researchers and evaluators differ in use, interpersonal factors, and research factors? 

This study is a good example of working with what you end up with. The original methods failed miserably, and we had to shift our approach so that we at least ended up with something usable.

Originally, I sought to recruit practitioners by having researchers and evaluators ask their practitioner partners to participate, which would also have ensured that actual partnerships were represented. Wow, did that fail miserably! In the end, I had about 11 paired partners, which was not enough for the methods I had originally planned.

Fortunately, I also recruited through the American Evaluation Association (AEA) and the American Educational Research Association (AERA), as well as, to a lesser extent, through existing RPPs online and through social media. I ended up with roughly even numbers of researchers (n = 94), evaluators (n = 116), and practitioners (n = 82), after a meager ~13% response rate and a roughly 50% completion rate.

The survey in this study went through the Questionnaire Appraisal System and cognitive interviews (n = 6) for pretesting. If you’re interested in what changed as a result of both those processes, check out the appendices in the full dissertation.

The survey first asked participants whether they primarily identified as a researcher, evaluator, or practitioner in the particular partnership they would be describing. Definitions were provided so it was clear what I meant by these terms. Next, to prime participants to think about their partnership, they were asked to provide some demographic information about it, including sector, location, number of members, how long it had been together, and the purpose of the partnership.

Third, they were provided three scales on evidence use: one on instrumental use using a mixture of items from three separate questionnaires, one on conceptual use by NCRPP (2016), and one on process use that was a mixture of items from two separate questionnaires. Then participants completed scales on interpersonal factors: relationships (High Quality Work Relationships Scale by Carmeli et al., 2009), communication (combined scale), commitment to use (self-created), cooperative interdependence (adapted from Johnson & Norem-Hebeisen, 1979), and stakeholder involvement (adapted from Weaver & Cousins, 2004). Last, they completed two scales on research factors: relevance and rigor, both of which were self-created.

Finally, participants responded to more personal and partnership demographics, such as their position in the partnership, level of involvement and decision-making, education level, and more. The survey took participants a median of 23 minutes to complete, which made it a bit too long and likely contributed to the low participation and completion rates.

Most participants worked in education (57%), in the United States (88%), and had been in partnership for 5 or more years (41%). The primary purpose of the partnerships was either to conduct and use rigorous research (61%) or impact local improvement efforts (26%).

Want more details on the internal consistency of the scales, the items in the scales, or the sample characteristics? Check out Chapter 2 and the appendices in the full dissertation.
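If you would like a sense of how internal consistency is commonly estimated, below is a minimal illustrative Cronbach's alpha calculation in Python. The formula is the standard one; the item responses are invented and unrelated to the dissertation's scales.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented responses to a 4-item scale (rows = respondents, columns = items)
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
]
print(round(cronbach_alpha(responses), 2))
```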

Written by Dana Wanzer

Aug 02 2021

Defining evidence use

This blog post is a modified segment of my dissertation, done under the supervision of Dr. Tiffany Berry at Claremont Graduate University. You can read the full dissertation on the Open Science Framework here. The rest of the blog posts in this series on my dissertation are linked below:

  1. Factors that promote use: A conceptual framework
  2. Defining evidence use
  3. Overview of my dissertation study: sample, recruitment, & measures
  4. Question 1: To what extent are interpersonal and research factors related to use? 
  5. Question 2: To what extent do interpersonal factors relate to use beyond research factors?
  6. Question 3: How do researchers and evaluators differ in use, interpersonal factors, and research factors? 

Broadly speaking, researchers and evaluators are all interested in promoting evidence use. However, use is a multifaceted concept, and there are a multitude of frameworks defining different types of use (Nutley et al., 2007). Researchers are often most interested in instrumental use, or using evidence directly to make changes in programming or decision-making (Nutley et al., 2007). However, Weiss (1979) recognized early on that instrumental use did not occur frequently and instead proposed conceptual use, or using evidence to influence one's thinking or attitudes about the problem (Weiss & Bucuvalas, 1980). Some researchers have promoted the idea of a continuum of use that goes from conceptual uses (e.g., increased awareness, knowledge, and understanding) to more instrumental uses (e.g., shifts in attitudes, perceptions, and ideas; changes in practice and policy; Nutley et al., 2007).

With the realization that use is not limited to the findings of research or evaluation, process use—the behavioral (e.g., instrumental) or cognitive (e.g., conceptual) changes after participating in a research or evaluation endeavor (Patton, 1997)—was promoted as another type of use that occurs as a result of participating in the decision-making of a research or evaluation study. Both findings and process use can lead to instrumental and conceptual use (Alkin & King, 2016, 2017).

Other types of use also exist. For instance, there are longer-term, more incremental uses such as influence (i.e., research or evaluation producing effects through intangible or indirect means; Kirkhart, 2000) and enlightenment (i.e., the gradual diffusion of ideas into the organization over a longer time period than conceptual use; Weiss, 1977). There are also primarily political uses such as symbolic use (i.e., commissioning research or evaluation with no intent to apply the results; Leviton & Hughes, 1981), legitimative use (i.e., using evidence to justify decisions already made about the organization; Patton, 2008), persuasive use (i.e., using evidence to support one's position; Leviton & Hughes, 1981), and imposed use (i.e., a higher-level authority mandating some form of evidence use; Weiss, Murphy-Graham, & Birkeland, 2005). Evidence can also go unused (i.e., nonuse), be used inappropriately (i.e., misuse), or be used in unintended ways (i.e., unintended use) (Patton, 2008).

Rather than focusing on “promoting evidence use,” I encourage you to think critically about what type of use you want to happen and how you will promote that particular type of use. For example, in my dissertation study I found that relationship quality was related to instrumental, conceptual, and process use, but relevance was only related to instrumental and conceptual use and commitment to use was only related to process use. This suggests specific factors relate to specific types of use.

Written by Dana Wanzer

Jul 26 2021

Factors that promote use: A conceptual framework

This blog post is a modified segment of my dissertation, done under the supervision of Dr. Tiffany Berry at Claremont Graduate University. You can read the full dissertation on the Open Science Framework here. The rest of the blog posts in this series on my dissertation are linked below:

  1. Factors that promote use: A conceptual framework
  2. Defining evidence use
  3. Overview of my dissertation study: sample, recruitment, & measures
  4. Question 1: To what extent are interpersonal and research factors related to use? 
  5. Question 2: To what extent do interpersonal factors relate to use beyond research factors?
  6. Question 3: How do researchers and evaluators differ in use, interpersonal factors, and research factors? 

There has been a lot of research examining the factors that promote evidence use in both research and evaluation. Multiple frameworks for categorizing these factors have been proposed (Alkin & King, 2017; Nutley, Walter, & Davies, 2007; Palinkas et al., 2011). In particular, King and Alkin (2018) stress that use occurs when certain types of users interact with certain types of researchers, who conduct research in a certain way. These categorization schemes can be summarized into intrapersonal (e.g., researcher and practitioner factors), interpersonal, organization, and research factors, and my hypothesized relationships among them are shown in the figure below.

Note: The paths from interpersonal and research factors to evidence use were the only ones I tested in my dissertation study.

In this conceptual framework, researcher and practitioner factors lead to evidence use directly, but they also lead to evidence use indirectly through research factors (e.g., rigor, relevance, timeliness, credibility) and organization factors (e.g., organizational capacity and size), respectively. Furthermore, researchers and practitioners must also work together to promote evidence use through interpersonal factors (e.g., relationship quality, communication, commitment to use, stakeholder involvement, cooperative interdependence).

Below, I will briefly discuss some of the literature on these five factors and how they promote evidence use.

  1. Organizational factors: Various researchers and evaluators have suggested organizational context affects evaluation use, including the age, development, and size or scope of the organization or program; institutional arrangements and autonomy within the organization; and community influences on the program (Alkin & King, 2017).
  2. Research/evaluation factors: Much research has examined the role of rigor in promoting use, but ample evidence suggests that (a) researchers and practitioners differ in their definitions of rigor and (b) rigor is perhaps necessary but not sufficient to promote use. Rather, the research or evaluation must also be relevant, trustworthy, useful, and accessible to promote use (Alkin & King, 2017; Tseng & Gamoran, 2017a).
  3. Practitioner factors: These factors include practitioners’ predispositions about research or evaluation, including their prior experience with evaluation (Taut & Brauns, 2003) and anxiety towards evaluation (Donaldson, Gooler, & Scriven, 2002), as well as their interpersonal skills and the personal factor. The personal factor is “the presence of an identifiable individual or group of people who personally care about the evaluation and the findings it generates” (Patton, 2008, p. 66). When such stakeholders are present, evaluations are more likely to be used. However, little research has empirically examined the personal factor’s effect on evaluations (Fleischer, 2014).
  4. Researcher/evaluator factors: The same kinds of factors matter on the researcher or evaluator side, including their prior experience working with practitioners and their interpersonal skills. Like the personal factor, the interpersonal factor—the presence of an evaluator who personally cares about promoting use—has received some attention in the literature on promoting use.
  5. Interpersonal factors: There are a variety of interpersonal factors shown to promote use.
    • Communication quality: First, partners need to communicate effectively, which involves clear, frequent, and wide discussions across a multitude of media (Fleischer & Christie, 2009; Henrick et al., 2017; Maloney, 2017; Nutley et al., 2007). Furthermore, researchers need to communicate recommendations or implications of the evidence produced to support practitioners’ decision-making (Cousins & Leithwood, 1986; Fleischer & Christie, 2009; Maloney, 2017; Masaki, Custer, Eskenazi, Stern, & Latourell, 2017; Nelson et al., 2009).
    • Practitioner involvement: Evaluators have increasingly recognized the importance of stakeholder involvement to promote use (Alkin & King, 2017; Froncek & Rohmann, 2019; K. Johnson et al., 2009). However, there are a wide variety of theories and approaches to stakeholder involvement that differ in their approach, and research has yet to fully examine whether and how practitioner involvement promotes evidence use.
    • Relationship quality: Ample research on collaborations has stressed the importance of relationships, identifying them as a core dimension of effective partnerships and a factor that promotes use. Research on research-practice partnerships has thus far supported the importance of relationships in promoting use. However, much of this research treats relationships simply as trust, mutualism, and long-term commitment, which are important but insufficient for true relationships. Relationships also involve positive emotions such as liking one another, feeling close to each other, and feeling respect, dependability, warmth, and overall friendship (Berscheid, Snyder, & Omoto, 2004; Carmeli, Brueller, & Dutton, 2009; Dave et al., 2018; Marek, Brock, & Savla, 2015; Stephens, Heaphy, & Dutton, 2012). These qualities allow partnerships to move beyond transactional relationships to communal ones. Ample research from developmental, organizational, and social psychology supports the importance of relationships in partnership work.

Although my dissertation tested only a few of these pathways, my hope is that this conceptual framework can help others examine other research questions related to evidence use. For example, to what extent do research factors partially mediate the relationship between researcher factors and evidence use? As another example, to what extent do practitioner and researcher factors both lead to interpersonal factors that promote use? These are important questions that will help the field better understand how to promote evidence use in research and evaluation studies.

Written by Dana Wanzer

May 10 2021

Teaching like Evaluation

What if we taught like we evaluated? I have been imagining what teaching might look like if we approached it the way we approach our evaluation work. Just as there are a variety of approaches and strategies teachers use in their courses (e.g., inclusive pedagogy, team-based learning, problem-based learning, lecture-based instruction), there are a variety of approaches and strategies evaluators use in their evaluations (e.g., participatory, values-engaged, culturally responsive, democratic, developmental, utilization-focused, mixed methods, theory-driven).

Despite the variety of approaches to both teaching and evaluation, there is a general logic that underlies both teaching[1] and evaluation (Fournier, 1995; Scriven, 1980). At the heart of evaluation is a basic four-step process, which I illustrate with a small example after the list:

  1. Establish criteria of merit
  2. Develop standards of performance along each criterion
  3. Measure performance and compare with standards
  4. Synthesize results into an evaluative judgment
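To make those four steps concrete, here is a small toy sketch of my own; the criteria, standards, and scores are invented and not drawn from Fournier or Scriven. Criteria and standards are set up front, measured performance is compared with each standard, and the comparisons are synthesized into a single judgment.

```python
# Toy illustration of the general logic of evaluation. All names,
# thresholds, and scores are invented for this example.

# Steps 1-2: criteria of merit and a standard of performance for each
standards = {"clarity": 3.0, "engagement": 3.5, "mastery": 4.0}

# Step 3: measured performance on each criterion (e.g., rubric scores from 1-5)
performance = {"clarity": 4.2, "engagement": 3.1, "mastery": 4.5}

# Compare measured performance with the standards
meets = {criterion: performance[criterion] >= standard
         for criterion, standard in standards.items()}

# Step 4: synthesize into an overall evaluative judgment (simple rule: all criteria met)
judgment = "meets expectations" if all(meets.values()) else "needs improvement"

for criterion, met in meets.items():
    print(f"{criterion}: {'met' if met else 'not met'}")
print("Overall judgment:", judgment)
```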

As I critically rethink my teaching philosophy, I am increasingly considering how I can bring the general logic of evaluation—and the working logics of the approaches to evaluation I tend to use—to my teaching practice. I am starting to believe that if I evaluated the way we tend to teach, I would not retain clients for very long, so it is important that I change how I approach my teaching.

The Typical Teaching Approach

From my experience teaching, reading about teaching, and discussing teaching with other instructors, this is roughly what most teaching looks like, mapped onto the four-step general logic of evaluation above:

  1. Instructors set the learning objectives prior to meeting with students, which often need to get approval at the department, college, and university level.
  2. Instructors determine what constitutes the grading rubric or expectations for each assignment (note: not usually per learning objective).
  3. Instructors measure performance of students, although there may be some peer assessment or outside assessment components (e.g., client feedback in the case of a service learning course).
  4. Instructors determine the overall grade of the student.

Note how instructors determine pretty much all of it, often without consulting the primary ‘stakeholder’ in the education process: students. If we were to apply this approach to our program evaluations, it would look something like this:

  1. Evaluators set the criteria for evaluating the program before ever meeting with the program itself, but require approval from governing bodies and funders first.
  2. Evaluators develop standards of performance along those criteria, again without any program input.
  3. Evaluators measure the performance of the program, often without including the program in that measurement; a small piece of it may involve other programs assessing your program’s performance, or feedback from some outside stakeholders.
  4. Evaluators determine the final evaluative judgment of the program, again with no input from stakeholders.

Realizing this made me cringe. This is not how I would ever approach my evaluations. Although I know some folks do their evaluations like this, most tend to take a more collaborative or participatory approach, align their evaluations with the program’s needs, are culturally responsive, adjust the evaluation to the situation, and focus on promoting use.

Applying the logic of evaluation to teaching

So what if we were to instead apply how we typically approach our evaluation work to our teaching? This is what Fournier (1995) calls the working logics of evaluation. At the heart of any evaluation approach is the general logic of evaluation, but that general logic can look different depending on our approaches. Let’s see what teaching might look like if we apply my typical evaluation approach (e.g., utilization-focused, culturally responsive, contingency-based, theory-driven, etc.).

  1. Instructors collaborate with students to determine the learning objectives prior to the course. This is done based on a variety of factors, including what background knowledge and experience the student brings to the course, what they are hoping to get out of the course, and feedback and requirements from external sources (e.g., accreditation requirements, degree requirements, professional association recommendations, career expectations, research on the field of study, research on the pedagogy in the field of study). Although there may be set criteria across all students in the classroom, there is some individuality in the criteria per each student given individual needs.
  2.  Instructors collaborate with students to develop standards of performance. There are a variety of ways this could be done, including setting the standards of performance ahead of time together, the instructor providing standards and giving students an opportunity to reflect and revise, or letting students determine what their standard of performance is for an assignment. Again, external sources may have some sway here to help students get their degree and career they are aiming for.
  3. Instructors and students jointly measure performance. Peer evaluations and outside evaluations can continue to be used, but at least students are brought into the process through practices like asking students to grade themselves on their pre-determined standards of performance.
  4. Instructors and students jointly determine the overall grade and final evaluative judgment of the student. Again, the extent of control of this process by students may vary, but they can provide at least some input into the process and final judgment.

Moving forward

This reflection has led me to change how I approach my courses for the upcoming semesters. In particular, I have begun revising my courses to promote Ungrading (Blum, 2020), which at its heart promotes feedback and learning rather than instructor-led evaluation of students. Some of the authors in the edited volume go so far as to say that evaluation should not be done at all, although I am not sure I am willing to go that far. However, giving students some autonomy over their learning, meeting students where they are, matching the course to their needs, and encouraging the incorporation of feedback are all things I agree with and want to promote in my teaching. Just as program evaluation can both evaluate and promote learning, so too can our teaching, if we are thoughtful in how we approach it.


[1] I have not thought much about what the general logic of teaching is, but I would be curious whether anyone knows of references on the topic. The Fournier (1995) article points to references on the general logic of law, medicine, and science, but not education.

Written by Dana Wanzer
