
Dana Wanzer

Sep 14 2022

Managing my Research Pipeline using Todoist

I’ve been writing for a long while about how Todoist is my brain. I’ve been using it since 2014, and every year it has gotten better and better at doing what I love: managing my tasks.

Want to learn how to use Todoist to help you have an efficient semester? Check out my mini course!

However, up until recently, I felt that was all it could do, and I kept looking for another system to better manage projects. I used Notion for a time, sharing four Notion templates for how I used it to track my research pipeline, student thesis projects, summer goals, and course prep notes. I used Notion for about six months before moving away from it. It was complicated, cumbersome, and just more hassle than benefit for me personally, although I still recommend it for folks interested in really powerful workspace platforms.

Next, I tried ClickUp. I was so excited to try it out. I thought it would be the answer! And for many people it is, but it was similarly more than I needed, and after three months of being all-in, I went back to Todoist yet again.

These days, I am using Todoist as my project management system. Does it have all the bells and whistles of other project management systems? Probably not. But does it do everything I need in an intuitive, easy-to-navigate fashion? Totally!

It was when they implemented the board view that I fell in love with Todoist for the thousandth time. With Kanban-style boards, which were really just another organizational scheme, I finally felt like I could do what a lot of other folks were doing in other systems. And the first place I used them was my research pipeline.

I’ve seen people use white boards, Excel spreadsheets, post-it notes, Trello boards, Notion, and so much more to track their research pipeline. And, in fact, I’ve tried them all too! But they would last only a few months before I forgot about them or just grew annoyed with them. That’s because they couldn’t integrate into my regular habits well. Sure, I could have done more to make them work, but I’d rather create systems that work for me than force myself into new systems that may not work.

Here’s how I set up my research pipeline in Todoist. First, I created a unique project called “Research Pipeline,” which I treat separately from my research project tasks (note the larger project called “Research”).

Second, within that project I have four sections, which I have personally chosen to be LitReview/Data Collection, Analysis/Writing, Under Review, and Published. Not shown here is a fifth section, Ideas, where I jot down notes on future potential projects. Note that Todoist doesn’t like certain characters in section names, which is why the actual names use underscores, but you can play around with whatever names work best for you.

Third, under “View” I choose to view as a board and group by default (which groups by sections). Then you get the view shown above!

Lastly, each research project or paper is added as an uncompletable task. Note that there are no circles to “complete” these tasks; instead, finished papers get moved to the Published section to celebrate my accomplishments! Although don’t look too closely there, because many of my published articles aren’t listed; I implemented this system after publishing most of my current articles 🙂

I use the Description field for journal or author information, and I use sub-tasks and comments to organize broad thoughts about the papers or projects that aren’t really specific tasks.

I then use the Research project (see the left menu for my projects list) to organize the tasks themselves. Tasks live in a Todoist project called Research, with labels indicating which research project they belong to. I then view as a board but group by labels. I could also organize them by sections, and it would do a similar thing.
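If you’d rather script this setup than click through the app, here is a rough sketch using the Todoist REST API (v2) from Python. To be clear, this is an illustration rather than how I actually built mine: the API token, the example paper, and the description text are all placeholders.

    import requests

    API = "https://api.todoist.com/rest/v2"
    HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder token

    # Create the dedicated "Research Pipeline" project with a board view.
    project = requests.post(
        f"{API}/projects",
        headers=HEADERS,
        json={"name": "Research Pipeline", "view_style": "board"},
    ).json()

    # Add the pipeline stages as sections (underscores stand in for
    # characters Todoist dislikes in section names).
    sections = {}
    for name in ["LitReview_Data Collection", "Analysis_Writing",
                 "Under Review", "Published", "Ideas"]:
        sections[name] = requests.post(
            f"{API}/sections",
            headers=HEADERS,
            json={"project_id": project["id"], "name": name},
        ).json()

    # Add a paper as an uncompletable task: a leading "* " removes the
    # completion circle. Journal/author info goes in the description.
    requests.post(
        f"{API}/tasks",
        headers=HEADERS,
        json={
            "content": "* Example paper title",
            "description": "Target journal: (placeholder)",
            "project_id": project["id"],
            "section_id": sections["Under Review"]["id"],
        },
    )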

Learn more about how to implement Todoist into your semester planning systems in my mini course!


May 20 2022

Comment on Evaluation is Not Applied Research by Dana Wanzer

In reply to Kerry Keating.

Lovely question, Kerry! I didn’t probe too much into the epistemologies or ontologies underlying these definitions, but I can imagine you have something there with your question. I would bet that post-positivists would be more aligned with the view that evaluation and research can (and should?) be value-neutral or value-free, whereas constructivists would both recognize and embrace the values inherent in who we are and the work we do. Might be a useful endeavor for future research!


Mar 19 2022

Ungrading and the Logic of Evaluation

Many of us who teach evaluation are familiar with the chocolate chip cookie exercise by Drs. Preskill and Russ-Eft in their book Building Evaluation Capacity. It’s a great introductory activity to help students understand and work through the logic of evaluation. Dr. Montrosse-Moorhead also wrote up a blog post detailing an adaptation of the activity suitable for online environments during the pandemic.

The basic premise of the activity is that you pick something to be evaluated (e.g., chocolate chip cookies) and then go through the basic four steps of the logic of evaluation (a small code sketch of these steps follows the list):

  1. Develop the criteria for evaluating the objects
  2. Set standards for performance
  3. Rate the objects based on those criteria and standards
  4. Synthesize an overall evaluative judgment (e.g., which is best)
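To make steps 2 through 4 concrete, here is a tiny illustrative sketch in Python; the cookies, criteria, weights, and ratings are all invented, and a weighted average is just one of many ways to synthesize a judgment.

    # Step 3: hypothetical ratings (1-5) against the agreed criteria.
    ratings = {
        "Cookie A": {"taste": 5, "texture": 4, "chip_density": 3},
        "Cookie B": {"taste": 3, "texture": 5, "chip_density": 5},
    }

    # Step 2: standards, expressed here as weights plus a minimum bar.
    weights = {"taste": 0.5, "texture": 0.3, "chip_density": 0.2}
    minimum = 2  # a rating below this fails the standard outright

    def synthesize(scores):
        """Step 4: collapse per-criterion ratings into one judgment."""
        if any(v < minimum for v in scores.values()):
            return None  # fails a standard, so no overall score
        return sum(weights[c] * v for c, v in scores.items())

    best = max(ratings, key=lambda cookie: synthesize(ratings[cookie]) or 0)
    print(best)  # the cookie with the highest weighted judgment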

When I began shifting some of my courses to ungrading (e.g., my evaluation courses and my interpersonal effectiveness course), one issue I had was that students had a difficult time determining what final grade they should get and documenting evidence for that grade. Some students graded themselves too harshly on factors I didn’t deem relevant (e.g., lateness wasn’t a big deal to me, but many students penalized themselves even if they turned things in a few days late), and some graded themselves too leniently (e.g., did not turn in any reading reflections or journals, two of the major sets of assignments in the discussion-based course, yet felt they had earned an A).

This difficulty in students grading themselves seemed to stem from not understanding how they have been graded in the past and therefore not knowing how to grade themselves, particularly when the criteria are more amorphous. Students have previously only really been involved in step 3 of the logic of evaluation: receiving their individual grades on assignments. They are rarely involved in developing criteria or setting standards, and the overall grade judgment can often feel mysterious or even unfair to them, since they are rarely involved in that process either.

To combat this difficulty, I decided to apply the chocolate chip cookie evaluation activity to their mid-semester grade reflection letters. My university requires that students know where they stand grade-wise at Week 6 of the semester, presumably so they know whether they should consider dropping the course. Regardless, in an ungraded course this means that students need to practice determining their current grade and supporting it with evidence. This formative exercise helps them practice what they will do summatively at the end of the semester and allows them not only to document and defend their current grade but also to reflect on what they will do differently for the rest of the semester to improve their performance.

This activity can be done in two ways, which I’ll document below.

Class-wide activity

In my interpersonal effectiveness course, I have mostly undergraduate students (n = 25) plus some graduate students in our program who had gone through the chocolate chip cookie exercise in our evaluation courses (n = 6). Thus, most of the students were unfamiliar with the activity but I had some students with familiarity who could support the other students. I clumped students into small groups to brainstorm ideas to bring back to the full class.

First, I had students brainstorm criteria in their small groups. I explained what criteria are and gave one example criterion (attendance). I had them pull up the syllabus again so they knew what activities, learning objectives, etc. we were doing throughout the semester. We then came together as a class, with each group reporting out their criteria, which we clumped together when groups came up with the same ones. As a class, they decided on the following criteria: attendance, participation and engagement, journals, reading reflections, group project, individual project, and growth throughout the semester.

Second, I explained to students what standards are and gave some examples of what standards can look like (e.g., pass/fail, rubric). As a class, we developed standards for the attendance criterion together so they could see what that looked like. Then I assigned the remaining criteria to the groups so each had 1-2 criteria to set standards for. They then reported out to the class, and we deliberated and discussed until there was group consensus.

The third and fourth steps were then done individually in a mid-semester grade reflection letter. For the third step, I presented them with the grading rubric (each criterion with its relevant standards) and had students rate their performance and document evidence and support for their rating on each criterion. The fourth step involved determining what final grade they would give themselves at this moment in the course and providing evidence and support for that overall judgment.

I also met individually with each student in short (usually 2-5 minute) meetings to discuss their letter. This was an opportunity for us to chat 1-on-1, for students to practice those interpersonal skills we’d been learning throughout the semester, and for me to adjust their grading reflections as necessary. Thankfully, this process only involved increasing a few students’ grades because they were too harsh on themselves; I no longer had any students who were too lenient in grading themselves.

Individual activity

Although I have not yet done this, the other option would be to have students individually determine their own criteria and standards for performance, rate themselves, and then synthesize their overall evaluative judgment. This allows more flexibility for students and provides them more autonomy in the grading process.

I plan on doing this with my graduate students in our evaluation concentration moving forward because they will already have had practice doing the class-wide activity as part of the evaluation program. This will be an extension of their learning.

Regardless of the course I implement this in, I would have students first submit their criteria and standards, perhaps doing a round of peer review or presenting to classmates to get feedback and more ideas for shaping their criteria and standards, and then get my approval before finalizing them. Then students would do the third and fourth steps the same as in the class-wide activity; the only difference is that their criteria and standards would vary from student to student and would need to be presented clearly in their letter.

Conclusion

Overall, I am very pleased with how this activity turned out. As I documented in my other blog post, I was increasingly concerned that my teaching practices did not align with how I practice evaluation. This process gives students more power and control in the grading process in a way that better supports them.


Aug 30 2021

Dissertation RQ3: How do researchers and evaluators differ in use, interpersonal factors, and research/evaluation factors?

This blog post is a modified segment of my dissertation, done under the supervision of Dr. Tiffany Berry at Claremont Graduate University. You can read the full dissertation on the Open Science Framework here. The rest of the blog posts in this series on my dissertation are linked below:

  1. Factors that promote use: A conceptual framework
  2. Defining evidence use
  3. Overview of my dissertation study: sample, recruitment, & measures
  4. Question 1: To what extent are interpersonal and research factors related to use?  
  5. Question 2: To what extent do interpersonal factors relate to use beyond research factors?
  6. Question 3: How do researchers and evaluators differ in use, interpersonal factors, and research factors? 

I was also interested in seeing how use, interpersonal factors, and research factors differed between researchers and evaluators. This is somewhat of an extension of one of my research studies examining how researchers and evaluators define evaluation and differentiate evaluation from research (if at all).

Participants who self-reported themselves as evaluators rated relationships, communication, cooperative interdependence, and commitment to use higher than participants who self-reported as researchers. There were no differences between the two groups on research factors or evidence use.

I also compared correlations of interpersonal and research factors with evidence use between self-reported evaluators and researchers. Correlations between interpersonal factors and both instrumental and conceptual use were generally stronger for researchers than for evaluators. Relevance was a stronger correlate of instrumental and conceptual use for evaluators than for researchers, whereas rigor was a stronger correlate of instrumental and process use for researchers than for evaluators. However, it should be noted that the z-tests comparing correlation strengths were not statistically significant.
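For readers curious about the mechanics, comparisons like these are typically done with Fisher’s r-to-z test for independent correlations. Here is a minimal sketch in Python; the correlation values and group sizes are placeholders, not the dissertation’s actual numbers.

    import math

    def compare_correlations(r1, n1, r2, n2):
        """Fisher r-to-z test for two independent correlations.
        Returns the z statistic and two-tailed p-value."""
        z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher transform
        se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # SE of the difference
        z = (z1 - z2) / se
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p

    # Placeholder values, not the study's data:
    z, p = compare_correlations(r1=0.45, n1=120, r2=0.30, n2=80)
    print(f"z = {z:.2f}, p = {p:.3f}")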

Overall, although evaluators rated interpersonal factors higher than researchers did and there were no differences in reported use, the correlations between interpersonal factors and use were stronger for researchers. These findings suggest that evaluators and researchers are achieving the same outcomes (i.e., the same levels of instrumental, conceptual, and process use) but getting to those outcomes differently. For example, researchers may see greater impact of interpersonal factors and rigor compared to evaluators, whereas evaluators may see greater impact of relevance and stakeholder involvement.


Aug 23 2021

Dissertation RQ2: To what extent do interpersonal factors relate to use beyond research factors?

This blog post is a modified segment of my dissertation, done under the supervision of Dr. Tiffany Berry at Claremont Graduate University. You can read the full dissertation on the Open Science Framework here. The rest of the blog posts in this series on my dissertation are linked below:

  1. Factors that promote use: A conceptual framework
  2. Defining evidence use
  3. Overview of my dissertation study: sample, recruitment, & measures
  4. Question 1: To what extent are interpersonal and research factors related to use?  
  5. Question 2: To what extent do interpersonal factors relate to use beyond research factors?
  6. Question 3: How do researchers and evaluators differ in use, interpersonal factors, and research factors? 

The first question simply asked how interpersonal and research factors were correlated with evidence use, whereas this research question examined the added variance explained by interpersonal factors above and beyond research factors. To do this, I analyzed the data using a structural equation model (SEM) relating evidence use to the latent interpersonal factor, one of the four stakeholder involvement items that did not load onto that latent factor, the two research factors, and years in the partnership. This model, shown below, had a good fit: χ²(56) = 125.20, p < .001, CFI = .848, TLI = .913, RMSEA = .059, SRMR = .059.
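For those who want to see what specifying a model like this can look like in code, here is a minimal sketch using the semopy package in Python. The variable names are invented stand-ins for the dissertation’s measures, and this is not the actual analysis code.

    import pandas as pd
    import semopy

    # Invented variable names standing in for the dissertation's measures:
    # a latent interpersonal factor measured by four indicators, and each
    # type of use regressed on the five predictors.
    model_desc = """
    Interpersonal =~ relationships + communication + interdependence + commitment
    instrumental ~ Interpersonal + control_dm + relevance + rigor + years
    conceptual ~ Interpersonal + control_dm + relevance + rigor + years
    process ~ Interpersonal + control_dm + relevance + rigor + years
    """

    data = pd.read_csv("survey_responses.csv")  # placeholder file name

    model = semopy.Model(model_desc)
    model.fit(data)

    # Fit indices comparable to those reported above (chi2, CFI, TLI, RMSEA)
    print(semopy.calc_stats(model).T)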

When examining all five predictor variables, interpersonal factors continued to be a significant correlate of instrumental, conceptual, and process use. Relevance was only significantly correlated with instrumental and conceptual use. Control of decision-making was weakly correlated with instrumental and process use. Years in the partnership was correlated with instrumental, conceptual, and process use. Rigor was not significantly correlated with use.

Overall, my hypothesis was partially supported: interpersonal factors were more strongly related to process use, somewhat equally strongly related to instrumental use, and less strongly related to conceptual use compared to relevance whereas rigor was not a significant explanatory variable. However, the number of years in the partnership was somewhat equally strongly related to each type of evidence use compared to both interpersonal factors and relevance.

The findings of both research questions support the importance of interpersonal factors—and especially relationship quality—as well as research relevance for instrumental, conceptual, and process use. The slight variations in correlation strength among interpersonal factors and relevance with the three types of use suggest it may be important to focus on some aspects over others to promote the type of use of interest. For example, it may be more beneficial to focus on relationships and promoting a commitment to use for process use, relevance for conceptual use, and relationships and communication for instrumental use.

