“Lazy Genius” Principles for Program Evaluation

This week at the Taylor Institute for Teaching and Learning’s Post-secondary Conference on Learning and Teaching, I had the pleasure of presenting alongside Rachel Stewart on a topic close to both our hearts: program evaluation. With the conference theme of “Reassessing Assessment,” we shared our five-year journey evaluating the Undergraduate Research Initiative through a different lens—Kendra Adachi’s “Lazy Genius” principles.

Now, if you’re wondering what a self-help framework has to do with academic program evaluation, that’s exactly the point! Sometimes the most powerful insights come from unexpected places. In our work, we’ve found that traditional evaluation approaches can become cumbersome, overwhelming, and—dare I say it—occasionally soul-crushing for everyone involved.

The beauty of the Lazy Genius approach is its core philosophy: be a genius about the things that matter and lazy about the things that don’t. Isn’t that exactly what thoughtful evaluation should do? Focus our energy where it matters most while letting go of unnecessary complexity?

Below, I’ll share how we’ve adapted each of Adachi’s 12 principles to create a more sustainable, human-centered approach to program evaluation. Whether you’re drowning in assessment data or just starting to build an evaluation framework, I hope these reflections offer both practical guidance and a gentle reminder that evaluation can be both rigorous and kind.

We were very happy and excited to present!

Here are the 12 Lazy Genius Principles as we have applied them to program evaluation:

☝️ Decide Once

“Limit your decisions by making certain choices once.” p. 35

Standardizing evaluation frameworks with reusable templates, consistent metrics, and automated processes allows evaluators to focus on insights rather than logistics. This reduces decision fatigue, ensures consistency, and establishes clear response protocols. By making key decisions once, teams can dedicate their mental resources to meaningful analysis and program improvement instead of repeatedly redesigning evaluation processes.

🐛 Start Small

“Small steps are easy; easy steps are sustainable; sustainable steps keep moving.” p. 39

Start with targeted evaluation questions about processes (“how is implementation going?”) or outcomes (“what worked, for whom?”) rather than a comprehensive assessment. This focused approach builds confidence, shows value quickly, and creates momentum for more extensive evaluation efforts.

🧙‍♂️ Ask the Magic Question

“What can I do now to make life easier later?” p. 48

Investing in preparation by asking “What can I do now to make life easier later?” transforms evaluation practice. Clear documentation preserves institutional memory through staff changes. Pre-scheduled evaluation touch-points ensure consistent data collection without scrambling. Organized file systems with logical naming conventions simplify historical data retrieval. These foundational systems require upfront investment but save significant time while improving evaluation quality and usefulness.

🌼 Live in the Season You Are In

“As you live in your season, embrace being honest about how you feel and be willing to learn from what you find.” p. 64

Align evaluation with program rhythms—conduct interviews during quieter periods, analyze data during administrative seasons, and present findings when groups are most receptive. This synchronization prevents evaluation from becoming burdensome during busy times, ensures higher quality data collection, and delivers insights when they can best influence program decisions.

🦾 Build the Right Routines

“The routine itself isn’t what matters. It’s simply the on-ramp to help prepare you for what does.” p. 86

For us, what matters is learning about our program without burning out ourselves, resources, or participants. Take a bird’s eye view of your program cycle and consider: what do I/participants need to prepare for, and when do my team/participants feel most overwhelmed? Your answers will identify key points to build routines like check-ins or debriefs.

🏡 Set House Rules

“House rules are simple choices that support what matters to you and your people.” p. 88

No house rule fits every house. Our two most important house rules were: no numbers without stories and no stories without numbers; and collect evaluation data from multiple sources (e.g., program records, surveys, interviews).

🧹 Put Everything in Its Place

“As you put everything in its place, you’ll see what doesn’t belong. Keep only what matters.” p. 110

Data management matters! A good organizational system helps identify what to keep and what to discard. Simple strategies like file naming conventions help organize and access important information. When evaluation materials have designated homes, you eliminate time wasted searching and prevent data loss.
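For instance, a naming convention that encodes date, program, instrument, and processing stage keeps files sortable and self-describing (these file names are purely illustrative, not our actual records):

2024-09_URI_survey_raw.xlsx
2024-09_URI_survey_clean.xlsx
2024-10_URI_interview-transcripts_coded.docx

Whatever pattern you choose, the point is to decide once and apply it consistently, so anyone on the team can find (and trust) a file years later.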

👩🏿‍🤝‍🧑🏼 Let People In

“Let people into your everyday life without apology. You don’t need to be in a crisis to ask for help.” p. 126

Evaluation thrives on collaboration. Include representatives from diverse groups to shape design and review findings. This ensures you measure what matters, creates ownership, and increases the chance findings drive real improvements. Students add essential lived experience. Tailoring how you share results transforms data into action. Many people want to participate—don’t hesitate to ask!

🍪 Batch It

“Batching is a specific kind of task done over and over before you move on to the next thing.” p. 129

Apply the “Batch It” principle by grouping similar evaluation tasks together for greater efficiency. Dedicate specific time for data collection, analysis sessions, and reporting rather than scattering these activities throughout your schedule. Consolidate feedback through thoughtfully timed group sessions instead of numerous individual meetings.

🤹‍♂️ Essentialize

“Name what really matters. Remove what’s in the way. Keep only the essentials.” p. 156

Regularly question if your methods, metrics, and questions remain useful, interesting, and valuable to prevent evaluation bloat and focus on what matters. Know what’s fixed (core program elements, required reporting) and what’s flexible (collection timing, reporting formats). Let go of outdated measurements while adding ones that capture emerging priorities. Above all, prioritize utilization—your evaluation must produce actionable insights that drive real improvement.

👉 Go in the Right Order

“The lack of tangible steps is killing you, isn’t it?” p. 159

There is an order to evaluation that makes life easier! The first three steps of any evaluation are: define what you are evaluating, determine your questions, and design your instruments. Evaluation success depends on proper sequencing.

💞💤 Be Kind & Schedule Rest

“Value where you are now… Reflect on where you want to go… Celebrate accomplishments.” p. 198

Tough evaluation feedback isn’t failure—it’s discovering opportunities for growth. Build reflection periods into your timeline, creating intentional pauses to process findings before acting. These intervals prevent reactive decisions and allow insights to mature. By treating evaluation as a learning journey rather than just an accountability mechanism, you foster a culture where continuous improvement flourishes and both successes and challenges contribute to your program’s evolution.


Applying these Lazy Genius principles has transformed how we think about program evaluation—making it more sustainable, insightful, and ultimately more useful. This framework has helped us discover that effective evaluation isn’t about doing more; it’s about doing what matters with intention and care.

What I find most valuable about this approach is how it honors both the data and the humans behind it. By being “lazy” about unnecessary complexity and “genius” about what truly drives program improvement, we create space for authentic learning rather than just checking boxes.

I’d love to hear your thoughts! Which of these principles resonates most with your work? What strategies have you found to make evaluation more meaningful and less burdensome? Or is there a particular principle you’d like me to expand on in a future post?

This week, the Friday “LinkFest” will be full of resources related to humanizing assessment and evaluation. If you have favourite tools, articles, or frameworks in this area, please share them in the comments!

Until then, remember: you don’t have to measure everything to learn something valuable.

Check out our handout on the 12 Lazy Genius Principles for Evaluation!

Based on the book: Adachi, K. (2020). The lazy genius way: Embrace what matters, ditch what doesn’t, and get stuff done. WaterBrook Press. All quotes are from Adachi (2020).

