Designing Data and Continuous Improvement Systems

See the Accelerator-Designed Tutoring Survey Instruments.

Overview: How do data drive improvement?

Collecting and analyzing data lets you make improvements in a targeted, strategic way. It allows your program to:

  • Measure progress towards tutoring goals and build in opportunities for reflection.
  • Make important information more accessible and digestible by gathering it in one place.
  • Avoid reinventing data-collection strategies every year.
  • Know exactly how to structure your data-collection tools (e.g., what survey questions to ask).
  • Set up an integrated way for your organization to intentionally review, tweak, learn, and improve.
  • Preserve institutional knowledge and maintain implementation quality as your program expands and/or founding staff move on from their original roles.

How do you design a measurement system to track progress towards your goals?

It is important to have a measurement plan whether you partner with a tutoring provider or grow your own program. If you partner with a provider, you will need to collaborate closely to develop a cohesive measurement system that provides both your district and the tutoring program with the information needed for continuous improvement. In many cases, existing data and/or data systems can be used or adapted.

All measurement plans should outline how to assess progress towards impact, whether that impact is defined in your contract with your provider or in the benchmarks from your own program’s Logic Model. Your measurement plan is a reusable, consistent roadmap for finding rigorous answers to questions like “Are we on track?”, “What are we doing well?”, and “How can we improve?”

Start with your goals and work backwards.

Metrics should not exist for their own sake. Instead, every metric your program measures should shed light on whether a specific goal in your provider contract, or a specific Action laid out in your Logic Model, is being implemented effectively enough to create its intended Outputs and Impact. For each goal, Output, or Impact, establish the criteria for success. Ask yourself:

  • How would we know that we had accomplished this objective?
  • What would need to be done for this ideal outcome to happen?
  • What benchmarks would we need to hit along the way to know that we are making progress?

Distinguish between process metrics and impact metrics.

Process Metrics

  • Collected continuously during implementation
  • Used to monitor progress and adjust accordingly
  • Ask “Are you doing what you set out to do?”
  • Derived from Logic Model Outputs

Impact Metrics

  • Collected cumulatively after implementation
  • Used to summarize and report performance
  • Ask “Did your work have the intended effect?”
  • Derived from Logic Model Impacts
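
To make the distinction concrete, here is a minimal sketch in Python (all records and field names are hypothetical, not drawn from an Accelerator tool) of how the two kinds of metrics might be computed from the same program data: a process metric checked during implementation (session attendance rate) and an impact metric summarized afterwards (average pre-to-post assessment growth).

```python
# Minimal sketch: one process metric and one impact metric computed from
# hypothetical program records. Field names are illustrative only.

sessions = [
    {"student": "A", "week": 1, "attended": True},
    {"student": "A", "week": 2, "attended": False},
    {"student": "B", "week": 1, "attended": True},
    {"student": "B", "week": 2, "attended": True},
]

scores = [
    {"student": "A", "pre": 420, "post": 455},
    {"student": "B", "pre": 390, "post": 430},
]

# Process metric: are sessions happening as planned? (checked continuously)
attendance_rate = sum(s["attended"] for s in sessions) / len(sessions)
print(f"Attendance rate: {attendance_rate:.0%}")

# Impact metric: did tutoring have the intended effect? (checked after the cycle)
avg_growth = sum(s["post"] - s["pre"] for s in scores) / len(scores)
print(f"Average score growth: {avg_growth:.1f} points")
```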

Determine how to measure each metric.

Start by asking whether you want to gauge the quality or quantity of each goal, Output, or Impact, then list data points that would help assess what you intend to measure. Determine how frequently you will collect these data, and choose an appropriate tool that can capture them accurately. Finally, set expectations or benchmarks for each metric at each relevant time interval. For more detailed guidance, see Developing a Performance Measurement Plan and the associated Template.
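
One way to capture these decisions is a simple, structured measurement plan: one entry per metric, linking it back to its goal or Output, the tool and cadence used to collect it, and the benchmark it will be checked against. The sketch below illustrates this in Python; every name and value is a hypothetical example, not an Accelerator template.

```python
# Sketch of a measurement plan: one entry per metric, each tied to a goal
# or Output, a data source, a collection cadence, and a benchmark.
# All names and values are hypothetical examples.

measurement_plan = [
    {
        "goal_or_output": "Students attend three 30-minute sessions per week",
        "metric": "Weekly session attendance rate",
        "quality_or_quantity": "quantity",
        "data_source": "Tutor attendance checklist",
        "frequency": "weekly",
        "benchmark": ">= 90% of scheduled sessions attended",
    },
    {
        "goal_or_output": "Students build strong relationships with tutors",
        "metric": "Student-reported relationship rating",
        "quality_or_quantity": "quality",
        "data_source": "Mid-year and end-of-year student survey",
        "frequency": "twice per year",
        "benchmark": ">= 80% of students rate the relationship 4/5 or higher",
    },
]

for row in measurement_plan:
    print(f"{row['metric']}: collect {row['frequency']} via {row['data_source']}")
```

The same structure works equally well as rows in a spreadsheet; the point is that every metric carries its goal, tool, cadence, and benchmark with it.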

What kinds of data-collection tools do you need?

Depending on the metrics you choose, different data-collection tools may be necessary to monitor progress. As a reminder, first attempt to use or adapt existing data-collection systems, rather than designing something new. This checklist can help get you started; it lays out the pros and cons of different data-collection tools:

  • Records and Checklists
    • Capture which elements of the program are being implemented as designed
    • Good for quantitative data, including attendance rates
    • Do not capture quality or root causes
  • Rubrics
    • Clarify and codify standards and provide concrete steps for improvement
    • Require significant time investment to ensure consistent application of more subjective rubric strands
  • Surveys
    • Compare subjective experiences in a standardized and quantifiable way both during and after implementation
    • Lack nuance and can be unrepresentative if response rates are low
  • Interviews
    • Provide more nuance
    • Are far more time-consuming to conduct and evaluate at scale (consider targeted interviews based on survey responses or interviewing a representative sample)
  • Standardized Test Scores
    • Useful and efficient for consistently measuring skill deficits before tutoring and documenting academic growth throughout the program
    • May provide an incomplete picture of students’ understanding of complex concepts
  • Student Work Samples
    • Provide a detailed picture of student mastery
    • Are time-consuming to evaluate (consider evaluating work samples for a subset of students based on performance on standardized tests)

For more detailed guidance, see Data-Collection Tools, Tutoring Survey Instruments (including survey templates), and Tutoring Surveys Brief (including examples).

How will you demonstrate impact to various stakeholder groups?

Develop a holistic measurement strategy, including non-academic measures based on stakeholder priorities.

  • Academic growth is usually the primary goal for high-impact tutoring, but not the only goal. Collect data across multiple dimensions to assess effects on a broad array of outcomes, including, for example, attendance and grades. Use surveys to qualitatively evaluate student experiences with tutors. Compare results for different student groups to ensure that your program is serving students equitably (see the sketch after this list).
  • Collect feedback from stakeholders (students, families, teachers, and administrators) to understand and improve program impact at all levels. While achievement data and feedback from school partners are critical, you should always include student voices when evaluating program impact.
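
As an illustration of the equity check mentioned above, the sketch below assumes your student-level results sit in a pandas DataFrame with hypothetical column names (student_group, attendance_rate, score_growth) and compares average outcomes by group:

```python
import pandas as pd

# Hypothetical student-level results; column names and values are illustrative only.
results = pd.DataFrame({
    "student_group":   ["ELL", "ELL", "Non-ELL", "Non-ELL", "IEP", "IEP"],
    "attendance_rate": [0.92, 0.85, 0.95, 0.90, 0.80, 0.88],
    "score_growth":    [34, 28, 31, 36, 22, 27],
})

# Compare average outcomes by group; large gaps flag areas to investigate
# before assuming the program is serving all students equitably.
by_group = results.groupby("student_group")[["attendance_rate", "score_growth"]].mean()
print(by_group.round(2))
```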

Develop systems for visualizing data for stakeholders.

  • Develop in-house capability for distilling data so that information can be presented in a digestible and actionable format aligned with stakeholder priorities. Some programs may have large databases and utilize software such as Tableau to visualize data, while other programs that operate at a smaller scale may find it sufficient to store data in well-designed Excel or Google Sheets spreadsheets.
  • The method you choose for visualizing data should allow users to sort the data and easily extract insights. Regularly gather feedback on your data-collection and visualization systems (the right cadence will depend on the specific tool) and improve these systems as part of your continuous improvement processes.
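
As a sketch of what a sortable, easy-to-scan summary might look like, the snippet below builds a school-by-month attendance table with pandas; all column names and values are hypothetical, and the same pivot could be built directly in Excel or Google Sheets.

```python
import pandas as pd

# Hypothetical session log; in practice this might be exported from an
# attendance tracker or student information system.
log = pd.DataFrame({
    "school":   ["Adams", "Adams", "Baker", "Baker", "Adams", "Baker"],
    "month":    ["Sep", "Oct", "Sep", "Oct", "Oct", "Sep"],
    "attended": [1, 0, 1, 1, 1, 0],
})

# School-by-month attendance rates: a small, sortable summary that can be
# rebuilt each reporting cycle and shared with stakeholders.
summary = log.pivot_table(index="school", columns="month",
                          values="attended", aggfunc="mean")
print(summary.round(2))
```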