Telling Your Tutoring Program’s Story Through Assessment: Part 1
By: Marissa Bastian on Apr 19, 2021 3:26:43 PM
Controlling the narrative of your tutoring program is important for a variety of reasons. Quite simply, you (and your team) are best positioned to tell your program’s story, share its successes, and identify areas for growth. To be able to articulate this narrative in a meaningful and compelling way, however, you need to infuse this story with evidence in the form of program data, which can only be discovered through assessment. Throughout this two-post series, we will discuss different approaches for tutoring program assessment, measures you can use to capture learning and success, and how you can incorporate this information into a report to your administration.
There are several critical elements related to conducting effective, comprehensive assessment of your tutoring program. The key is to address each element of assessment as a coordinated effort, with all of the pieces coming together to tell the full story of your program, the important work your tutors do, and the impact that tutoring has on not only the students being tutored but also the tutors themselves.
In this post, we’ll take a look at operational objectives and quantitative measures you can use to evaluate your overall program, assess its efficacy, and gauge satisfaction.
Tutoring Usage and Student Attendance
To demonstrate your program’s success, you’ll want to track attendance to understand the usage of your services. Accurate data is essential, so whether you only need to track tutoring sessions or you also offer workshops, review sessions, Supplemental Instruction, or other academic support services, make sure you can report on student usage reliably. At a minimum, you should be able to report the number of tutoring sessions completed per term and the number of unique students tutored each term. These numbers will show your administration the volume of support you provide and the percentage of the student population you reach. If your tutoring program is scoped to a specific student group or demographic, also report the number of those students served by your program compared to the total number of students in that population.
Looking at these two numbers year over year will show you and your administration a few things. First and foremost, you’ll be able to easily calculate the percent change of each data point year over year, which will show if your program is growing, maintaining, or declining in each of those areas. There is also a story to tell and more to learn from these numbers. For instance, if the number of unique students increases but the number of total sessions decreases, what does that tell you? If the number of students decreases but the number of total sessions increases, what can we learn? What additional questions do we need to ask—and then answer—to better understand this data?
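As a quick illustration, here’s a minimal sketch of that year-over-year calculation in Python. The counts, labels, and years are hypothetical placeholders; substitute whatever your own records contain.

```python
# Hypothetical year-over-year usage numbers; replace with figures from your own records.
usage = {
    "2019-20": {"sessions": 1850, "unique_students": 620},
    "2020-21": {"sessions": 2100, "unique_students": 575},
}

def percent_change(old, new):
    """Return the percent change from old to new."""
    return (new - old) / old * 100

prev, curr = usage["2019-20"], usage["2020-21"]
for metric in ("sessions", "unique_students"):
    change = percent_change(prev[metric], curr[metric])
    print(f"{metric}: {change:+.1f}% year over year")
```

In this made-up example, total sessions grew while unique students declined, which is exactly the kind of divergence that should prompt the follow-up questions above.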
You want to use the usage data to tell part of your program’s story, but it can’t tell the whole story, since it’s only one piece of the whole. In addition to these two data points, you can also report on the highest-served courses, the professors whose courses generate the most sessions, and more. What can this course- and professor-level data help you understand? For one, it can guide your future hiring decisions. It can also be interesting to see whether the courses students attend the most sessions for align with the courses your school has identified as high-DFW; if they don’t, that raises additional questions about why students are attending tutoring for those particular courses.
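Here’s a rough sketch of how you might produce that breakdown from per-session records. Everything in it (field names, course codes, the high-DFW list) is a hypothetical stand-in for whatever your scheduling or visit-tracking system actually exports.

```python
from collections import Counter

# Hypothetical per-session records exported from your visit-tracking system.
sessions = [
    {"student_id": "s1", "course": "CHM 2045", "professor": "Dr. Rivera"},
    {"student_id": "s2", "course": "MAC 2311", "professor": "Dr. Chen"},
    {"student_id": "s3", "course": "CHM 2045", "professor": "Dr. Rivera"},
    # ... one record per completed session
]

# Courses your institution has flagged as high-DFW (hypothetical list).
high_dfw_courses = {"CHM 2045", "BSC 2010", "PHY 2048"}

course_counts = Counter(s["course"] for s in sessions)
professor_counts = Counter(s["professor"] for s in sessions)

print("Highest-served courses:", course_counts.most_common(5))
print("Highest-served professors:", professor_counts.most_common(5))

# Heavily tutored courses that are NOT on the high-DFW list invite further questions.
unexpected = [c for c, _ in course_counts.most_common(5) if c not in high_dfw_courses]
print("Highly tutored but not high-DFW:", unexpected)
```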
Student Satisfaction and Value of Services
Another helpful piece of the assessment puzzle is students’ perception of your services. Gauging student satisfaction and how students value your services can help tell your program’s story. This information can provide insight into changes you might want to make to improve the quality of services, additional training sessions needed for your tutors, adjustments to center hours, and more. Establishing clear processes for gathering this feedback is critical to accruing enough responses for a sufficient sample size. You’ll want to make sure that you include your tutors in this process, as they have the most direct contact with students and can encourage their participation.
One way to gather consistent feedback about student satisfaction and value is through a session evaluation. Whether you offer in-person or virtual tutoring, you can ask students to complete an evaluation after each session, or after a certain number of completed sessions if you don’t want to overwhelm them. If you choose evaluations after each session, you’ll want to make sure that they’re fairly short and focused; with too many questions, students either won’t take them seriously or won’t bother completing them at all. This consistent feedback will allow you to make any necessary changes throughout the academic year, rather than needing to wait until a “survey period” or a specific touchpoint in the term. You can pivot quickly, depending on the change needed.
In addition to session evaluations, you can also employ student surveys to gather broader feedback about satisfaction and the value students place on your services. These surveys can be more extensive than the post-session evaluation discussed above, asking more in-depth questions, because they aren’t deployed as often. Disseminating this kind of survey at the end of each term can be very helpful, as that provides you with feedback to make any changes before the next term begins. It also ensures that you’ll gather data at least twice a year, which can help you track changes. If you want even more feedback, you can always deliver this kind of survey at mid-term and again at the end of the term. This gives you a shorter time frame in which to observe any real change, but it also gives you more consistent feedback from students and more opportunities to pivot and ensure quality control.
Retaining and Graduating Students
We know that retention and graduation rates are two of the most critical data points at colleges and universities. Both are used internally to inform strategic planning, curricular decisions, and programmatic changes. The same can be done for your tutoring program using the retention and graduation rates of students who engaged with tutoring services.
Working with your school’s Office of Institutional Effectiveness (OIE), you can use retention and graduation rates as part of your tutoring program’s story (hopefully a success story). When incorporating these rates, you should determine a minimum threshold for the number of sessions a student must have attended in a given term, academic year, or college career to be included in the analysis. It’s difficult to suggest, let alone prove, tutoring’s impact on a student’s retention if they’ve only attended a single tutoring session. Consider your program’s structure and any limits placed on students before settling on that minimum number of sessions.
Work with your OIE to see how well that narrowed list of students was retained compared to the school’s overall retention rate. If you’ve never looked at graduation rates before, you can look back at previous years to see how many students who attended X number of sessions graduated within four and six years, or whichever benchmarks your institution prefers. Each year, you can then work with OIE to compare the graduation rate of students who attended the requisite number of sessions with the institution’s overall graduation rate. Again, as with most assessment measures, it’s difficult to prove causality between tutoring and retention or graduation; however, you can suggest a correlation between the two, thus inserting tutoring into the narrative of student success and retention at your institution.
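As a rough illustration, the sketch below compares the retention rate of students who met a minimum-session threshold against an institutional baseline. All of the IDs, counts, thresholds, and rates are hypothetical; the real inputs would come from your own records and your OIE.

```python
# Hypothetical inputs: per-student session counts from your records, a retained-student
# list from your OIE, and the institution's overall retention rate.
session_counts = {"s1": 6, "s2": 1, "s3": 4, "s4": 9}  # sessions attended this year
retained_ids = {"s1", "s3", "s4"}                       # retained to the following fall
institutional_retention_rate = 0.78                     # overall rate reported by OIE

MIN_SESSIONS = 3  # threshold chosen based on your program's structure

tutored_group = [sid for sid, n in session_counts.items() if n >= MIN_SESSIONS]
retained_in_group = sum(1 for sid in tutored_group if sid in retained_ids)
group_rate = retained_in_group / len(tutored_group)

print(f"Retention of students with >= {MIN_SESSIONS} sessions: {group_rate:.1%}")
print(f"Institutional retention rate: {institutional_retention_rate:.1%}")
```

The same pattern works for graduation rates: swap the retained-student list for a list of graduates within your chosen four- or six-year benchmark.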
Retaining and Graduating Your Tutors
Beyond student retention, another compelling data point to consider is the retention and graduation rates of the peer tutors themselves. Anecdotally, we know the impact that tutoring has on the tutors doing the work, from the training they receive, the help they provide, and the course mastery they achieve to the leadership skills they learn and the intellectual community they become a part of. This impact is often overlooked by university administrators, but it is just as important as the impact of tutoring on the students being tutored. Some tutors will view their job as just that: a job. Others, however, will find their place in the tutoring center, with their peers, and with their mentor; it may even become the connecting thread between them and the institution as a whole.
Work with your OIE to analyze the retention and graduation rates of your peer tutors against the institution's overall rates to see how they stack up. Without personal testimony from the tutors themselves about how being a tutor affected their decision to stay at the institution or graduate on time, you can’t necessarily prove causality (as mentioned above); but you can use this information to make a case for correlation and to present the tutoring role as a viable factor in each tutor’s retention. Don’t worry: we’ll talk more about utilizing feedback from tutors in the second post of this series.
What's Next?
There are many resources available for helping you begin or enhance your assessment process. The Council for the Advancement of Standards (CAS) provides myriad resources to guide you through extensive assessment of your learning assistance program through their LAP Standards and self-assessment guide. In addition, you can access the College Reading and Learning Association’s white paper on assessment for free here. CRLA’s second white paper, entitled “Assessment of Learning Assistance Programs: Supporting Professionals in the Field” by Dr. Jan Norton, University of Iowa, and Dr. Karen S. Agee, University of Northern Iowa, is an excellent resource for better understanding assessment, types of assessment methods, and best practices.
In our next blog post about assessment we’ll discuss student learning outcomes for both tutors and students being tutored, direct measures, and indirect measures. No matter where you are in your assessment journey, it’s important to remember that data is dynamic and should always be used to ask additional questions — it does not necessarily provide a static answer, nor should it. Data should be used to drive inquiry, further study, and a deeper investigation into the “why” or “how” of a program. Often, data is the starting point, not the end point.
Curious how Knack can help you enhance your data collection processes to unlock new insights into your tutoring program? Head over to partner.joinknack.com to connect with our team.