We all know what a good online tutor is like. Of course we do. She’s friendly, approachable and… errr… nice… Yes, quite. It all starts to get a little hazy. How exactly do we judge whether someone is a ‘good’ online tutor or facilitator, apart from our own subjective impressions? And what of organisations that employ hundreds of online tutors – how to ensure that everybody is up to scratch? How to ensure accountability? In a nutshell, how to evaluate online tutors?
Traditionally, teaching skills (both f2f and online) have been evaluated with the help of checklists. Nothing wrong with a nice checklist, and there is one for K12 online tutors here. But how does this translate into helping online tutors develop? In other words, how can we help an online tutor whose checklist has a long list of negatives develop as a professional?
I recently attended an EdWeek webinar in which this topic was explored. Bryan Setser of the North Carolina Virtual Public School outlined a five-stage developmental program for evaluation that his organisation implements:
Bryan Setser’s PowerPoint is here.
Briefly, each of these five stages works as follows:
1 Teacher self-assessment
Online tutors are given a grid with detailed descriptors to evaluate their own performance. They evaluate themselves for each descriptor.
2 Virtual teaching action plan
Areas identified as needing work in the self-assessment stage are prioritised and structured so as to form an action plan for ongoing professional development. This action plan includes personal goals, institutional / school goals, as well as goals for students.
3 Pre-conference
This consists of a synchronous meeting in a video-conferencing platform with coaches/mentors to share the results of the first two stages above, and to clarify and focus the action plan.
4 Spot checks
Mentors/supervisors occasionally check tutor performance in the VLE, e.g. by looking at the contact log, course statistics, the gradebook (is it up to date?), and the quality of feedback and communication with students…
5 Evaluation
The areas identified through the four stages above are used for formative evaluation and just-in-time professional development.
A stoplight system is used for summative evaluation:
- red = danger, tutor not competent
- yellow = tutor needs help with x, y, z
- green = tutor doing well
Depending on the results of this, the cycle of stages 1 to 5 may start again. (Or, apparently, the tutor may get kicked out of the school if several cycles have already taken place with no improvement.)
I found this an interesting approach to online tutor evaluation, with its emphasis on (structured and supported) development rather than assessment. What I especially like is:
- The programme starts with self-assessment, so tutors are encouraged to set their own agendas for professional development.
- Tutors can work on their action plans with colleagues
- The action plans are goal oriented
- Support is offered in the form of mentors/coaches
- A record of each stage is kept by tutors (self-evaluation, action plans, and recordings of the pre-conference stage)
- The stoplight evaluation (which was new to me) is pretty darn clear!
It seems that although most of this could also be applied to f2f teaching, the online medium does have advantages at certain points. The pre-conference, if held face-to-face, is unlikely to be recorded. The spot checks could be very disruptive and threatening if they involved bursting into teachers’ classrooms unannounced! Even announced f2f visits are scary.
So, if you’re an online tutor, or work with teams of online tutors – how do you carry out professional development/evaluation? Do you carry out evaluation at all? What do you think of this program? Can you see a use for it in your context? I’d love to hear your thoughts in the Comments section below.