This paper studies how implementing video-based formative assessment using an online video annotation tool remediates an existing approach to formative assessment and exam preparation in a clinical optometry module. Using a case study methodology and an activity theoretical framework, and incorporating Engeström’s notion of expansive learning, it analyses uses of the tool and practice change among a group of 20 undergraduate students. It highlights the contextual factors that inform the development of the new activity system and the multiple contradictions inherent in it. In doing so, it contributes to the literature on student experiences of video-based formative assessment in health HE. It finds that students are likely to experience the benefits of the approach, such as supportive, objective, and contextualised feedback, if tutor and student roles are clearly communicated and tutors are flexible, adopting either more tutor-facilitated or more student-centred approaches to peer assessment to suit the needs of different peer groups. Using example videos and posts to scaffold reflection and feedback might reduce some of the affective barriers to participation and build group dynamics and interaction.
Keywords: Video annotation; video-based formative assessment; self-reflection; peer feedback; health HE
Part of the special issue Activity theory in technology enhanced learning research
In recent decades, there has been a shift away from characterisations of student learning in HE as teacher transmission of information towards a conceptualisation of student-centred learning and active knowledge construction (Nicol and Macfarlane-Dick, 2006; Wheatley et al., 2015). This shift has been followed by increased use of formative assessment, an assessment approach that involves not only teachers, but also students themselves and their peers (Boud, 2000), and which is “specifically intended to provide feedback on performance, to improve and accelerate learning” (Sadler, 1998, p. 77).
It is widely acknowledged that effective feedback is critical to formative assessment (Carless, 2006; Hattie and Timperley, 2007; Nicol and Macfarlane-Dick, 2006). It helps to clarify what good performance is, through criteria and expected standards; it facilitates the development of self-assessment or reflection in learning; it delivers high quality information to students about their learning; it encourages teacher and peer dialogue around learning; it encourages positive motivational beliefs and self-esteem; it provides opportunities to close the gap between current and desired performance; and it provides information to teachers that can be used to help shape teaching (Nicol and Macfarlane-Dick, 2006). However, relatively little attention has been paid to student experiences of formative assessment, particularly those mediated by video.
The use of video for formative assessment and feedback is not new (Evi-Colombo et al., 2020; Fukkink et al., 2011; Hammoud et al., 2012). Since the mid-1960s, video feedback has been used in professional and HE settings, particularly in the fields of healthcare and education. The past decade and a half has witnessed the development of video management systems, including the Hong Kong Polytechnic University’s uRewind, a multi-platform application powered by Panopto software. These applications enable students to record themselves and analyse, reflect on and discuss their recordings, using video annotation tools to write time-stamped comments (Evi-Colombo et al., 2020). Although some recent studies have explored the use of video annotation for procedural clinical skills training (Evi-Colombo et al., 2020; Cattaneo et al., 2020; Truskowski and VanderMolen, 2017), they have mainly employed experimental or quasi-experimental designs which do not capture the nuances of the impact of the technology on student learning.
Neither video-based formative assessment more broadly nor the specific video annotation tool in uRewind is widely used at the University. I therefore welcomed the opportunity to work in partnership with two tutors from the School of Optometry to introduce the novel approach and tool into an established activity system, slit-lamp examination practice. This procedure, where the optometrist uses a low-powered microscope with a high-intensity light to examine all areas of the eye, is assessed in the practical exam for a Year 2 clinical optometry module. Traditionally, students practise their slit-lamp examination skills in small groups in timetabled lab sessions. Students take turns to receive feedback from the tutors in real time as the tutors move between groups. Although lab sessions continued to take place under Covid-19 in 2020 and 2021, strict social distancing measures meant students could not work with, or even observe, peers outside their immediate group. My intervention involved students making recordings of their slit-lamp examination practice, and using the annotation tool, first to write self-reflective comments and questions on their own recordings, then to post feedback or responses on their peers’ videos. It was a formative intervention (Sannino et al., 2016): not solely to explore students’ experiences of the approach and tool, but also to embed new forms of practice that could facilitate learning despite social distancing. In the longer term, I also hoped to be able to use the experience to design video-based interventions in other subjects within my institution.
This paper studies how implementing video-based formative assessment using the online video annotation tool remediates an existing approach to formative assessment and exam preparation (Lektorsky, 2009, in Bligh and Coyle, 2013). Remediation is analysed using a case study which incorporates an activity theoretical framework (Scanlon and Issroff, 2005) alongside Engeström’s (2001) notion of expansive learning. Its purpose is to understand the contextual factors that shape the impact of the approach and tool on student learning. In doing so, it contributes to the literature on student experiences of video-based formative assessment in health HE.
In this section I review the literature on video-based formative assessment for procedural skills in medical and health HE. To select relevant works, I searched Google Scholar and OneSearch using a profile that combined the keywords procedural skills, formative/self-/peer assessment/evaluation, reflection, feedback, video, video annotation, and health HE. Studies where ‘video feedback’ refers to feedback given in video format were excluded. I used the snowball method to search relevant literature for references to other studies and selected those I felt had most in common with my context.
In studies of video-based self-reflection in health HE, there is broad agreement that video with self-evaluation alone is not sufficient. To be more effective than traditional feedback methods, it should also incorporate either formative tutor feedback (Bowden et al., 2012; Farquharson et al., 2013; Lewis et al., 2020; Perron et al., 2016; Spence et al., 2016), feedback from specialist professionals from a related discipline (Parker et al., 2019), a videoed benchmark performance demonstration (Hawkins et al., 2012) or other e-learning materials (Donkin et al., 2019). Vital to this is faculty development (Hammoud et al., 2012). Future studies might, therefore, benefit from greater tutor involvement in the research design.
Studies of video-based approaches that incorporate peer assessment are fewer in number. Paul et al. (1998) and Hunukumbure et al. (2017) each examine uses of peer video feedback within tutor-facilitated sessions. Despite “some distress” associated with viewing their “actual image” on video, students in Hunukumbure et al. (2017) engaged in “open discussion” in a “supportive environment”, while in Paul et al. (1998), students perceived the approach to be so helpful that several of them sought out further opportunities to be recorded and reviewed. By contrast, the participants in Seif et al. (2013) and Chen et al. (2019) experienced more student-centred peer assessment approaches. In the former study, students worked independently in groups to create, and self-record, simulated patient histories before engaging in scaffolded peer assessment (Seif et al., 2013). In the latter, students benefited from a custom-built clinical skills peer assessment platform which facilitated active, collaborative learning and raised exam performance (Chen et al., 2019). Further research is needed to explore the effect of tutor-facilitated vs. more student-centred approaches on the learning experience, particularly with regard to affective factors and group dynamics. This, too, would benefit from a co-designed intervention.
In a comprehensive review of the literature, Evi-Colombo et al. (2020) define video annotation as a web-based system which integrates video playback and time-based text commenting, and allows videos and comments to be shared, “usually with the aim of analysing and reflecting on the content and fostering deeper engagement with [instruction]” (p. 197). It enables students to participate individually and collaboratively and, through interacting remotely, “circumvent the difficulties of large group discussions and social face-threatening situations” (p. 199). Based on an analysis of 40 studies across a broad range of disciplines, the authors find that video annotation facilitates increased reflection on action by “anchoring peer or self-generated feedback to objective, evidence-based practices, captured on video” (p. 216). This requires careful instructional design and scaffolding: prompts, rubrics and guidelines (Evi-Colombo et al., 2020).
There is as yet very limited research on video annotation for procedural skills training, and the majority of extant studies address tutor feedback rather than self-reflection or peer assessment. Truskowski and VanderMolen (2017) suggest that annotated tutor video feedback significantly improves student learning, provided that the video content and feedback are linked to learning outcomes. Despite this, students’ perceptions of the feedback approach are mixed (Truskowski and VanderMolen, 2017), with some finding it less immediate or less thorough and direct when compared with traditional feedback, offered while students are actively practising. Cattaneo et al.’s (2020) findings are more unambiguously positive: annotated video feedback is perceived as more supportive, “evidence- and situation-based”, “student-driven and dialogical”, leading to greater student acceptance (Cattaneo et al., 2020, p. 6). It would clearly be worth investigating the necessary preconditions for these benefits, and how they are experienced, particularly with regard to perceptions of the immediacy, supportiveness or objectivity of the approach. It would also be valuable to explore these themes in uses of video annotation for self-reflection and peer assessment.
Though they focus on vocational education and professional learning rather than HE, Cattaneo and Boldrini (2017) and Ho et al. (2019) each make valuable contributions to the literature. In one of Cattaneo and Boldrini’s (2017) four case studies, a trainee nurse and her two tutors use video annotation to engage in a process of “contextualised, specific and objective feedback” on professional practice, where the technology is used to systematically analyse procedural errors and participants come to reconceive of them as opportunities for professional development and personal growth (Cattaneo and Boldrini, 2017, p. 367). The study is also noteworthy because it is unique among the literature on video-based formative assessment in adopting a design-based research methodology, where teachers and researchers co-designed an intervention. Ho et al.’s (2019) study, meanwhile, is currently the only analysis of peer feedback on procedural clinical skills using video annotation. The five rheumatologists who took part in their research reported that the video-based approach provided “accurate, detailed information” in a “more convenient, less intrusive, manner than direct observation” (Ho et al., 2019, p. 6). It would be beneficial to test these findings with a larger sample size in an HE context.
Besides Cattaneo and Boldrini’s (2017) above-mentioned case study, all other studies of video-based formative assessment are experimental or quasi-experimental. They are therefore unable to evaluate the impact of these emerging video technologies on students’ existing learning and assessment experiences, or, to use activity theory terminology, evaluate how novel ‘tools’ can ‘remediate’ an existing ‘activity system’ (Bligh and Coyle, 2013, p. 338). Quasi-experimental and experimental studies cannot provide any analysis of how existing activity systems develop in the first place, which is highly problematic, given that these ‘culturally entrenched’ systems will influence how video-based formative assessment is implemented (Bligh and Coyle, 2013; Bligh and Flood, 2015). My first research question is therefore:
RQ1: How does the introduction of video-based formative assessment remediate a culturally entrenched activity system (students practising carrying out a slit-lamp examination to prepare for the final practical exam in a clinical optometry module)?
Change in activity systems is driven by ‘systemic contradictions’, experienced as conflicts and tensions, which people strive to overcome through changing their activity systems (Virkkunen and Newnham, 2013) in a cyclical process of ‘expansive learning’ (Engeström, 1987). Change and expansive learning may be brought about through collaboration between practitioners and interventionist researchers (Engeström, Rantavuori and Kerosuo, 2013). Since this process has not previously been applied to video-based formative assessment, my second research question is:
RQ2: What contradictions/tensions are there in the implementation of video-based formative assessment to develop clinical optometry skills? How might these be overcome?
To help analyse the remediation of the existing activity system and highlight contradictions in the implementation of video-based formative assessment, this study uses Scanlon and Issroff’s (2005) activity theory-derived framework for evaluating learning technology in HE. It consists of five evaluation factors (pp. 434-436):
Interactivity: How does the tool meet subjects’ expectations about interactions between students and teachers (rules), and the division of responsibilities between students and teachers (division of labour)?
Efficiency: How can participants use the tool to achieve (usually contradictory) desired outcomes without wasting time or effort?
Cost: How do perceived costs of using the tool change the rules of practice?
Failure: How do unforeseen problems with the tool affect subjects, the community, the rules of engagement or the division of labour?
Serendipity: How do subjects’ expectations (rules) affect perceptions of any accidental discoveries made using the tool, and how might this influence the dynamics of control (division of labour)?
For Scanlon and Issroff, activity theory offers educational developers and researchers a “more focused” perspective on learning contexts, exposing underlying interactions and contradictions to foster a “complex, comprehensive understanding” of learning technology use (p. 438). Their framework is intrinsically related to the concept of expansive learning, outlined below, since it also involves “preliminary analysis of interactions within an activity system [as] a prerequisite for the instructional design phase and the detailed planning of an evaluation” (p. 438). Central to the framework is understanding the culture and context of the learning situation; and, again, in common with expansive learning, it enables practitioners and researchers to challenge existing conceptualisations of culturally entrenched practice.
This section describes the research approach, participants, setting, instruments, procedure, and methods of data collection and analysis used.
My approach is an interventionist case study of practice change, in which I intervene in an activity system to support expansive learning (Engeström, 1987, 2001; Sannino et al., 2016). This occurs through a series of expansive learning actions: questioning, historical analysis, actual-empirical analysis, modelling, examination, implementation, process reflection, and consolidation and generalisation (Bligh and Flood, 2015; Engeström, 2001; Sannino et al., 2016). While this approach shares some characteristics with the Change Laboratory (Bligh and Flood, 2015; Engeström, 2001; Sannino et al., 2016), it cannot be classified as such since it does not involve all stakeholders working together in workshops to change their own activity system (Virkkunen and Newnham, 2013). Instead, expansive learning actions took place through informal discussions and interviews with tutors, and through student surveys, as summarised in Table 1.
Expansive learning actions | Purpose | Instruments used |
---|---|---|
Questioning | Identify practice-problems being described | Field observation |
Analysis: historical | Investigate and represent the structure and history of the present situation | Survey A |
Analysis: actual-empirical | Further develop representations of the existing activity system | Field observation |
Modelling | Construct a new activity system | Interview A |
Examination | Better understand the dynamics, potential and limitations of the new activity system | Interview A |
Implementation | Render the model more concrete by applying it practically and conceptually, enriching it | Video annotation |
Process reflection | Evaluate the current process, generating critique and identifying further requirements | Survey B |
Consolidation | Attempt to embed stable forms of new practice | Interview B |
Table 1. Summary of expansive learning actions, purposes, and instruments used
Study participants were two tutors (clinical associates) and 20 Year 2 students taking a clinical optometry module as part of a BSc (Hons) in Optometry programme. Learning on this module is assessed via a practical exam where students perform slit-lamp biomicroscopy (examination) on a simulated patient. Students have opportunities to practise the procedure during weekly lab sessions, where they are supervised by the tutors.
Instruments used at different stages of the research included field observation, student surveys, tutor interviews, and video recordings and annotations, captured as part of students’ formative assessment activities within the uRewind platform. This subsection outlines how each research instrument and data collection method supported expansive learning.
Field observation took place as a precursor to the questioning and analysis phases. This quasi-ethnographic process involved me visiting the optometry lab to observe the tutors and students taking part in typical learning activities. Through informal conversations with the tutors, I was able to gather mirror data (Engeström, 2008) on dilemmas or problems they and their students were experiencing, to use during questioning and analysis.
The first of two tutor interviews, Interview A, addressed questioning, analysis, modelling, and examination. One tutor, T1, and I discussed dilemmas and problems identified during the field observation and in informal conversations with her colleague, T2. I then shared a model of the existing activity system for student slit-lamp exam preparation (SLEP0) that I had drafted using mirror data gathered during field observation. During Interview A, T1 and I made revisions to this model, which is shown in Figure 2. Using this model as mirror data, our discussion moved on to (historical) analysis, exploring how the existing activity system had developed in part through the contradictions imposed by Covid-19 and social distancing. This then led on to the action of modelling an activity system remediated by video-based formative assessment (SLEP1, shown in Figure 3), in which we identified desirable components of the new system. Interview B took place as part of process reflection. Guide questions are in Appendix A.
Students completed the first of two surveys, Survey A, as part of the questioning and analysis phases. Having attended a presentation about video-based formative assessment and uRewind in which they could practise using the video annotation tool, students answered six open-ended questions about their current practice, challenges they had experienced, and how the tool might benefit them (in Appendix B). In this case, asking students to consider problems and challenges was a first-stimulus, while the tool was a second-stimulus. Survey A data was then incorporated into the SLEP0 model, to support actual-empirical analysis of the existing activity system, and the SLEP1 model, to inform the modelling of the future remediated system. Students completed Survey B as part of the process reflection phase. It comprised seven open-ended questions about their impressions of using the tool for formative assessment, how it had remediated their practice, and their planned future use of the tool (Appendix B).
T2 helped students to create first-person video recordings of themselves carrying out slit-lamp examinations on simulated patients (other members of their group) in the optometry lab. Each recording was then uploaded to uRewind and shared with both tutors, me, and the student who had made the recording. When each student had annotated their video with time-stamped self-reflections, they formed a group of four and shared their recording with their groupmates, who added time-stamped peer feedback within the video. An example is shown in Figure 1.
Data from Survey B, Interview B and the video annotations were analysed using the criteria set out in the previous section, with reference to the activity system models SLEP0 and SLEP1, to identify how students’ practices had been remediated. This was discussed with T1 and T2.
Ethical approval was obtained from Lancaster University on 24 February 2021.
This section presents an analysis of the activity system as it existed prior to the interventionist research (SLEP0), followed by a model of the new, remediated activity system (SLEP1), based on data from the field observations, surveys, interviews and annotations outlined above. It then uses Scanlon and Issroff’s (2005) framework to analyse tutors’ and students’ experiences of the implementation of SLEP1, identifying how students’ practices have been remediated.
Early field observations of SLEP0 found subjects (students) using the tools of the slit lamp and a checklist of evaluation criteria to work towards their object of developing their confidence in clinical optometry practice. Their activity took place within groups of three, under the supervision of the tutors. Since each group worked separately during the activity, and did not interact with other groups from the cohort, other cohort members cannot be said to have formed part of the subject’s community. In Interview A, T1 revealed that this rule, effectively limiting the size of the community by keeping group composition unchanged, was a recent historical development introduced to observe social distancing guidelines and minimise the risk of infection within the cohort. Other rules mediated the subject-community relationship. Due to the number of groups in each class, students had to wait for long periods to receive tutor feedback on their slit-lamp exam practice. Since students had to be supervised and the feedback process required tutors to be physically present, the activity could only take place during the timetabled lab session. The division of labour was such that the students within each group took turns to perform the roles of optometrist, patient and observer, with the tutor providing feedback on their practice. Their intended outcome, T1 noted, was not simply to pass the practical exam but also to develop the proficiency required to practise on real patients. Until then, their clinical practice was limited to simulated patients, performed by the other two members of their lab group.
Thematic analysis of responses to Survey A identified two general practice-problems, both of which stemmed from the activity system analysed above. Several students felt that they lacked opportunities to practise, and this undermined their confidence:
“Not enough practice, not enough confidence in my own practical skills” (S3)
“I am a slow learner, and I need more time to digest and practise the things I have learned, but time in the lab is limited.” (S8)
This is represented in Figure 2 as a secondary contradiction (Engeström, 1987) between the subject and the rules of SLEP0 (1).
For others, the greater problem was not having sufficient opportunities to receive feedback on their practice that might help them improve:
“I’m not sure about whether I’m doing everything correctly when practising. I try to seek help from the tutors, but they may be busy assisting other classmates.” (S2)
This is shown as a secondary contradiction between the subject and the division of labour (2).
In the second part of Interview A with T1, using the activity system representation developed in the analysis stage and the field observations as mirror data, I presented a model of the proposed remediated activity system, SLEP1, which is shown in Figure 3. In SLEP1, students’ activity would be mediated by new tools: their own videos and their peers’ videos, both of which would be annotated with reflections and peer feedback by students using the uRewind discussion tool. The online platform would set new rules: it would allow students to observe their own practice, or that of their peers, anytime, anywhere, in groups of their own choosing; and if students were later willing to share the recordings outside their immediate peer group, there would no longer be the same public health restrictions on them observing all other participating members of the cohort as there were in SLEP0. This expanded the potential size of the community in SLEP1. The division of labour would also change: students would be expected to perform the tasks of self-reflection and peer feedback. Doing these in written English using a video annotation tool would be a novel experience for them. To support students, T1, T2 and I agreed that students should have time to reflect first before engaging in peer feedback; while students would still be encouraged to refer to the evaluation checklist, as in SLEP0, they would not be required to grade themselves or their peers. It was also agreed that the tutors would give each student time-stamped feedback on their video.
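To make the remediation concrete, the components of the two systems can be summarised compactly. The sketch below is purely illustrative and formed no part of the study’s instruments: it simply records, as a Python data structure, the SLEP0 and SLEP1 components described in this section and depicted in Figures 2 and 3, with field names following the standard components of an activity system.

```python
from dataclasses import dataclass


@dataclass
class ActivitySystem:
    """Illustrative record of an activity system's components."""
    subject: str
    tools: list
    object: str
    rules: list
    community: str
    division_of_labour: list
    outcome: str


# SLEP0: the existing, socially distanced, lab-based system.
slep0 = ActivitySystem(
    subject="Year 2 optometry student",
    tools=["slit lamp", "checklist of evaluation criteria"],
    object="develop confidence in clinical optometry practice",
    rules=["fixed groups of three (social distancing)",
           "practice only in timetabled, supervised lab sessions",
           "wait for real-time tutor feedback"],
    community="own lab group and tutors only",
    division_of_labour=["students rotate optometrist/patient/observer roles",
                        "tutors provide all feedback"],
    outcome="pass the practical exam; proficiency for real patients",
)

# SLEP1: the proposed system remediated by video-based formative assessment.
slep1 = ActivitySystem(
    subject="Year 2 optometry student",
    tools=["own and peers' annotated recordings", "uRewind discussion tool",
           "checklist of evaluation criteria"],
    object="develop confidence in clinical optometry practice",
    rules=["observe own or peers' practice anytime, anywhere",
           "self-reflect before giving peer feedback",
           "no grading of self or peers"],
    community="potentially all participating members of the cohort",
    division_of_labour=["students write self-reflections and peer feedback",
                        "tutors add time-stamped feedback on each video"],
    outcome="pass the practical exam; proficiency for real patients",
)
```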
Having questioned and analysed the previous activity system, and modelled and examined the planned remediated system, the study uses Scanlon and Issroff’s (2005) framework to analyse participants’ experiences of its implementation. Through process reflection, based on Survey B and Interview B, it identifies how practices have been remediated and highlights contradictions in the remediated activity system.
In informal conversations in the early stages of implementation, T1 reported that students were reluctant to begin engaging in self-reflection and peer feedback. To address this, she sent email reminders to all 20 students and posted one time-stamped feedback comment on each student’s recording: “a bit of a push to get [them] started” (T1). This led to a burst of (inter)activity over a 48-hour period.
Table 2 shows students’ interactions, as measured by posts within the uRewind platform, up to and including this period. Posts mediated by the discussion tool are categorised either as reflections on a student’s own recording (comments, questions, and responses to peers’ comments on that recording) or as feedback on a peer’s recording (independent comments, or responses to the recording owner’s initial reflections).
Group | ID | Reflection: Comment | Reflection: Question | Reflection: Response | Feedback: Comment | Feedback: Response |
---|---|---|---|---|---|---|
A | S1 | 2* | 2*** | 9*** | 0 | 2** |
 | S2 | 8** | 1** | 0 | 4* | 3** |
 | S3 | 8** | 2*** | 2** | 24*** | 6*** |
 | S4# | 4* | 1** | 1** | 12*** | 4** |
 | Subtotal | 22 | 6 | 12 | 40 | 15 |
B | S4# | 4* | 1** | 1** | 12*** | 4** |
 | S5 | 6** | 0 | 1** | 0 | 0 |
 | S6 | 7** | 0 | 2** | 4* | 1** |
 | S7 | 5** | 0 | 5*** | 1* | 2** |
 | Subtotal | 22 | 1 | 9 | 17 | 7 |
C | S8 | 4* | 0 | 7*** | 7** | 3** |
 | S9 | 11*** | 0 | 2** | 10*** | 0 |
 | S10 | 4* | 0 | 0 | 7** | 5*** |
 | S11 | 10*** | 0 | 0 | 8** | 0 |
 | Subtotal | 29 | 0 | 9 | 32 | 8 |
D | S12 | 6** | 0 | 0 | 2* | 0 |
 | S13 | 1* | 0 | 0 | 0 | 0 |
 | S14 | 8** | 0 | 0 | 2* | 0 |
 | Subtotal | 15 | 0 | 0 | 4 | 0 |
- | S15 | 1* | 0 | 0 | 0 | 0 |
 | S16 | 4* | 0 | 0 | 0 | 0 |
 | S17 | 13*** | 0 | 0 | 0 | 0 |
 | S18 | 7** | 2*** | 0 | 0 | 0 |
 | S19 | 3* | 0 | 0 | 0 | 0 |
 | S20 | 0 | 1** | 0 | 0 | 0 |
Group totals (all reflections / all feedback / all posts): A 40 / 55 / 95; B 32 / 24 / 56; C 38 / 40 / 78; D 15 / 4 / 19.
Table 2. Levels of interaction as measured by posts within the uRewind platform
***indicates a high level of interactivity in relation to the cohort for this type of post, **moderately high, and *relatively low. #S4 interacted with students in Groups A and B.
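Counts of the kind shown in Table 2 could, in principle, be reproduced from an export of the annotation data. The sketch below is a minimal illustration only: it assumes a hypothetical CSV export with columns group, author, video_owner and post_type, which uRewind does not necessarily provide, and it encodes the single categorisation decision described above, namely that a post counts as reflection when its author also owns the recording, and as feedback otherwise.

```python
import csv
from collections import Counter, defaultdict

# Hypothetical export of uRewind annotation data (assumed, not an actual
# platform feature). Expected columns: group, author, video_owner, post_type
# (e.g. "comment", "question", "response").
EXPORT_FILE = "urewind_annotations.csv"

# (group, author) -> Counter of post categories, mirroring Table 2's columns.
tallies = defaultdict(Counter)

with open(EXPORT_FILE, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # A post on the author's own recording is a reflection;
        # a post on a peer's recording is feedback.
        kind = "Reflection" if row["author"] == row["video_owner"] else "Feedback"
        category = f"{kind}: {row['post_type'].capitalize()}"
        tallies[(row["group"], row["author"])][category] += 1

for (group, author), counts in sorted(tallies.items()):
    print(group, author, dict(counts))
```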
Students’ expectations about the process of self-reflection using uRewind were largely met. For example, S3 and S6, who regarded self-reflection as a means of positive improvement, posted relatively high numbers of reflections, each of which focused on actions that they could take to enhance their performance:
“I should control the focus better here so the iris is constantly in focus during scanning.” (S3)
Meanwhile, S17 and S18, who had hoped that self-reflection could help identify errors in their practice, posted comments that were highly critical.
Students’ expectations about peer feedback were also broadly met. Groups A, B and C were all characterised by high levels of interactivity, with A achieving a particularly high level through the use of questioning, leading to discussion. In Group A, S1, S2, and S3 had all expressed a strong desire “to gain feedback from others”, receive “useful comment[s]” and “learn from each other”, and this was borne out by their experience. One tension in the division of responsibilities concerned S1, who had critiqued SLEP0 for its lack of feedback opportunities but then provided little feedback on his peers’ recordings, despite engaging in discussions about his own recorded practice.
In Groups B and C, S6, S9, and S10 appeared to have made a conscious effort to acknowledge steps that their peers had performed well; and where they had not, they offered suggestions for future improvement. This reflected their expectation that interactions within uRewind ought to “enhance students’ confidence”. By contrast, comments from some other students, notably S12 and S14, came across as bluntly negative, highlighting steps their peers had not performed well or omitted. These comments did not elicit any responses. There was, as T1 noted, a distinction between students who would simply “point out mistakes” and those who could “fix them”.
In Survey B, several students commented on the leanness of the text-based interactions within uRewind, reflecting that discussions could be enriched through the use of emoticons or photos (S8, S16), a ‘like’ function (S2), or the ability to insert audio and video (S2, S19).
Since staff and students access uRewind through an institutional licence, no financial costs are associated with its use. Instead, perceived costs may be measured in tutors’ and students’ time or effort. It therefore makes sense to consider cost together with efficiency: how the tool is used to achieve desired outcomes without wasting time or effort, and how any costs in terms of time or effort change the rules of practice.
S6 and S7, who had both commented (in Survey A, above) on the potential efficiency of video annotation, ultimately made effective use of the tool in commenting and responding in order to achieve their desired outcomes: to learn from others (S6) and “understand the procedure better through discussion” (S7). As with other students in Groups A-C, their time-stamped comments proved highly efficient in focusing their peers’ and tutors’ attention on particular aspects of the procedure. While the time-stamp feature was singled out for praise by most students in Survey B, some (S3) found it confusing that uRewind did not display comments in the order in which they appeared in the recording. Others found it frustrating that the platform did not notify them when someone had posted, causing them to waste time checking for updates (S3, S9).
Since it was not possible for all students to create their recordings during class time, several of them had to come to the lab in their free time to record and upload their videos with assistance from T2, also in his own time. Since the lab computers were not connected to the internet, each video needed to be uploaded to uRewind manually from the digital camera; and here, the small size of the memory card meant that each video had to be recorded and uploaded in two parts, an unexpected inefficiency imposed by two pieces of lab equipment. Students also remarked that the digital camera produced much darker images than expected, limiting their ability to analyse each video.
T1 reflected that these inefficiencies may have deterred several students from taking part in the project, as the time ‘cost’ outweighed the perceived benefits of participation. For S1-20, it also meant opportunities to reflect and receive feedback on their performance were delayed to such an extent that by the time they could engage in them, their skills had already improved. T1 also noted that unlike in SLEP0, students could not immediately apply the insights they had gained from the activity. There was therefore a risk that some learning might be lost.
Though there were no reported instances of tool malfunction in the implementation stage, there were two observable ways in which participants’ unintended use of uRewind impeded their ability to engage with it for self-reflection or peer feedback. S10, S12 and S14 each added reflections on multiple aspects of their performances in single posts at the start of their recordings, instead of using the discussion tool to post individual time-stamped comments. This made it challenging for peers to respond meaningfully to each reflection or understand reflections in context. Seven students, S13 and S15-20, did not engage in peer feedback, and S15-20 chose not to share their videos at all. This particular ‘failure’ reduced the effective size of their community and denied them opportunities to benefit from peer support or group discussion. It can be understood as a tertiary contradiction: it is possible that these students expected feedback on their practice to come from T1, as specified by the division of labour in the earlier SLEP0, not from another student. There is also evidence from Survey B that this subset of students benefited from self-reflection to such an extent that they did not feel the need to share their videos.
In Interview B, T1 raised another broader failure: the scope within a peer assessment approach for students to post reflections or feedback that might be misleading or inaccurate. While most comments were correct, there were “a few instances where students agreed that something was good, and it definitely was not, or where they added a suggestion [for improvement] when the student had [performed the procedure] very well”. T1 therefore felt under greater pressure than expected to review each video in full: “I’m afraid I’ll miss something, and students will assume they’re doing it correctly because I didn’t comment”. Instructor feedback, she felt, was critical in “identifying common errors and misconceptions”, but had added to her teaching load.
One final failure of this project, identified by T1, was the decision of 24 students in the cohort of 44 not to take part. Non-participants, she observed, were generally weaker students who did not feel confident performing the full procedure and committing it to video. The outcome was a widening attainment gap between these students and the already more proficient S1-20, who improved their performance further through formative assessment.
For several students, the greatest ‘accidental discovery’ made using uRewind and video-based formative assessment was that the peer feedback they received was more supportive and more effective than they had expected (S8, S9). This increased their confidence and enabled them to clarify prior misconceptions about the procedure in a way that was not possible in a traditional lab setting. Other students ‘discovered’ further potential uses of uRewind which they hoped to explore, such as developing presentation skills (S6) or taking case histories (S10).
T1 was pleased to find herself and her students referring to the recordings during the lab, with the realisation that these artefacts could have value beyond the platform and project, and could be edited into a teaching and learning resource.
This section summarises three key contradictions identified within the remediated system. With reference to the ‘bodies of knowledge’ identified in the literature review, it discusses how each contradiction might be resolved. It also shows how the study contributes to the literature.
Students’ reflections and data on their (inter)activity within uRewind showed that while some engaged actively in self-reflection and peer feedback, others preferred to limit their activity to the former; others engaged little in either, expecting T1 to perform this function. This may have stemmed from students’ lack of confidence in self or peer assessment or unfamiliarity with the platform, but it is also plausible that their expectations were shaped by the division of labour in SLEP0, in which T1 and T2 played a central role. While T1 noted that her feedback was critical to the success of the activity (as discussed in Carless, 2006; Hattie and Timperley, 2007; Nicol and Macfarlane-Dick, 2006), it was largely through students’ self-reflection, peer feedback and discussions that she was able to identify and clarify their misconceptions, enabling learning to take place.
It is likely that students with less experience of video-based peer feedback would benefit from a more tutor-facilitated approach (Hunukumbure et al., 2017; Paul et al., 1998), enabling them to develop the confidence and skills needed to engage in student-centred peer assessment (Seif et al., 2013; Chen et al., 2019). In this study T1 demonstrated flexibility, adapting her approach to suit the needs of individual students and intervening when necessary to scaffold reflections and peer feedback. This is testament not only to her engagement in staff development (Hammoud et al., 2012) but also to her involvement in the research design. By monitoring learning analytics in uRewind and communicating with students, she was able to intervene effectively. Future models and implementations of SLEP would require a similar degree of tutor involvement and greater clarity around what is expected from tutors and students in formative assessment. Co-designed research such as the current study, involving tutors and students, will continue to address gaps in the literature on experiences of video-based self and peer assessment in health HE.
Students were often hampered by their unfamiliarity with existing functions in uRewind, such as the time-stamp feature, central to this activity, or the ability to subtitle or edit videos, which could have made it more efficient or interactive, allowing more of the rules to be followed. Yet there were also platform shortcomings that impeded interactivity: the confusing way posts are displayed, the absence of notifications and the lean, text-based communication. These shortcomings, compounded by the complicated upload process and picture quality issues, made the experience far less dynamic and streamlined. The activity captured a snapshot of what students could do at one moment during the course, rather than a series of recordings in which learning points from one annotated video were applied in the next. This more desirable form of longitudinal development might only be achievable once this contradiction between the tools and rules is overcome.
Although video annotation was seen by some students to facilitate contextualised, objective, and supportive peer feedback, echoing findings from the literature (Cattaneo et al., 2020; Cattaneo and Boldrini, 2017), this was offset by perceptions that it was not especially convenient (cf. Ho et al., 2019). It also lacked immediacy and opportunities for error rectification (Truskowski and VanderMolen, 2017). This co-designed study has helped reveal the causes of these conflicting experiences of video-based formative assessment and suggested how they might be addressed. However, it must also be emphasised that the two systems (in-person and video-based formative assessment) are not mutually exclusive, and can feasibly be combined within one subject.
Many less confident students who could have benefited most from formative assessment were deterred from participating, not only by the shortcomings of the tools but by rules that required them to perform the procedure in full, self-evaluate their performance and then allow others to scrutinise it; for them, this was still “face-threatening” (Evi-Colombo et al., 2020, p. 199). Whilst they could have benefited from recording shorter segments of the procedure or at least viewing and discussing peers’ practice, not providing this option meant they were effectively excluded from the activity. Promoting wider peer interaction through analysis of shorter segments could be used to “immerse students in the experience of giving, receiving and interpreting feedback” (Chong, 2021, p. 98). Performing the procedure and analysing peers’ performances, even with mistakes, could build students’ notions of quality without forcing them to engage directly with the gaps between their performances and their developing understanding of the standard. Video benchmark performance demonstrations (as in Hawkins et al., 2012) with annotated model reflective comments, a type of scaffolding (Evi-Colombo et al., 2020), might also have helped such students engage more readily in self-reflection. Future co-designed studies could explore these more inclusive, less face-threatening, approaches to video-based formative assessment.
Nevertheless, if the new system is to become embedded, all members of the cohort will need to participate. To achieve this, other rules might need to be applied, such as scheduling class time for students to self- and peer assess, or even assigning participation grades. Cohort-wide participation would have the benefit of allowing students to remain in the same peer groups they had formed in the historical activity system, and might therefore further increase interaction.
This paper has investigated how implementing video-based formative assessment with a video annotation tool remediates a traditional approach to formative assessment in health HE. In this field, dominated by experimental or quasi-experimental studies, it is the first piece of research to use an activity theory-derived methodology and framework to analyse students’ experiences of this emerging system of technology-enhanced assessment. As such, it contributes to several bodies of knowledge in the literature. First, it suggests that flexible tutor approaches, using an informed blend of tutor-facilitated and student-centred peer assessment, can be more effective than rigid adherence to a single approach. This requires very clear communication of roles and expectations and careful monitoring of participation, which must be part of staff development. Second, despite allowing for supportive, objective and contextualised peer assessment, video-based approaches might be perceived as less immediate, and less convenient, depending on the specific tools. What might be required to address this is some combination of video-based and in-person formative assessment. Lastly, to mitigate affective barriers to participation and build group dynamics, reflection and feedback could be scaffolded using model videos and posts, or these activities might simply be made a grade-bearing course requirement.
Having been actively involved in the research design, the two tutors are more likely to implement video-based formative assessment elsewhere in the curriculum. This will provide opportunities to consolidate some of the lessons learned from this study and overcome contradictions in the remediated system.
It would be useful to study the design and implementation of video-based formative assessment in other health HE disciplines. Future studies could address the limitations of this one by using longitudinal analysis, or more rigorous quantitative analysis of the system data; measuring the perceived impact of formative assessment on clinical performance; or above all, implementing a full Change Laboratory methodology, with tutors and particularly students actively involved in changing their activity system.
This paper draws on research undertaken as part of the Doctoral Programme in E-Research and Technology Enhanced Learning in the Department of Educational Research at Lancaster University. The author would like to thank Dr Brett Bligh for his supportive and detailed feedback on both the draft and final version of the assignment that this paper is based on. He would also like to thank colleagues in the School of Optometry at The Hong Kong Polytechnic University for their involvement in the research.
Dave Gatrell, Educational Development Centre, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong.
Dave is an educational developer and digital learning specialist. He is studying towards a PhD in E-Research and Technology Enhanced Learning. His professional and research interests are in online synchronous and hybrid synchronous learning, massive open online learning, and the use of video annotation in formative assessment for professional learning.
Email: [email protected]
ORCID: 0000-0002-5982-2274
Twitter: @legaladvert
Article type: Full paper, double-blind peer review.
Publication history: Received: 11 August 2021. Revised: 05 March 2022. Accepted: 10 March 2022. Published: 05 December 2022.
Cover image: Badly Disguised Bligh via flickr.
Bligh, B., & Coyle, D. (2013). Re-mediating classroom activity with a non-linear, multi-display presentation tool. Computers & Education, 63, 337-357.
Bligh, B., & Flood, M. (2015). The change laboratory in higher education: Research-intervention using activity theory. In Theory and method in higher education research. Emerald Group Publishing Limited.
Boud, D. (2000). Sustainable assessment: Rethinking assessment for the learning society. Studies in Continuing Education, 22(2), 151-167.
Bowden, T., Rowlands, A., Buckwell, M., & Abbott, S. (2012). Web-based video and feedback in the teaching of cardiopulmonary resuscitation. Nurse Education Today, 32(4), 443-447.
Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher Education, 31(2), 219-233.
Cattaneo, A. A., & Boldrini, E. (2017). Learning from errors in dual vocational education: Video-enhanced instructional strategies. Journal of Workplace Learning, 29(5), 357-373.
Cattaneo, A. A., Boldrini, E., & Lubinu, F. (2020). “Take a look at this!”. Video annotation as a means to foster evidence-based and reflective external and self-given feedback: A preliminary study in operation room technician training. Nurse Education in Practice, 44, 102770.
Chen, L., Chen, H., Xu, D., Yang, Y., Li, H., & Hua, D. (2019). Peer assessment platform of clinical skills in undergraduate medical education. Journal of International Medical Research, 47(11), 5526-5535.
Chong, S. W. (2021). Reconsidering student feedback literacy from an ecological perspective. Assessment & Evaluation in Higher Education, 46(1), 92-104.
Donkin, R., Askew, E., & Stevenson, H. (2019). Video feedback and e-Learning enhances laboratory skills and engagement in medical laboratory science students. BMC Medical Education, 19(1), 1-12.
Engeström, Y. (1987). Learning by expanding: An activity-theoretical approach to developmental research. Helsinki: Orienta-Konsultit.
Engeström, Y. (2001). Expansive learning at work: Toward an activity theoretical reconceptualization. Journal of Education and Work, 14(1), 133-156.
Engeström, Y. (2008). From teams to knots: Activity-theoretical studies of collaboration and learning at work. Cambridge University Press.
Engeström, Y., Rantavuori, J., & Kerosuo, H. (2013). Expansive learning in a library: Actions, cycles and deviations from instructional intentions. Vocations and Learning, 6(1), 81-106.
Evi-Colombo, A., Cattaneo, A. & Bétrancourt, M. (2020). Technical and Pedagogical Affordances of Video Annotation: A Literature Review. Journal of Educational Multimedia and Hypermedia, 29(3), 193-226. Waynesville, NC USA: Association for the Advancement of Computing in Education (AACE).
Farquharson, A. L., Cresswell, A. C., Beard, J. D., & Chan, P. (2013). Randomized trial of the effect of video feedback on the acquisition of surgical skills. British Journal of Surgery, 100(11), 1448-1453.
Fukkink, R.G., Trienekens, N. & Kramer, L. J. C. (2011). Video feedback in education and training: Putting learning in the picture. Educational Psychology Review, 23(1), 45-63.
Hammoud, M. M., Morgan, H. K., Edwards, M. E., Lyon, J. A., & White, C. (2012). Is video review of patient encounters an effective tool for medical student learning? A review of the literature. Advances in Medical Education and Practice, 3, 19-30.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.
Hawkins, S. C., Osborne, A., Schofield, S. J., Pournaras, D. J., & Chester, J. F. (2012). Improving the accuracy of self-assessment of practical clinical skills using video feedback–the importance of including benchmarks. Medical Teacher, 34(4), 279-284.
Ho, K., Yao, C., Lauscher, H. N., Koehler, B. E., Shojania, K., Jamal, S., Collins, D., Kherani, R., Meneilly, G. & Eva, K. (2019). Remote assessment via video evaluation (RAVVE): a pilot study to trial video-enabled peer feedback on clinical performance. BMC Medical Education, 19(1), 1-9.
Hunukumbure, A. D., Smith, S. F., & Das, S. (2017). Holistic feedback approach with video and peer discussion under teacher supervision. BMC Medical Education, 17(1), 1-10.
Lewis, P., Hunt, L., Ramjan, L. M., Daly, M., O'Reilly, R., & Salamonson, Y. (2020). Factors contributing to undergraduate nursing students’ satisfaction with a video assessment of clinical skills. Nurse Education Today, 84, 104244.
Nicol, D.J. & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.
Parker, H., Farrell, O., Bethune, R., Hodgetts, A., & Mattick, K. (2019). Pharmacist-led, video-stimulated feedback to reduce prescribing errors in doctors-in-training: a mixed methods evaluation. British Journal of Clinical Pharmacology, 85(10), 2405-2413.
Paul, S., Dawson, K. P., Lanphear, J. H., & Cheema, M. Y. (1998). Video recording feedback: a feasible and effective approach to teaching history-taking and physical examination skills in undergraduate paediatric medicine. Medical Education, 32(3), 332-336.
Perron, N. J., Louis-Simonet, M., Cerutti, B., Pfarrwaller, E., Sommer, J., & Nendaz, M. (2016). Feedback in formative OSCEs: comparison between direct observation and video-based formats. Medical education online, 21(1), 32160.
Sadler, D. R. (1998). Formative assessment: revisiting the territory. Assessment in Education, 5(1), 77-84.
Sannino, A., Engeström, Y., & Lemos, M. (2016). Formative interventions for expansive learning and transformative agency. Journal of the Learning Sciences, 25(4), 599-633.
Scanlon, E., & Issroff, K. (2005). Activity theory and higher education: Evaluating learning technologies. Journal of Computer Assisted Learning, 21(6), 430-439.
Seif, G. A., Brown, D., & Annan-Coultas, D. (2013). Video-recorded simulated patient interactions: can they help develop clinical and communication skills in today’s learning environment?. Journal of Allied Health, 42(2), 37E-44E.
Spence, A. D., Derbyshire, S., Walsh, I. K., & Murray, J. M. (2016). Does video feedback analysis improve CPR performance in phase 5 medical students?. BMC Medical Education, 16(1), 1-7.
Truskowski, S., & VanderMolen, J. (2017). Outcomes and perceptions of annotated video feedback following psychomotor skill laboratories. Journal of Computer Assisted Learning, 33(2), 97-105.
Virkkunen, J. & Newnham, D.S. (2013). The Change Laboratory: A tool for collaborative development of work and education. Rotterdam: Sense Publishers.
Wheatley, L., McInch, A., Fleming, S. & Lord, R. (2015). Feeding back to feed forward: Formative assessment as a platform for effective learning. Kentucky Journal of Higher Education Policy and Practice, 3(2).
Appendix A: Tutor interview guide questions
Interview A
Why are you interested in trying this video-based approach?
Do you feel it might solve any problems? How might it benefit students?
What challenges do students currently experience when preparing for their practical exam?
How do students currently develop their clinical optometry practice?
How did this approach to learning develop?
What potential is there for feedback and reflection?
How will the video-based approach change the learning experience? What will it add?
How will it work in practice?
What challenges will students face in reflecting or giving peer feedback?
What challenges will you face?
Interview B
What were your overall impressions of how students used uRewind for video-based formative assessment?
What worked well?
What did not work so well?
What were your impressions of using uRewind to give feedback?
Did you act differently when using uRewind compared with traditional approaches?
What do you like about uRewind?
What do you think are the current technical constraints of uRewind?
What would you like uRewind to be able to do that does not currently seem possible?
Would you like to use uRewind again? If so, what for? What would you do differently to ensure the project was successful?
Appendix B: Student survey questions
Survey A
Have you ever used video to reflect on your learning? If so, what did you do?
Have you ever used video to give peer feedback on another person’s learning? If so, what did you do?
What challenges are you currently experiencing in preparing for your practical exam?
What do you think you could gain from using uRewind to reflect individually on your clinical optometry practice?
What do you think you could gain from using uRewind to observe and analyse your peers’ clinical optometry practice?
What do you think you could gain from using uRewind to receive peer feedforward on your clinical optometry practice?
What do you like about the uRewind technology?
Can you foresee any problems with using uRewind?
Survey B
What were your impressions of using uRewind for formative assessment?
How do you feel your experience of preparing for your practical exam changed as a result of using uRewind and video-based formative assessment?
Did anything surprise you about your use of uRewind and video-based formative assessment? If so, what?
What are your current concerns about your use of uRewind and video-based formative assessment?
What would assist you to make uRewind and video-based formative assessment support and enhance your clinical optometry practice?
What do you think are the current technical constraints of uRewind?
What would you like uRewind to be able to do that does not currently seem possible?
Would you like to use uRewind and video-based formative assessment in the future? If so, what for?