
Taking trust to the next level in healthcare assessment

When educating health professionals, the role of trust is paramount. Here’s how we can link entrustment with student consensus grading and programmatic assessment

21 Oct 2024

Created in partnership with The University of Adelaide


Assessment exposes a three-way tension between the developmental needs of the student, the expectations of the real-world industries that will employ them and the academic conventions used to prepare them. During assessment design, academic learning designers often face several paradoxes:

  • The higher the stakes attached to a single assessment, the less likely it is that it will contribute to sustainable student learning for the long term
  • The greater the emphasis on credentialling students, the lower the priority placed on the interests of the learner
  • The greater the attention to assessment detail, or internal validity, the less the assessment relates to the real world, or external validity
  • Using expert assessors alone to judge the quality of a student’s work denies the student, as a future professional, the opportunity to practise these important skills while developing.

We consider three of the common responses to these widely reported concerns. 

Entrustment assessment 

The theme of trust has recently become popular in health professions education. 

Of particular interest is how strongly trust dominates real-world decision-making about new graduates, together with the recognition that trust is a complex phenomenon, often involving multiple intertwined competencies.

Entrustable Professional Activities (EPAs) are a defined range of activities that professionals need to be trusted to perform. Of course, trust decisions are based on more than what can be measured in traditional academic tests for knowledge or technical skills, so entrustment considers the holistic nature of professional practice and a supervisor’s tacit experience. Levels of trust are accommodated through the use of entrustment scales, often based on the level of student autonomy or the amount of supervision needed.

Prioritising learner development with student-tutor consensus

The student-tutor consensus assessment (STCA) emerged in response to assessor-centric traditions, which position students as passive participants in testing. Instead, the STCA process prioritises students as leaders within assessment. The assessor’s role is essentially reassigned to calibrating the student’s judgement.

The STCA invites the student to critique their own practice before their assessor does. This offers insight into student metacognition and lessens the risk of chance results. The student is thus empowered to look beyond simply trying to optimise their test result and, because score incentives reward accurate self-appraisal rather than outcomes alone, the approach breaks from many academic assessment traditions. Additionally, the STCA encourages students to critically reflect on their performance in real time, developing the skills they require to become reflective practitioners.

Programmatic assessment

Now a widely accepted standard within medical programmes, Programmatic Assessment for Learning (PAL) operates on several core principles. Decisions about students should reflect performance trends drawn from continuous data points over time, rather than a single test outcome. The focus is on helping the student derive meaning from a test result and understand how it can inform future growth, rather than merely focusing on the score. PAL is an educational philosophy and an umbrella term that covers a wide range of assessment innovations.

What we learned from combining these three elements

Our work was originally piloted and evaluated within an undergraduate paramedic programme before the broader utility of the methods was understood, and it is now enjoying critical acclaim in an honours-level physiotherapy degree. The following suggestions reflect our interrogation of the available literature, extensive consultation with professionals across multiple disciplines and necessary adaptations for the local context.

1. Identify a real-world source of truth

Accept that even the most well-constructed assessment design is largely irrelevant outside the classroom. We looked to our national professional regulation standards to inform the assessable content of our assessments. By doing this, we united the professional expectations of the discipline, the expectations of clinical assessors and the ultimate objectives of the student.

2. Less is more with rubrics

Don’t crowd assessment criteria with excessive detail in an attempt to predict and account for every possible performance outcome. If authentic feedback is confined to what fits in a rubric box, examiners are more likely to be constrained from sharing important observations.

3. Trust in examiners’ gut feelings and tacit knowledge

Just as a supervisor in the real world might have a gut feeling about a student that affects the level of trust they place in them, assessment should aim to accommodate similar feelings within the process. After all, experts and professionals are brought into the classroom precisely for this expertise, so academics should support them in articulating their evaluations.

4. Move the goalposts

The focus of assessment should remain fixed on its long-term objectives and the implications for a student’s future practice as a professional. This requires teachers to repeatedly remind students to look beyond simply trying to pass a test.

5. Making mistakes is part of learning

An expert is not someone who never makes mistakes. Students need opportunities to exercise critical judgement and to evaluate their own practice.

6. More tests are better than bigger tests

The validity and reliability of results, and the impact on learning, all improve as testing becomes more frequent and lower-stakes, in stark contrast to a single high-stakes barrier exam.

7. Improve your assessment culture 

Students are more accepting of results they have co-created. Grade-seeking behaviour, appeals and challenges virtually disappear when the incentive to game results is removed. 

James Thompson is a senior lecturer in health and science practice, and Amanda Maddern is a senior physiotherapist and lecturer, both at the University of Adelaide.
