
Ten tips when building a centralised evaluation unit

How can we establish and develop evaluation activities to show how support and interventions affect the student experience, as well as students' learning, outcomes and destinations?

University of Bedfordshire
1 Aug 2024


Universities increasingly need to demonstrate that their practices are based on evidence. They face demands from external regulators to show the impact of educational and other institutional practices on the student experience, learning outcomes and graduate destinations. Many higher education institutions will need to change and refine their evaluation processes and approaches to meet these demands. Here, we outline 10 tips for establishing a centralised evaluation unit within your institution, to lead on institutional evaluation and support evaluation activities.

Secure senior management buy-in: When creating a centralised evaluation unit, the support of senior leaders is necessary at the outset and, later, for ongoing strategic investment. This support is driven by the need for more evidence-based decision-making and practice. In the UK specifically, this relates particularly to the evidence necessary for achieving success in the Teaching Excellence Framework (TEF) and for delivering an Access and Participation Plan (APP), both of which stem from central government-led regulatory requirements.

Establish an evaluation team: A central evaluation team significantly helps to coordinate education and student experience evaluation activities across an institution. In our experience, an incremental approach works well in practice for developing, expanding and mobilising such a team. For example, we recruited an experienced yet early-career research fellow, but their reach within the role was limited until we created an institutional evaluation leadership position and forged a strategically aligned approach.

Meet ethical standards: It is crucial to set ethical standards informed by good sector practice. While access to and use of student data for evaluation purposes might be justified as a public good, we advocate securing institutional ethical approval for each evaluation. This means going beyond reliance on the generic consent secured from students upon registration and adhering to the higher ethical requirements of educational research outputs.

Create an expert evaluation panel: Institutional evaluation approaches benefit from harnessing wider expertise to ensure more rigorous evaluation, research designs and methodologies, and to enrich interpretations of findings. We have found that an expert evaluation panel serves effectively as a critical friend, peer reviewer and quality assurance mechanism when it meets on a scheduled or as-needed basis to review proposals or draft reports. Ideally, the panel would be made up of staff, at different career stages, with practitioner, research or data-management expertise. Also consider inviting external members from other universities.

Engage with the sector: Engaging with activities offered by sector-leading evaluation organisations provides crucial opportunities to learn and to stay up to date with sector developments and new approaches. For those in the UK, these include Transforming Access and Student Outcomes in Higher Education (TASO), the Higher Education Access Tracker (HEAT) and the Evaluation Collective. Engagement develops networks, identifies potential future collaborators and strengthens the integrated academic and evaluator practices an evaluation team adopts.

Plan strategically: It helps to have an institutional evaluation leader positioned to propose strategic evaluation planning and activities, and to deliver information on interventions relating to education and student experience strategic priorities and external reporting. Their involvement as a co-author of a TEF provider submission and an APP ensures consistency across planned institutional evaluation activities.

Establish an evaluation culture: Establish and share standards and guidelines for impact and process evaluations and for research designs. Developing theory-of-change models for all interventions also builds a collective understanding of expected changes and outcomes. Institutions might also consider offering funded educational research projects to engage and support a wider range of colleagues in evaluations. In our experience, while such opportunities are enthusiastically received, it is crucial that the institutional evaluation leader oversees this activity where it contributes to university-level evaluation, to ensure consistency in research designs and reporting.

Partner with students: Recruiting student researchers into the evaluation team creates a channel to represent student voices. Mindful of the challenges of involving students, given their limited research expertise and issues of confidentiality, their roles may need to be defined as advisory in job descriptions. Postgraduate student researchers can bring greater research experience. Beyond giving students paid work and an opportunity to develop their experience and employability, this partnership enables the identification of themes that reflect student priorities, the sense-checking of tools and richer analysis of findings.

Disseminate internally…: Create reporting routes through university governance structures for evaluation and research proposals, timelines, methodologies and findings. This ensures strategic alignment, validation and ownership by key stakeholders, and that evidence informs policy and practice decisions. Institutional learning and teaching conferences can help disseminate evaluation findings. Such transparency builds confidence in the work of an evaluation team across the university's distinct communities.

… and externally: Contributing to the sector, by sharing experiences and identifying good practice and methodologies, is achieved through conference presentations, attendance at symposiums, and writing journal articles and blogs. Creating an external-facing website dedicated to reports and their related recommendations also stimulates valuable internal discussions about what can or should be shared, about data protection and about institutional reputation, and it raises the institutional imperative to implement those recommendations. Regarding (co-)authorship, it is useful at the outset to distinguish the nature of the intended evaluation outputs and to agree the authorship order. For peer-reviewed journals, research integrity principles should apply. In contrast, for scholarly outputs on practice, such as conference presentations or blogs, consider including staff who had an operational role in implementing the intervention or the evaluation. This creates developmental opportunities for those with limited or no research experience.

Establishing a centralised evaluation team and its practices must be seen as a journey rather than an event. We hope that sharing our reflections on what worked for us will support other aspiring evaluators as they develop their own evaluation strategies and associated operational planning.

Steve Briggs is director of learning and teaching excellence; Diana Pritchard is head of evaluation and enhancement; Sibel Kaya and Kathryn Sidaway are learning and teaching excellence research fellows; Julie Brunton is pro vice-chancellor (education and student experience), all at the University of Bedfordshire.

