
Medicine in the United Kingdom has a race problem. It is not new, it is not marginal, and it is not explained by differences in ability. For more than two decades, evidence has accumulated showing that doctors from Black, Asian, and minority ethnic backgrounds - including those who trained entirely within the UK system, in the same schools and on the same wards as their white peers - consistently perform worse on postgraduate examinations, progress more slowly through training, and face higher rates of fitness-to-practise referral. The phenomenon has a name: differential attainment. And in 2026, with the Medical Licensing Assessment now fully operational as the universal gateway to UK practice, it sits at the intersection of the questions the medical education community most urgently needs to reckon with.
For candidates sitting the AKT or preparing for the MLA, this is more than background noise. It is a live debate about what medical assessments actually measure, whether the systems that govern training are structurally equitable, and what a genuinely common standard of medical licensing can mean if the conditions under which different groups reach it are anything but common.
The General Medical Council has published the core data on multiple occasions, and the figures are striking. In postgraduate examinations across all medical specialties, UK-qualified white candidates pass at an average rate of 75%, compared with 62.7% for UK-qualified candidates from Black and minority ethnic backgrounds, and 42.7% for non-European international medical graduates. The odds of a BME doctor failing a postgraduate examination have been found to be up to 2.5 times higher than for a white doctor. At the point of applying for specialty training, 81% of white foundation doctors are successful on their first attempt, compared with 72% of their BME colleagues. These differentials are not limited to one specialty, one examination format, or one institution - they appear consistently across the system.
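A note on the arithmetic, since odds ratios are easily misread as simple multiples of the failure rate. The sketch below (Python, purely illustrative) converts the pooled pass rates above into odds of failure; the "up to 2.5 times" figure reported in the GMC data comes from specific examinations rather than from these pooled rates.

```python
# Illustrative arithmetic only: converting the pooled pass rates quoted
# above into odds of failure, and those odds into odds ratios.

def failure_odds(pass_rate):
    """Odds of failing = P(fail) / P(pass)."""
    return (1.0 - pass_rate) / pass_rate

white_uk = failure_odds(0.750)    # ~0.33
bme_uk = failure_odds(0.627)      # ~0.59
img_non_eu = failure_odds(0.427)  # ~1.34

print(f"OR, UK BME vs UK white: {bme_uk / white_uk:.2f}")         # ~1.78
print(f"OR, non-EU IMG vs UK white: {img_non_eu / white_uk:.2f}")  # ~4.03
```

An odds ratio of 1.78 does not mean BME candidates are 78% more likely to fail in absolute terms; it is a ratio of odds, not of probabilities, which is worth keeping in mind whenever these figures are quoted.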
The most comprehensive examination of this question to date was published in April 2025 by Ricky Ellis, Andy Knapton, and colleagues from the University of Aberdeen and the GMC in BMC Medicine. Analysing over 180,000 examination attempts drawn from the UK Medical Education Database (UKMED), covering almost all UK postgraduate medical examinations, Ellis et al. found that even after accounting for prior academic attainment at the point of entry to medical school, being from a minority ethnic background and having a registered disability were the strongest independent predictors of failing both written and clinical examinations. Gender, age, religion, sexual orientation, less-than-full-time working, and socioeconomic background were also significant predictors. The authors concluded that the GMC, Medical Royal Colleges, and postgraduate training organisations now have a responsibility to use these data to guide research and interventions aimed at reducing these gaps.
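For readers unsure what "accounting for prior academic attainment" involves in practice, the sketch below shows the general shape of such a covariate-adjusted analysis. The data and effect sizes are entirely synthetic (UKMED data are held under controlled access), and this is not a reconstruction of the Ellis et al. models - only an illustration of the method.

```python
# A minimal, synthetic illustration of covariate-adjusted logistic
# regression: does a group indicator still predict examination failure
# once prior attainment is held constant?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=1)
n = 5000
prior = rng.normal(0.0, 1.0, n)   # standardised prior attainment (synthetic)
group = rng.integers(0, 2, n)     # 1 = minority group membership (synthetic)

# Simulate failure risk that depends on both prior attainment and group
log_odds = -1.0 - 0.8 * prior + 0.6 * group
fail = (rng.random(n) < 1.0 / (1.0 + np.exp(-log_odds))).astype(float)

X = sm.add_constant(np.column_stack([prior, group]))
result = sm.Logit(fail, X).fit(disp=False)

# Exponentiated coefficients: baseline odds, OR per SD of prior
# attainment, and the adjusted OR for group membership
print(np.exp(result.params))
```

The exponentiated coefficient on the group indicator is the adjusted odds ratio: the association between group membership and failure that remains after prior attainment is held constant. It is the persistence of exactly this kind of adjusted association in the UKMED analyses that undermines the prior-attainment explanation.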
That framing - gaps to be reduced rather than deficits to be remedied - is not simply diplomatic. It reflects a fundamental shift in how the research community has come to understand what is being measured.
For much of the history of this debate, the instinctive response to differential attainment data has been to locate the problem within the BME trainee. Perhaps performance gaps reflect language difficulties, or differences in prior educational experience, or weaker academic preparation. It is an intuitive assumption, and it is largely wrong.
The most important rebuttal of the deficit model comes from the simple observation that differential attainment is present in UK-qualified BME doctors - individuals who completed their entire education within the British system and whose first language is English. A 2011 systematic review and meta-analysis by Woolf, Potts, and McManus in the BMJ, examining ethnicity and academic performance across UK-trained doctors and medical students, found consistent performance gaps at undergraduate and postgraduate level that could not be explained by the standard proxy variables. Socioeconomic status, prior academic achievement, and language do not account for the differences. As the authors of a 2019 analysis in the British Journal of General Practice observed, it is difficult to explain the gap in UK-qualified BME versus white doctors without considering the role of racism and discrimination - a conclusion that many in the profession remain reluctant to accept.
The awarding gap begins before postgraduate examinations, and arguably before medical school itself. A retrospective cohort study examining UKFP application scores across the graduating cohorts of 2016–2020, published in BMC Medicine, found clear awarding gaps between BAME students and their white peers that were present at entry and persisted throughout undergraduate training. White students scored significantly higher on the Situational Judgement Test, which contributed 50% of the total UKFP application score until 2024 - a finding that has implications for the career trajectories set in motion at graduation. The decision to move away from SJT scores in Foundation Programme allocation from 2024 onwards may mitigate some of this downstream effect, but it does not address the upstream causes.
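The mechanics of that pre-2024 downstream effect were simple weighted arithmetic, as the hypothetical sketch below shows. The numbers and the normalisation are invented for illustration and are not the actual UKFP point scheme.

```python
# Hypothetical illustration of weight flow-through, not the real UKFP
# scoring scheme. Assume both components are normalised to a common
# 0-100 scale and the SJT carries half of the total, as it did until 2024.
def total_score(epm, sjt, sjt_weight=0.5):
    return (1.0 - sjt_weight) * epm + sjt_weight * sjt

# An invented four-point mean gap on the SJT, with identical EPM scores:
gap = total_score(epm=70, sjt=68) - total_score(epm=70, sjt=64)
print(gap)  # 2.0: half of the SJT gap appears in the final ranking score
```

Because foundation school allocation ranked candidates on that total, a systematic group-level gap on the SJT fed directly into systematically lower allocation rankings.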
Quantitative data can establish that a gap exists and quantify its magnitude. Understanding why it exists requires qualitative enquiry, and there is now a substantial body of it - pointing consistently toward structural and relational factors rather than individual deficit.
BME trainees in qualitative studies report being negatively stereotyped by peers and supervisors, feeling compelled to work harder than their white peers simply to be perceived as equally competent, and finding it significantly more difficult to access support from senior clinicians in challenging situations. The experience of isolation is recurrent: a trainee whose relationship with a supervisor breaks down is likely to be more isolated if that trainee is BME, because the difficulty with the supervisor is compounded by separation from social networks outside work. The COGPED and Royal College of General Practitioners differential attainment seminar report of 2018 documented GP trainees describing exactly this dynamic - being very visible when things go wrong, and largely invisible when things go well.
Vaughan and colleagues' work on social capital in medical education, cited across multiple subsequent analyses, demonstrates that high-achieving students were more likely to have at least one tutor or clinician within their social network, while BME and Muslim students were among the least likely to have such connections. Social capital - the informal currency of mentorship, sponsorship, and insider knowledge about how to navigate a system - is unequally distributed in medical education, and that inequality maps onto ethnicity and socioeconomic background in ways that are predictable and persistent.
A 2025 systematic review by Shrestha, Butler, and colleagues, published in Medical Teacher, synthesised qualitative literature on differential attainment in UK postgraduate medical education across 33 studies and identified six recurring themes: language and communication barriers, difficulties adapting to UK clinical practices, cultural and social integration challenges, perceived bias in assessments, direct experiences of discrimination, and work-life balance pressures disproportionately borne by certain groups. The review is notable for its emphasis on the lived experience of training - the texture of daily inequality that examination data cannot capture.
The UK Medical Licensing Assessment, which became fully mandatory for graduating UK medical students from the 2024/25 academic year, was conceived in part as a response to exactly these concerns about consistency and equity. The GMC's ambition was to establish a single, universal standard for all doctors entering UK practice, regardless of whether they trained here or abroad, and to replace the fragmented patchwork of school-level assessments - which varied substantially in volume, format, and rigour - with a standardised threshold grounded in the MLA Content Map.
The argument for standardisation from an equity perspective was straightforward: if different medical schools were assessing their students in different ways and to different standards, then the licence to practise meant something different depending on where you had studied. A genuinely common threshold, applied uniformly, would at least ensure that everyone was being measured against the same yardstick.
The counter-argument is equally straightforward, and considerably harder to dismiss: standardisation does not eliminate differential attainment if the sources of that attainment are structural and relational rather than curricular. If BME candidates perform worse in examinations not because they know less but because they have had worse training experiences, less access to social capital, and more exposure to discrimination throughout their education, then applying a universal examination changes the measurement instrument without changing the conditions that shaped performance. A universal test administered in an unequal system is likely to record inequality with greater precision - not to reduce it.
The parallel with undergraduate assessment is instructive. Research published in BMC Medical Education examining variability in summative assessment across 25 UK A100 medical courses found substantial differences in the volume, type, and intensity of assessment, and showed that these differences correlated with postgraduate attainment outcomes. The MLA AKT addresses one dimension of this variability - knowledge assessment at graduation - but the undergraduate experiences feeding into that moment remain as heterogeneous as before.
Several genuinely contested questions follow from this evidence, and they are the kinds of questions that the medical education community has sometimes struggled to address with the urgency the data demand.
The first is whether high-stakes assessments are themselves neutral. The Ellis et al. 2025 BMC Medicine study found differential attainment in clinical examinations as well as written ones - which complicates the intuitive assumption that OSCEs, with their structured marking criteria, are less susceptible to bias than multiple-choice tests. Research examining MRCP(UK) PACES and similar assessments has reached varied conclusions about the role of examiner bias, with some studies finding no systematic evidence of bias at the level of individual examiner-candidate interactions and others pointing to subtler effects of examiner demographics and candidate presentation. The debate is unresolved.
The second question concerns the responsibility of institutions for the training environment rather than just the assessment outcome. The Shrestha et al. systematic review explicitly frames differential attainment as something that "continues to disproportionately affect" international medical graduates and ethnic minority trainees - a choice of language that acknowledges the structural, ongoing nature of the problem. If training environments are themselves generating differential outcomes through unequal access to support, mentorship, and social networks, then the institutional response should target those environments - not just offer targeted revision support to candidates who are underperforming in examinations.
The third question, perhaps the most uncomfortable, is about the "deficit model" trap. When an institution identifies a group of trainees who are failing at higher rates and responds by offering them additional teaching, coaching, or remediation, it is implicitly framing the problem as located within those trainees rather than within the system. The evidence suggests this framing is wrong. Fyfe and colleagues, writing in Perspectives on Medical Education in 2022, offered a set of "do's, don'ts, and don't knows" for redressing differential attainment related to race and ethnicity in medical schools - and placed avoiding deficit-framing at the centre of effective institutional response.
For anyone currently preparing for the AKT or MLA, this landscape matters in several ways. At the most direct level, understanding differential attainment as a concept - its definition, its evidence base, its contested causes, and its systemic dimensions - is consistent with the professional values and population health strands of the MLA Content Map. Questions about health inequalities, structural determinants of health, and equity in access to care appear regularly in AKT-style assessments, and recognising differential attainment in medical education as itself an equity issue is part of that broader literacy.
At a deeper level, the debate invites reflection on what examinations actually measure. The AKT tests the application of clinical knowledge to scenarios - a format that tends to reward a particular kind of academic fluency that is itself socially distributed. That does not make knowledge assessment illegitimate; the Cipriani principle applies here as much as anywhere else, in that demonstrating knowledge remains necessary even if it is not sufficient. But it does mean that candidates who understand why the MLA was designed the way it was, and what its limitations are, are better placed to engage with the regulatory and educational context of their profession.
The GMC's ongoing programme on tackling differential attainment, available at gmc-uk.org, and the published research drawing on UKMED represent an institutional commitment to transparency about a problem that medicine has historically been reluctant to name. The Ellis et al. data, the Woolf meta-analysis, the Shrestha systematic review - these are not abstract academic documents. They describe the training experiences of a significant proportion of the doctors who are currently sitting the same examination that AKT candidates are preparing for.
The gap has not closed. Whether the MLA narrows it or widens it further is a question that will be answerable only once the data accumulate. What is already answerable, with considerable confidence, is that the gap exists, that it is not explained by ability, and that the profession has a duty to understand it rather than explain it away.