Overview
NHS England processes over 1.2 million patient contacts every 24 hours. Frontline clinicians — GPs, A&E doctors, nurses — make hundreds of consequential decisions each shift, often under significant time pressure and with incomplete information. Missed diagnoses and delayed referrals remain a major challenge across the system.
I led the design of a clinical decision support system that uses AI to surface relevant diagnostic pathways, flag potential risk indicators, and recommend evidence-based next steps — all integrated into the existing clinical workflow rather than sitting alongside it as a separate tool.
The Challenge
Clinical environments are unlike any other design context. Users range from newly qualified junior doctors to consultants with 30 years of experience. Cognitive load is extreme. Interruptions are constant. And every decision can have irreversible consequences for a patient.
Previous attempts at clinical AI tooling in the NHS had failed for predictable reasons: they added steps rather than removing them, they surfaced too many alerts (causing alert fatigue), and they didn't account for the diversity of clinical judgment. Our research found that experienced clinicians were actively bypassing decision support tools because the tools felt patronising. Any new system would need to respect clinical expertise, not attempt to replace it.
The Approach
The design process was deeply embedded in clinical settings. I spent over 120 hours observing consultations across three NHS trusts — A&E departments, GP surgeries, and acute wards — before a single wireframe was produced. This was essential: the workflow nuances that emerged could not have been captured through interviews alone.
Co-design workshops with clinicians across experience levels produced a set of non-negotiable principles: the system must never block a clinical action, AI suggestions must be immediately dismissible, every flag must include its reasoning, and the visual design must not compete for attention with the patient record.
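To make these principles concrete, the sketch below shows one way the suggestion contract could be expressed in TypeScript. It is illustrative only: every name here (ClinicalSuggestion, reasoning, dismissed, visibleSuggestions) is an assumption for this write-up, not the production schema.

    // Hypothetical sketch of a suggestion contract embodying the
    // co-design principles. Names and shapes are illustrative.

    type Severity = "info" | "advisory" | "urgent";

    interface ClinicalSuggestion {
      id: string;
      severity: Severity;
      summary: string;       // one-line suggestion shown inline in the record
      reasoning: string[];   // every flag carries the evidence behind it
      dismissed: boolean;    // set the moment a clinician dismisses it
    }

    // Suggestions render alongside the patient record; there is no
    // modal, no confirmation gate, and nothing that can block a
    // clinical action. Dismissal and missing reasoning are handled by
    // filtering, never by interrupting the clinician.
    function visibleSuggestions(all: ClinicalSuggestion[]): ClinicalSuggestion[] {
      return all.filter((s) => !s.dismissed && s.reasoning.length > 0);
    }

The point of a shape like this is that the principles are enforced structurally: a flag without reasoning simply never renders, and dismissal is a single state change with no follow-up prompt.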
We built and tested eight prototype iterations over six months, using simulation lab sessions with standardised patient scenarios. Each round surfaced critical interaction failures — particularly around how urgency was communicated and how the system behaved when clinical judgment disagreed with the AI recommendation. The final interaction model treats the AI as a peer offering a second opinion, not an authority issuing instructions.
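As a rough sketch of that peer-not-authority behaviour, the fragment below shows how a disagreement between clinical judgment and the AI recommendation might resolve. All names (Recommendation, ClinicianDecision, resolve) are assumptions for illustration and do not reflect the deployed system.

    // Hypothetical sketch of the disagreement path: when the
    // clinician's action differs from the recommendation, the system
    // yields. Names are illustrative, not the production API.

    interface Recommendation {
      suggestionId: string;
      recommendedAction: string;
    }

    interface ClinicianDecision {
      suggestionId: string;
      actionTaken: string;
      rationale?: string; // optional free text, never required
    }

    type Outcome =
      | { kind: "agreed" }
      | { kind: "overridden"; loggedForAudit: true };

    function resolve(rec: Recommendation, decision: ClinicianDecision): Outcome {
      if (decision.actionTaken === rec.recommendedAction) {
        return { kind: "agreed" };
      }
      // Peer, not authority: an override is logged quietly for audit
      // and later evaluation. It never triggers a warning, an
      // escalation, or a second prompt to the clinician.
      return { kind: "overridden", loggedForAudit: true };
    }

The design choice worth noting is what is absent: there is no branch in which the system argues back, which is what the simulation sessions showed clinicians would not tolerate.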
Outcome
The system launched as a pilot across four NHS trusts and demonstrated a 23% reduction in missed high-risk referrals within the first six months. Crucially, voluntary adoption was high — 84% of clinicians in the pilot cohort chose to use it regularly, compared to 30–40% adoption rates typical of mandated NHS digital tools.
The accessibility framework and alert severity design language developed for this project were subsequently adopted by NHS Digital as recommended standards for AI-assisted clinical tools across the health service. The co-design methodology is now published as an NHS guidance document for future clinical AI deployments.