One in four GPs using AI at work despite vast majority having no training, survey finds
Some 95% of the doctors who said they use generative AI in their work reported no professional training and 85% of them said their employer has not encouraged its use.
Wednesday 26 November 2025 14:35, UK
One in four GPs is now using artificial intelligence (AI) in their work, despite almost none having formal training or guidance from their employers, new research has found.
The largest year-on-year survey of UK GPs on generative AI has revealed an increase in the number of doctors using tools like ChatGPT in their everyday clinical work.
Some 95% of the doctors who said they use generative AI in their work reported no professional training and 85% of them said their employer has not encouraged its use.
Last year, the same research team found one in five GPs was using the technology.
"In just 12 months, generative AI has gone from taboo to tool in British medicine," said Dr Charlotte Blease, from Uppsala University in Sweden and Harvard Medical School.
"Doctors are using these systems because they help - not because anyone told them to. The real risk isn't that GPs are using AI; it's that they're doing it without training or oversight."
Some 35% of the doctors used AI for writing documentation, 27% for differential diagnoses and 24% for treatment or referrals.
"This should be a wake-up call," said Dr Blease.
"AI is already being used in everyday medicine. The challenge now is to ensure it's deployed safely, ethically, and openly."
The study's authors, from Uppsala University, Basel University, the Karolinska Institute, the University of Manchester and Harvard Medical School, surveyed 1,005 GPs around the UK.
The findings were published on Tuesday in the journal Digital Health.
Researchers highlighted the risks of using AI in clinical settings, such as the technology's tendency to get things wrong, or "hallucinate", and the potential for "algorithmic discrimination" arising from biases in the models' training data.
The authors also raised concerns about the data privacy of patients.
An NHS England spokesperson said AI is "helping NHS staff spend more time caring for patients - assisting in speeding up diagnosis, reading test results and reducing bureaucracy.
"It's important strong safeguards, clear patient information and proper training for clinicians are in place so that the technology is trusted and delivers real benefits for staff and patients."
A Scottish government spokesperson said it recognises the role AI can play "in the delivery and planning of health and social care services.
"However, it is important we work closely with our NHS workforce to do this to ensure the safe, effective and ethical use of AI for public good."
A spokesperson from Northern Ireland's Department of Health told Paste BN that GPs should "continue to operate within existing governance arrangements and professional standards when considering any use of digital tools, including the use of Artificial Intelligence."
"In Northern Ireland, decisions on clinical support technologies must comply with local policies on patient safety, data protection and accountability.
The spokesperson said the department will "shortly" publish an AI framework to "provide system-wide guiderails for safe, ethical and transparent use of AI".
"The framework sets out principles for governance, oversight and risk management; and will support organisations with guidance on training, evaluation and alignment with regulatory standards - while enabling innovation."