
5 takeaways from: ‘Artificial Intelligence: How to Ensure it Benefits Patients?’


Professor Lionel Tarassenko, Head of the Department of Engineering Science, led a panel discussion titled ‘Artificial Intelligence: How to Ensure It Benefits Patients?’, part of the NIHR Open Day at the John Radcliffe Hospital. Taking in contributions from the fields of science, philosophy and industry, it touched on privacy, patient safety and the future of medicine. Here are five things we learned:

AI won’t replace humans any time soon

“One of the great benefits of AI,” says Chris Holmes, Professor of Biostatistics and Programme Director for Health and Medical Sciences at the Alan Turing Institute, “is that we hope it will give clinicians time. It allows a human expert to give their time where it is most useful.” Algorithms, at least as they currently exist, are set not to replace humans but to assist them. He points to the example of radiology, where clinicians pore over page after page of detailed scans. Huge amounts of time could be saved by an algorithm that directs their attention only to the scans showing potential abnormalities.
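As a minimal sketch of that triage idea (the names, scores and the 0.5 threshold below are illustrative assumptions, not any deployed clinical system), the flagging step might look like this:

```python
# Hypothetical triage step: surface only the scans an abnormality model
# flags for priority human review. Scores and threshold are illustrative.
from dataclasses import dataclass

@dataclass
class Scan:
    patient_id: str
    abnormality_score: float  # model's estimated probability of an abnormality

def triage(scans, threshold=0.5):
    """Return the scans flagged for priority review, highest score first."""
    flagged = [s for s in scans if s.abnormality_score >= threshold]
    return sorted(flagged, key=lambda s: s.abnormality_score, reverse=True)

scans = [Scan("A", 0.04), Scan("B", 0.91), Scan("C", 0.62)]
for scan in triage(scans):
    print(scan.patient_id, scan.abnormality_score)  # B 0.91, then C 0.62
```

In such a workflow the clinician still reads every flagged scan; the algorithm only decides the order of attention, and a cautiously low threshold costs extra reading time rather than missed cases.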

Dr Fred Kemp, of Oxford University Innovation, expanded on this point with the example of genetic testing companies that offer to sequence your DNA for a fee. Their results aren’t fed back directly to the customer, but are interpreted by an expert, who can use the genetic markers to predict, say, the customer’s chances of developing heart disease. Without the relevant expertise, the results would be meaningless. In the same way, the results of AI algorithms need to be interpreted for patients – and that will require a trained clinician.

A member of the audience raised a similar question: if a series of patient test results were fed through an algorithm, he asked, how would we mitigate the risk of something being missed? Fred pointed out that, unlike humans, algorithms don’t have off days, don’t get tired and don’t get distracted; in collaboration with an experienced professional, they can be a powerful force for good.


Healthcare is not like shoe shopping

Dr Angeliki Kerasidou is a professor of theological philosophy with an interest in the ethics of AI. As she puts it: “We need to look at the future and ask ourselves what kind of healthcare we want.”

She drew the audience’s attention to the predictive algorithms that already exist in our day-to-day lives: “Surveillance seems like too strong a word,” she said, “but that’s what this is.” We accept Amazon’s ‘customers also viewed’ recommendations when purchasing shoes and handbags – but isn’t healthcare a little different?

It’s easy to claim that the privacy-conscious can just opt out, but she says it’s not that simple. We need our email accounts for work, and we tick the box to accept terms and conditions because we need the services behind them. Do we want to give up our freedom for the benefits we’re promised? If so, how do we build in an exit for people who decide that the trade-off isn’t worth it?


Transparency is key

Sensyne Health Vice-President Dr Nick Scott-Ram provided the panel with a business perspective. A company operating at the forefront of AI in healthcare, Sensyne analyses anonymised NHS patient data to improve care and accelerate the development of new drugs.

With an MA in Natural Sciences and a PhD in the Philosophy of Science, Nick has expertise in both science and ethics. He challenged the audience to consider the question of how patients can be expected to trust an adaptive algorithm, and emphasised the need for clear discussion with the patient about what is happening with their data, alongside the provision of comprehensive risk profiles.

Transparency, he stresses, is a key concern for Sensyne: as a publicly traded company, it is required to be highly transparent, not just to its shareholders but also to the patients it serves.


It’s already having an impact

Taking place in the Academic Corridor of Oxford’s John Radcliffe Hospital, the panel discussion attracted an audience of medical professionals keen to know how AI could transform patient care in their own fields.

Responding to a question about where else AI could be useful for the NHS, Fred pointed to the example of Navenio, another Oxford spinout. With a product they describe as ‘Uber for Healthcare Teams’, they devised a system to get porters to the right place, at the right time – freeing up overworked nurses and improving patient care.
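Purely to illustrate the ‘right place, right time’ idea (this is not Navenio’s actual algorithm; the names, coordinates and straight-line distance metric are assumptions for the sketch), a nearest-available-porter dispatch could look like this:

```python
# Illustrative dispatch: send the nearest available porter to each task.
# Positions and tasks are made up; the sketch also assumes at least as
# many free porters as tasks.
import math

porters = {"P1": (0.0, 0.0), "P2": (5.0, 1.0)}  # porter -> (x, y) position
tasks = [("move bed to ward 7", (4.0, 2.0)), ("deliver samples", (1.0, 0.5))]

def dispatch(porters, tasks):
    available = dict(porters)
    plan = []
    for task, location in tasks:
        nearest = min(available, key=lambda p: math.dist(available[p], location))
        plan.append((task, nearest))
        del available[nearest]  # that porter is now busy
    return plan

print(dispatch(porters, tasks))
# [('move bed to ward 7', 'P2'), ('deliver samples', 'P1')]
```

A real indoor-location system would replace the fixed coordinates with live position estimates and re-plan as porters become free again.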


Ethicists and scientists will have to work together

That was the message from Angeliki. Ethicists aren’t aiming to stop scientists from making breakthroughs, she insisted; they aim to make sure that the benefits of AI reach the public, and that society progresses the way we want it to.

Chris pointed to an issue currently facing AI researchers: ‘invariance to ethnicity’, or the need to ensure that an algorithm works for all users, regardless of race or ethnicity. Some sections of the population are known to be more susceptible to particular medical conditions. Can we guarantee that an algorithm developed and tested in one location would work equally well in a city with a different ethnic mix?
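One simple way to probe that invariance is to report the same performance metric separately for each group. A hedged sketch with synthetic data (the group labels and records below are made up purely for illustration):

```python
# Hypothetical per-group check: compute the model's sensitivity (the share
# of real cases it catches) separately for each group.
from collections import defaultdict

def sensitivity_by_group(records):
    """records: (group, true_label, predicted_label), with 1 = condition present."""
    caught = defaultdict(int)   # true positives per group
    missed = defaultdict(int)   # false negatives per group
    for group, truth, prediction in records:
        if truth == 1:
            if prediction == 1:
                caught[group] += 1
            else:
                missed[group] += 1
    return {g: caught[g] / (caught[g] + missed[g])
            for g in set(caught) | set(missed)}

records = [("X", 1, 1), ("X", 1, 1), ("X", 1, 0),
           ("Y", 1, 1), ("Y", 1, 0), ("Y", 1, 0)]
print(sensitivity_by_group(records))  # {'X': 0.67, 'Y': 0.33} (approx.)
```

A large gap between the groups’ sensitivities would be a warning that the algorithm has not travelled well.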

Angeliki describes these breakthroughs as the first time we have used machines that don’t just expand our physical capacity (in the way that, say, a stethoscope does), but that change the way we make decisions – a big step for humanity.


Find out more about Lionel's research.
