The revolution of artificial intelligence in medicine is meant to empower human doctors, not replace them. But before that happens, comprehensive ethical guidelines are needed, researchers say.
The use of AI in medicine has been making headlines in recent years. From detecting tumors in mammograms as effectively as human radiologists to evaluating organs before transplantation, AI has us speculating what the future of medicine will look like.
Despite new programs constantly being developed, major limitations to using AI in medicine remain. Meanwhile, concerns about patient privacy are mounting, as many of these programs are created by tech giants with a poor track record of protecting user data.
What will the AI revolution look like?
Mihaela van der Schaar, John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence, and Medicine at the University of Cambridge, is a proponent of what she calls "a reality-centric AI."
For her, this means ensuring the data used is truly representative, fair, and diverse, and building better machine learning models that are more trustworthy and safe. Most importantly, it means making sure AI doesn't marginalize or replace humans but rather empowers them.
"AI should not replace human intelligence but work well with humans to make them smarter, more able, to educate them. We should make sure that AI is teaching us new skills and information, almost like a personalized coach dedicated to each one of us."- Mihaela van der Schaar
A recent study that included mammograms of over 55,000 women found that replacing one radiologist with AI for independent reading of screening resulted in a 4% higher cancer detection rate compared with radiologist double reading.
However, Schaar says that in medicine, it will always be important to have a human doctor in the room, but the role of a clinician may change. For instance, AI can help identify certain anomalies in radiology, while humans can detect other problems. Therefore, AI and humans should learn to work together.
"This is where I hope we are going rather than replacing humans and only robots taking care of us, which would be quite dehumanizing," she adds.
One of the examples of how AI can empower clinicians is a 2022 study published in Gastroenterology. It found that AI-assisted colonoscopies may cut the risk of missing neoplastic lesions — the leading cause of post-colonoscopy colorectal cancer — by half. Because an estimated one in four lesions are missed during screening, using AI may help to save many lives.
Giovanni Briganti, M.D., Ph.D., Chair of AI & Digital Medicine at Université de Mons in Belgium, also says AI is unlikely to replace human doctors.
"People who will use AI to replace doctors have not understood one single thing about what healthcare is and how it is delivered. If a general manager of a healthcare system decides to replace one specific doctor with an AI system, it will be their loss because healthcare is not delivered through a computer system," he says.
Even perfect AI models carry risks
There are several risks associated with the use of AI in medicine, Briganti says. For example, a problem of adoption may arise when a perfect model, able to predict and diagnose a disease with 100% accuracy, is not applied efficiently.
"The second risk is the models that are not perfect and lose efficacy when applied in a new hospital. There is a major challenge that involves testing and validation of clinical AI models."- Giovanni Briganti
Injuries and errors are among the main concerns regarding the use of AI in medicine, according to a Brookings Institution report. While doctors also make medical mistakes, patients and providers may react differently when the harm is done by software rather than a human. And because AI is becoming so widespread, one problem in a system may result in injuries to thousands of patients.
Schaar says the data AI models are trained on often contains harmful biases, increasing the risk that machine learning will propagate them. Her lab therefore turns biased data into fair synthetic data and uses it to train machine learning models.
"Another way is what we call data-centric machine learning. It is not focused on building the model, such as predictive models, but instead tries to understand data and its diversity," she says.
Who is in charge of ethical guidelines in AI?
Briganti says there is a need for "clear ontological guidelines on how a doctor should behave when using AI."
But who should be responsible for developing them?
Most existing technology relating to machine learning and neural networks is being developed by large tech companies, leading to concerns over privacy. A 2018 survey found that only 11% of Americans were willing to share health data with tech companies, compared to 72% with physicians.
Several lawsuits have already been filed over data-sharing between large health systems and the companies developing AI.
Schaar says that although AI developers are responsible for building a safe, trustworthy, and integrable AI, they are not ethicists. That's why they must work with clinicians to develop ethical guidelines.
For instance, machine learning can be used for organ transplantation, where resources are very limited and it is necessary to decide who receives a transplant and who remains on the waiting list, with all the consequences that entails.
"We can get machine learning methods, but ethical considerations when making these decisions go beyond the heads of the machine learners," she says.
According to Schaar, ethical guidelines on using AI in medicine should cover its safety. For example, ensuring that it remains safe when applied to a population very different from the one the models were trained on.
She says, "It is also about trustworthiness and interpretability, such that clinicians and patients understand the recommendations being made. It is not only empowering clinicians but also empowering patients."
AI for early diagnosis and prevention
While concerns regarding the risks of using AI are genuine, so are the benefits for patients. For instance, Google developed a program that can predict the onset of acute kidney injury two days before it occurs, whereas clinicians typically notice it only after it has happened.
Researchers in the United Kingdom are testing organ quality assessment technology that allows doctors to assess if the organ is healthy enough to be transplanted. This can increase the number of liver and kidney donor organs suitable for transplantation.
But what about AI's role in preventing diseases? Schaar says it could help identify who is at risk. With the increasing use of wearables, healthy people gain access to sensors that collect information on nutrition, the environment, pollution, and other factors.
Schaar says, "Understanding how these factors are changing, at what time and in which subpopulations they are occurring, is a part of understanding how disease starts. And then we can start to think about prevention."
Briganti considers AI essential to personalized medicine, which tailors medical decisions and interventions to the individual patient.
"AI is the building block because it can help us focus on what makes the patient so unique. It allows making predictions in a very personalized way."- Giovanni Briganti
Delegating administrative tasks to AI
Although AI is unlikely to replace doctors, it can make their everyday life easier. ChatGPT, for instance, could take over the administrative part of healthcare.
"Doctors have many administrative tasks that can be automatized thanks to large language models," Briganti says.
Such automation, in fact, is already happening. Some clinics use AI to transcribe patient appointments directly into their electronic health record software.
AI could also help flatten the learning curve of medical students and medical doctors learning a specific domain, according to Briganti.
He says, "We can use and leverage large language models and other foundational models to guide their decision-making and improve their knowledge."
- The Lancet Digital Health. Artificial intelligence for breast cancer detection in screening mammography in Sweden: a prospective, population-based, paired-reader, non-inferiority study.
- Brookings Institution. Risks and remedies for artificial intelligence in health care.
- BMC Medical Ethics. Privacy and artificial intelligence: challenges for protecting health information in a new era.
- Nature. A clinically applicable approach to continuous prediction of future acute kidney injury.
- Gastroenterology. Impact of Artificial Intelligence on Miss Rate of Colorectal Neoplasia.