Artificial Intelligence and its Impact on Healthcare – Can the Pros Ever Outweigh the Cons?

AI - or artificial intelligence, to give it its full name - is a term that, until quite recently, was probably used by only a small minority of people. However, since the creation and proliferation of ChatGPT and its imitators, we have all been exposed to considerable media hype, elaborate hopes for a brave new world, and the inevitable backlash. From the loss of tens of thousands of jobs to the creation of a machine society where mere humans are left behind, the predictions, however fantastic, suddenly make the negatives seem to vastly outweigh whatever positives AI promised.

But stepping away from the extremes, and venturing a little deeper into the world of AI than tabloid journalists typically do, reveals a more nuanced picture. AI can be a very powerful tool – if it is managed, understood and applied with human intelligence. Could it still prove to be a powerful force for good in healthcare?


Difficult beginnings

AI had a less than auspicious start in healthcare – notably with IBM’s Watson. IBM spent billions and more than a decade trying to revolutionise everything from diagnosis to treatment, and even the screening of candidates for clinical trials. After a series of high-profile setbacks, the company is selling up.


Biases and blind spots

Aside from accuracy, there is the issue of bias. Because of the way data is collected, AI often ends up reflecting the biases and blind spots of the humans who created it. For example, the algorithms used to determine who should get transplants and cancer surgeries in the US have been shown to display racial bias, jeopardising the health of millions of patients. The bias arises because the algorithm uses health costs as a proxy for health needs, and far less money is spent on Black patients with the same level of need – so the proxy systematically understates how ill they really are.
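To make the mechanism concrete, here is a minimal sketch – entirely synthetic numbers and hypothetical group names, not the actual algorithm from the US study – showing how ranking patients by historical cost can shut out a group on whom less money is spent, even when true need is identical:

```python
# Synthetic illustration of proxy-label bias. Two groups have identical
# true health need, but historically less money is spent on group B, so
# ranking on cost (the proxy) systematically under-selects group B.
import random

random.seed(0)

patients = []
for _ in range(10_000):
    group = random.choice(["A", "B"])              # hypothetical demographic groups
    need = random.random()                         # true health need in [0, 1]
    spend = need * (1.0 if group == "A" else 0.6)  # less spent on B at equal need
    patients.append((group, need, spend))

# The "algorithm": enrol the top 20% of patients ranked by historical cost.
patients.sort(key=lambda p: p[2], reverse=True)
enrolled = patients[: len(patients) // 5]

for g in ("A", "B"):
    high_need = [p for p in patients if p[0] == g and p[1] > 0.8]
    picked = sum(1 for p in enrolled if p[0] == g and p[1] > 0.8)
    print(f"group {g}: {picked}/{len(high_need)} high-need patients enrolled")
```

In this toy setup, almost every high-need patient in group A is enrolled while almost none in group B is; ranking on need itself would remove the gap entirely. The problem lies in the choice of proxy label, not in the ranking machinery.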


Too fast and furious

Then there is the somewhat frenetic, even chaotic, way that AI has been rolled out. For many years now, healthcare systems and hospitals have fumbled the adoption of AI tools. This is partly due to the breakneck speed at which new tools emerge and are superseded.


Recent research

In a new study carried out by Duke University, researchers interviewed 89 professionals involved in AI rollouts at 11 healthcare organizations - including Duke Health, Mayo Clinic, and Kaiser Permanente. The result was a practical framework to follow when rolling out AI tools.


A framework for progress

This eight-point framework is based on clear, explicit decisions that can be made by executives, IT leaders, or frontline practitioners.

The process involves the following steps (a code sketch of this lifecycle follows the list):

  1. Identifying and prioritizing a problem
  2. Looking at how AI could help
  3. Developing a way to assess the outcomes and successes of any AI
  4. Figuring out how to integrate it into existing workflows
  5. Validating the safety, efficacy, and equity of AI in the health care system before clinical use
  6. Rolling out the AI tool through communication, training, and trust building
  7. Monitoring
  8. Updating or decommissioning the tool as required
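
As a purely illustrative sketch – the stage names below are paraphrased from the list above, not taken from the Duke study’s own terminology – the framework can be thought of as an ordered lifecycle in which a tool cannot move forward until the earlier decision points have been explicitly signed off:

```python
# Illustrative only: stage names paraphrase the eight-point list above.
from enum import IntEnum

class Stage(IntEnum):
    IDENTIFY_PROBLEM = 1
    ASSESS_AI_FIT = 2
    DEFINE_OUTCOME_MEASURES = 3
    PLAN_WORKFLOW_INTEGRATION = 4
    VALIDATE_SAFETY_EFFICACY_EQUITY = 5
    ROLL_OUT = 6
    MONITOR = 7
    UPDATE_OR_DECOMMISSION = 8

def can_enter(target: Stage, signed_off: set) -> bool:
    """A tool may enter a stage only once every earlier stage is signed off."""
    return all(s in signed_off for s in Stage if s < target)

# Rollout is blocked until validation (stage 5) has been completed.
done = {Stage.IDENTIFY_PROBLEM, Stage.ASSESS_AI_FIT,
        Stage.DEFINE_OUTCOME_MEASURES, Stage.PLAN_WORKFLOW_INTEGRATION}
print(can_enter(Stage.ROLL_OUT, done))   # False – validation still missing
done.add(Stage.VALIDATE_SAFETY_EFFICACY_EQUITY)
print(can_enter(Stage.ROLL_OUT, done))   # True
```

The point of the gate is the framework’s core idea: rollout, monitoring and decommissioning are explicit decisions, not afterthoughts.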


How can AI help?

Clearly identifying problems that AI could help with is essential. Many AI solutions appear to do the same thing as a medical practitioner, but they don’t necessarily do the entire task. An AI may be able to scan and sort images, but that doesn’t mean it can replace an experienced radiologist; it merely reduces the repetitive work, freeing the radiologist to focus on more appropriate activities.


Better measurement

Assessing the effectiveness of an AI tool, and whether it is even appropriate for a given problem, is difficult too. Few AI applications have outcomes that are obviously tangible or measurable. It is essential that they have measurable outcomes, especially in terms of their performance across different racial and ethnic groups.

Measuring and monitoring outcomes is complex and demands new levels of scrutiny. Health systems are notoriously sketchy when it comes to tracking how well interventions actually work in individual patient cases. Data gathering has improved, but larger data sets can hide vital variations that indicate important details: monitoring outcomes across large, pooled groups often fails to give the full picture.
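
A small, made-up example shows how a pooled metric can mask exactly this kind of variation – the numbers below are invented purely for illustration:

```python
# Invented numbers: the pooled figure looks respectable while one
# subgroup is served far worse than the other.
records = (
    [("A", 1, 1)] * 90 + [("A", 1, 0)] * 10 +   # group A: 90/100 cases caught
    [("B", 1, 1)] * 60 + [("B", 1, 0)] * 40     # group B: 60/100 cases caught
)

def sensitivity(rows):
    """Share of true disease cases (label == 1) that the model flagged."""
    positives = [r for r in rows if r[1] == 1]
    return sum(1 for r in positives if r[2] == 1) / len(positives)

print(f"pooled sensitivity: {sensitivity(records):.0%}")   # 75%
for g in ("A", "B"):
    subset = [r for r in records if r[0] == g]
    print(f"group {g} sensitivity: {sensitivity(subset):.0%}")
```

A pooled 75% hides the fact that group B’s cases are caught far less often – which is why stratified reporting matters.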


Fitting in

Simply getting the algorithm right is just the start; getting the AI to work for clinicians is another important element. Even simple AI tools, such as methods for autocompleting triage notes in emergency departments, have been implemented only for practitioners to receive too little training or to see no clear benefit. Even if an algorithm is right and potentially useful, if it doesn’t fit into established workflows it can be misused or simply ignored.


Trust is everything

AI should be carefully built into a clinician's workflow, with trust and understanding. Trust is essential, but it cuts both ways: if an AI system is almost always right, users tend to stop checking it and begin to rely on it even against their own instincts; yet when an AI makes obvious mistakes, doctors won’t use it at all.


It takes time

Not all practitioners have the time to learn how to use these new tools, given how swiftly they are introduced and superseded – especially when they're barely keeping up with the work they already have. AI can save time, but only if people have the time to invest in learning it properly and the skill to use it to its full potential.


In conclusion...

To truly harness the potential of AI in healthcare, health systems may need to create new ways to interact with and monitor these systems, new communication strategies to maintain professional boundaries, and new expertise. Practitioners will need time to fully understand the benefits and limitations of any AI system, and to be able to feed back their experiences and concerns in a way that helps steer and refine future applications.
