Generation generated: Artificial intelligence in health care

Dec 11, 2024
[Image: AI-generated illustration of doctors using advanced technology]

How is artificial intelligence shaping the future of education, health care and the human experience?

In 2023, generative artificial intelligence (AI) had its breakout year as we delved into the realm of AI-generated art, copywriting and chatbots. These captivating and practical tools are just the tip of the iceberg in a world where emerging technology continually finds its way into our daily lives. While AI rapidly expands its influence across various domains, its impact on the field of medicine tantalizes us with the prospect of greater precision, faster decision-making – and perhaps less paperwork. However, the use of AI also sparks concerns regarding bias, accountability and the potential obsolescence of human doctors. Is AI in medicine a cause for alarm or simply the next addition to our technological toolkit?

Inside the Algorithm

Ryan Pferdehirt, PhD, adjunct professor of bioethics at KCU and vice president of ethics services at the Center for Practical Bioethics, said that one key reason to be apprehensive about AI-generated results is the lack of understanding of how the technology arrives at its predictions and decisions.

“The problem with medical AI right now is the black box problem – we know sample sets, they go into [the AI], and then there’s an algorithm and out comes a result,” said Pferdehirt. “And the algorithm is so complicated that the people who created it don’t even know what it is. By the end [of the algorithm's process], it’s producing results that are good, but if you asked it how it’s coming to that decision, nobody really knows.” This lack of transparency raises questions about the trustworthiness of AI-generated results and about who should be held accountable when errors occur.

Despite reservations about its reliability, AI tools offer numerous advantages to health care professionals, according to Cindy Schmidt, PhD, MBA, director of scholarly activity and faculty development for KCU. “1.2 million people are harmed every year by medical error. And as we discover how to harness this new technology, I think we’re going to see a significant reduction in that rate of medical error,” she said. “AI presents a compelling opportunity to assist us in areas where sustaining attention can be challenging.”


AI models also offer efficiency that surpasses the capabilities of human doctors. If AI technology can be implemented in a way that helps current systems analyze real-time data, reducing human error while increasing efficiency, it could help detect diseases sooner: a faster reading of complicated radiological images such as CT scans and MRIs, for example, would mean earlier diagnoses.

While the average physician draws on experience with thousands of patients, often from relatively homogeneous demographics, AI can rapidly analyze data from millions of patients across diverse populations.

What about the fear that AI will take over the role of the human physician? Schmidt reassures us that AI will not replace the human physician but instead become a valuable partner. “This rapidly developing, efficient technology will not lead to an evolution of robot doctors, but rather enable physicians to dedicate more time to their patients, ensuring the highest standard of compassionate care. While the AI tool is doing the more laborious work, you’re freed up to be more creative, imaginative, to use what you know about context and social relationships and intuition,” remarked Schmidt.

While AI promises to save time and enhance decision-making accuracy, health care professionals, including physicians, must consider the downsides of integrating these tools into medicine. One crucial aspect is recognizing potential biases embedded within these systems and understanding how those biases can be amplified.

Pferdehirt pointed out that “everything made by mankind... is made by mankind. And it has our biases put into it right from the very beginning. If we have a one in 10,000 bias, through the process of machine learning, the one in 10,000 explodes. It turns a little bias into a huge bias when you repeat the process millions of times.”

For example, a case involving a cancerous lesion revealed that an AI model, which drew from predominantly Caucasian data, inaccurately assessed a lesion in a Black patient and ultimately led to a tragic outcome.


Another societal consideration as we embrace AI tools is accountability. The technology industry has traditionally fostered rapid innovation by operating on the principle of acting fast and fixing problems later. However, with AI now involved in crucial decision-making processes, especially in health care, the stakes are higher, and mistakes can have life-or-death consequences.


Addressing the blind spots and biases of AI will require vigilance from the health care professionals who employ these technologies. “AI can hallucinate – it can just make something up, so we have to check it and be responsible for it. We are responsible for our AI partners and so we have to check its work, because it’s our work. We have to be responsible because it will generate errors,” said Schmidt.


Pferdehirt echoed this sentiment with a quote from a 1979 IBM presentation: “A computer can never be held accountable; therefore, a computer must never make a... decision.” He went on to raise the question of responsibility in cases of malpractice. “If you’re going to file a suit, who are you going to sue? The algorithm? Can’t sue a machine. The doctor, the hospital? The company that created the algorithm that put out the bad information that led to death?” Because risk is spread across so many parties, no one owns the decision, and no one can be held responsible for it. Until someone is given legal responsibility, that is unlikely to change.

Powering Tomorrow’s Doctors

The rapid development of AI and the questions surrounding its validity and safety in health care underscore the urgency of ensuring that our future medical professionals are fully informed.

“Students need to learn what machine learning is; they need to understand what algorithmic learning is. Understand what they are, how it is working, not just what it does,” said Pferdehirt. “The more we understand how it works…the more we will understand its limitations. If we don’t understand how it works, there is more potential for blind trust in a machine.”

Today, health sciences schools across the nation are introducing their students to AI, helping them recognize that the future of medicine will belong to those who understand and harness its potential.


Student Doctor Justin Mitchell, OMS III, KCU Student Government Association president for the Kansas City campus, recognizes the importance of AI in health care, which led him to attend the American Osteopathic Association’s 2023 Annual House of Delegates meeting in Chicago, where AI in health care was a central topic of discussion. During the meeting, in a groundbreaking partnership with the AI tool ChatGPT, the Missouri Association of Osteopathic Physicians and Surgeons initiated conversations about the impact of AI on osteopathic medicine, culminating in a proposal to form a dedicated task force of experts to investigate its effects. Mitchell’s key takeaway from the experience was AI’s potential as an efficient, life-organizing tool. “I am genuinely excited to see how [AI] can streamline your life, benefiting students, attending physicians and future residents alike,” he said.

Today our world stands on the precipice of AI’s transformative influence on the health care industry and practices. The road ahead will demand thoughtful navigation and consideration, such as:


  • How will legislation hold the technology industry accountable? As AI increasingly influences critical medical decisions, will regulatory guidelines ensure transparency, fairness and accountability in the development and deployment of these technologies?


  • How does the health care industry maintain that it is not solely driven by data, but that it is a healing practice? How can AI enhance the humanity in patient care while helping health care professionals improve outcomes?


  • How will AI’s potential biases be identified and mitigated, and who will be responsible for maintaining vigilance in ensuring the accuracy and accountability of AI tools?

  • How will health science schools provide the comprehensive training needed to understand the intricacies of AI, its algorithms and its limitations? Knowledge will empower health care professionals to make informed decisions and reduce harmful outcomes.

Though AI was born nearly seven decades ago, its partnership with health care is just beginning, and collaboration and responsible implementation will shape the outcome. This technology is neither a cause for alarm nor a simple addition to our toolkit. It is a dynamic force that requires careful consideration, education and ethical stewardship. By embracing the potential while addressing the challenges, health care professionals can, hand-in-hand with their AI partner, embark on a journey toward a brighter, healthier future.

