Artificial Intelligence in Medicine: Achieving Breakthroughs in Treatment with Legal Considerations

In the field of medicine, artificial intelligence (AI) is expected to play a vital role in the future. Significant progress has already been made in diagnostic applications, where computers can accurately categorize images to identify pathological changes. However, training AI to assess patients’ dynamic conditions and provide treatment recommendations has proven more challenging. Researchers at TU Wien, in collaboration with the Medical University of Vienna, have now addressed this difficulty.

By leveraging extensive data from intensive care units in different hospitals, researchers developed an AI system capable of suggesting treatments for individuals requiring intensive care due to sepsis. Analyses demonstrate that the AI system already surpasses the quality of human decision-making. However, it is crucial to consider the legal implications of implementing such methods.

Optimizing the utilization of available data

Prof. Clemens Heitzinger from the Institute for Analysis and Scientific Computing at TU Wien explains, “In an intensive care unit, a vast amount of data is collected continuously, monitoring patients’ medical conditions. We aimed to explore whether this data could be better utilized than before.” Prof. Heitzinger is also the Co-Director of the cross-faculty “Center for Artificial Intelligence and Machine Learning” (CAIML) at TU Wien.

Medical professionals base their decisions on well-established rules and are well aware of the parameters necessary for providing optimal care. However, computers can effortlessly consider a much larger set of parameters than humans, which can lead to even better decision-making in some cases.

The computer as a planning agent

“Our project employed a form of machine learning known as reinforcement learning,” says Prof. Heitzinger. “This involves more than simple categorization, such as distinguishing between images showing a tumor and those that do not. It encompasses a time-dependent progression, predicting the likely development of a particular patient. Mathematically, this represents a significant difference, and there has been limited research in this area within the medical field.”

The computer functions as an agent making independent decisions: it receives a “reward” when the patient’s condition improves and is “punished” if deterioration or death occurs. The computer program’s objective is to maximize its virtual “reward” by taking actions, allowing it to automatically determine a strategy with a higher probability of success using extensive medical data.
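The reward-driven loop described above can be sketched with tabular Q-learning on a toy model. Everything here is a hypothetical illustration, not the researchers' actual system: the patient states, treatment actions, and transition probabilities are invented for demonstration, and the real work used far richer intensive-care data.

```python
import random

# Hypothetical toy model: states abstract a patient's condition,
# actions abstract treatment choices. All dynamics are illustrative.
STATES = ["stable", "deteriorating", "critical", "recovered", "deceased"]
ACTIONS = ["fluids", "vasopressors", "antibiotics"]
TERMINAL = {"recovered": 1.0, "deceased": -1.0}  # "reward" / "punishment"

def step(state, action, rng):
    """Toy transition model: returns (next_state, reward)."""
    p_improve = {"stable": 0.6, "deteriorating": 0.4, "critical": 0.2}[state]
    if action == "vasopressors" and state == "critical":
        p_improve += 0.3  # assumption: this action helps most when critical
    if rng.random() < p_improve:
        nxt = {"stable": "recovered", "deteriorating": "stable",
               "critical": "deteriorating"}[state]
    else:
        nxt = {"stable": "deteriorating", "deteriorating": "critical",
               "critical": "deceased"}[state]
    return nxt, TERMINAL.get(nxt, 0.0)

def train(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Learn action values by trial and error, maximizing total reward."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state = rng.choice(["stable", "deteriorating", "critical"])
        while state not in TERMINAL:
            # epsilon-greedy: mostly exploit the best-known action
            if rng.random() < eps:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action, rng)
            best_next = (0.0 if nxt in TERMINAL
                         else max(q[(nxt, a)] for a in ACTIONS))
            q[(state, action)] += alpha * (
                reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# The learned strategy: the highest-valued action in each live state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)])
          for s in ("stable", "deteriorating", "critical")}
print(policy)
```

After training, the agent's policy simply picks the action with the highest learned value in each state, which is how "maximizing the virtual reward" turns into a concrete treatment strategy.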

Surpassing human performance

“Sepsis is a leading cause of death in intensive care medicine, presenting a significant challenge for doctors and hospitals. Early detection and treatment are crucial for patient survival,” explains Prof. Oliver Kimberger from the Medical University of Vienna. “To date, there have been limited medical breakthroughs in this field, underscoring the urgent need for new treatments and approaches. Therefore, exploring the potential of artificial intelligence in improving medical care becomes particularly intriguing. Utilizing machine learning models and other AI technologies offers an opportunity to enhance the diagnosis and treatment of sepsis, ultimately improving patient survival rates.”

The analysis reveals that AI systems already outperform humans: “AI strategies now yield higher cure rates compared to purely human decisions. In one study, the 90-day survival rate improved by approximately 3%, reaching approximately 88%,” notes Prof. Heitzinger.

Naturally, this doesn’t imply that medical decisions in an intensive care unit should be left solely to computers. However, AI can serve as an additional tool at the bedside, allowing medical staff to consult it and compare its suggestions with their own assessments. Furthermore, such AI systems can be highly beneficial in educational settings.

The importance of discussing legal considerations

“However, this raises crucial questions, particularly of a legal nature,” Prof. Heitzinger emphasizes. “The initial concern may revolve around determining liability for any mistakes made by the AI system. But there is also the opposite dilemma: what if the AI system made the correct decision, but the human chose a different treatment?”
