Mohit Goyal MD, FACR, FRCP Edin
Consultant, Division of Rheumatology, CARE Pain & Arthritis Centre, Udaipur, Rajasthan
“In future, there shall be only two types of people – those who have embraced AI and those who are embracing AI.”
Artificial intelligence (AI) is here to stay, and while it is unlikely to replace human intelligence anytime soon, those who use it diligently shall decide the rules of the game.
“Artificial intelligence has made significant advancements and has the potential to enhance many aspects of healthcare but it cannot fully replace humans for several reasons.”
Compassion, empathy, and doctor-patient rapport are central to healthcare, and these are qualities that artificial intelligence inherently lacks. Patient care involves complex and nuanced decision making that requires human judgement. Personalised care plans and handling unforeseen situations are currently beyond the realm of AI. There are also various ethical issues concerning the use of AI in healthcare that are poorly defined, inadequately understood and largely not accounted for. These concerns are summarized below:
Machine learning (ML) models deploy predictive analytics and require access to large volumes of user data. Loss of control over how the data are used and unauthorized access are continual threats that remain inadequately countered. On the other hand, these security and privacy risks have justifiably led firms to withhold data access, limiting the reach of AI.
AI is algorithm-based, and any biases in the collected data can influence algorithmic decision making. For example, serious, infrequent drug-related adverse events are more likely to be documented than mild, frequent events. An AI decision tree would then base itself only on the documented data (serious events, in this case).
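As a minimal, hypothetical sketch of this reporting bias (not from the article, using synthetic data and scikit-learn), a decision tree trained only on documented records "sees" a world in which serious events are far more common than they truly are:

```python
# Hypothetical illustration: synthetic data showing how reporting bias skews
# what a decision tree learns about adverse-event severity.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# True world: mild events are far more common than serious ones.
n_mild, n_serious = 900, 100
features = np.concatenate([rng.normal(0, 1, (n_mild, 2)),
                           rng.normal(1, 1, (n_serious, 2))])
severity = np.array([0] * n_mild + [1] * n_serious)  # 0 = mild, 1 = serious

# Assumed reporting bias: only 10% of mild events are documented, but 90% of serious ones.
documented = np.concatenate([rng.random(n_mild) < 0.10,
                             rng.random(n_serious) < 0.90])

# A model trained only on documented records learns from a skewed sample.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(features[documented], severity[documented])

print("True share of serious events:      ", severity.mean())
print("Share in documented training data: ", severity[documented].mean())
```

Running this, the documented sample contains roughly half serious events even though they make up only a tenth of all events, so the tree's notion of "typical" is shaped by what was recorded, not by what occurred.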
Hallucination, in which an entirely fictitious argument is generated without any supporting data, is a feature particularly of large language models (LLMs). This usually happens where data are lacking, and the output can appear so factual and coherent that only professionals with continual, dedicated training may be able to pick it up.
A major barrier to trust in AI is the lack of explainability of its decision-making and predictive processes. Explainability remains critical to the uptake of AI in healthcare.
Cybersecurity involves continuous learning, and because of AI's lack of explainability, any breaches of these systems may remain unexplained and hinder learning for the future.
Patients are required to provide informed consent for any planned healthcare measure, but the lack of clarity over the decision-making process in AI may mean patients are inadequately informed.
There is no clarity on responsibility for the outcomes of AI's actions. While the authorities may wish to hold the developer, the deployer or the end user accountable, the lack of understanding of the process prevents us from pinpointing who is precisely responsible.
Efficient allocation of resources and equitable access to care are something humans are still trying to balance. While AI algorithms can aid efficient hospital bed allocation and utilization and can triage patients, various ethical considerations may demand a different order of priority that AI does not yet account for.
In view of the above challenges, reasonable oversight and well thought-out regulations are required before AI can be confidently deployed in healthcare.