
Report highlights LLM cybersecurity threats in radiology

by Pieter Werner

A recent special report published in Radiology: Artificial Intelligence, a journal of the Radiological Society of North America (RSNA), outlines the cybersecurity risks posed by large language models (LLMs) in medical imaging, particularly within radiology. The authors emphasize the growing importance of addressing these risks as LLMs become increasingly integrated into healthcare systems.

LLMs such as GPT-4 and Gemini are being used across a range of clinical and research applications, including decision support, patient data analysis, drug development, and patient communication, where they can simplify complex medical terminology. Despite their potential benefits, the report highlights that these models introduce new cybersecurity vulnerabilities.

The lead author, Dr. Tugba Akinci D'Antonoli, a neuroradiology fellow at University Hospital Basel in Switzerland, stated that although LLM adoption in healthcare remains in its early stages, the pace of integration necessitates early attention to potential threats.

The report identifies two primary categories of vulnerability: AI-inherent and non-AI-inherent. AI-inherent threats include data poisoning, which manipulates a model's training data, and inference attacks, which exploit weaknesses in a model's output restrictions to extract information. Non-AI-inherent vulnerabilities relate to the broader deployment environment, where attackers might access patient data, disrupt services, or manipulate clinical imaging results.
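The report itself contains no code, but a toy sketch can make one of these AI-inherent threats concrete. The Python snippet below (an illustration on entirely synthetic data using scikit-learn, not taken from the report) demonstrates a simple loss-threshold membership-inference attack: because an overfit model assigns much lower loss to the records it was trained on, an attacker able to query per-record confidence can guess whether a particular record, such as a patient's data, was in the training set.

```python
# Toy membership-inference sketch (illustrative only; synthetic data,
# not radiology data). Overfit models assign much lower loss to records
# they were trained on, so comparing per-record loss to a threshold lets
# an attacker guess training-set membership.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, y_in = X[:1000], y[:1000]      # records the "victim" model is trained on
X_out, y_out = X[1000:], y[1000:]    # records it never sees

victim = RandomForestClassifier(random_state=0).fit(X_in, y_in)

def per_record_loss(model, X, y):
    """Cross-entropy loss of each individual record."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

loss_in = per_record_loss(victim, X_in, y_in)
loss_out = per_record_loss(victim, X_out, y_out)

# Attack: guess "member" whenever a record's loss falls below a threshold.
threshold = np.median(np.concatenate([loss_in, loss_out]))
print(f"members flagged:     {(loss_in < threshold).mean():.0%}")
print(f"non-members flagged: {(loss_out < threshold).mean():.0%}")
```

On this synthetic setup, nearly all training records fall below the threshold while far fewer unseen records do, illustrating why query access alone can leak information about what a model was trained on.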

The authors advise that radiologists and healthcare professionals implement comprehensive security protocols before deploying LLMs. Recommended strategies include standard cybersecurity practices such as multi-factor authentication and software patching, as well as institution-specific policies like anonymizing data inputs and using only approved tools.
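As one concrete illustration of the anonymization step, the sketch below shows a minimal regex-based redactor that strips obvious identifiers from free text before it is sent to an external tool. The patterns are hypothetical and deliberately simplistic; a real deployment would use a validated de-identification pipeline rather than ad-hoc rules.

```python
# Minimal illustration of anonymizing free text before it leaves the
# institution (illustrative only; production de-identification should
# rely on a validated tool, not ad-hoc regexes).
import re

# Hypothetical patterns for common identifiers in a radiology report.
PATTERNS = {
    "MRN":  re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "NAME": re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.\s+[A-Z][a-z]+\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

report = ("Mr. Smith, MRN 12345678, chest CT on 03/14/2024: "
          "no acute findings.")
print(redact(report))
# -> "[NAME], [MRN], chest CT on [DATE]: no acute findings."
```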

The report also advocates for regular cybersecurity training for healthcare personnel, analogous to existing mandatory training in areas such as radiation safety.

While emphasizing the need for vigilance, the authors suggest that patients need not be unduly concerned, citing ongoing regulatory developments and investment in cybersecurity infrastructure intended to safeguard personal health information as LLMs are adopted more widely.
