Artificial intelligence is pushing cybersecurity into an unprecedented era, bringing both benefits and drawbacks as it aids attackers and defenders alike.
Cybercriminals are using AI to launch more sophisticated and novel attacks at scale, while cybersecurity teams are using the same technology to protect their systems and data.
Dr. Brian Anderson is chief physician for digital health at MITRE, a federally funded nonprofit research organization. At the HIMSS 2023 Healthcare Cybersecurity Forum, he will speak on a panel session titled “Artificial Intelligence: Friend or Foe of Cybersecurity?” Other members of the panel include Eric Liederman of Kaiser Permanente, Benoit Desjardins of the University of Pennsylvania Medical Center and Michelle Ramim of Nova Southeastern University.
We interviewed Anderson to examine the impact of both offensive and defensive AI and explore the new risks posed by ChatGPT and other types of generative AI.
Q. How specifically does the presence of artificial intelligence raise cybersecurity concerns?
A. There are several ways AI raises serious cybersecurity concerns. For example, nefarious AI tools can pose risks by enabling denial-of-service attacks as well as brute-force attacks against specific targets.
AI tools can also be used for “model poisoning,” an attack that programmatically subverts machine learning models by injecting malicious code to produce false results.
Additionally, many freely available AI tools such as ChatGPT can be tricked through prompt engineering into creating malicious code. In the healthcare sector especially, there are concerns about safeguarding sensitive medical data such as protected health information.
Sharing PHI in the prompts of these publicly available tools may raise data privacy issues, and many health systems are struggling with how to secure their environments against this kind of data sharing and leakage.
Q. How does AI benefit hospitals and healthcare systems when it comes to protection from bad actors?
A. AI has been helping cybersecurity professionals identify threats for years. Many AI tools are now used to identify threats and malware, as well as detect malicious code injected into programs and models.
Using these tools with human cybersecurity professionals always in the loop ensures proper coordination and decision-making, keeping the health system one step ahead of bad actors. AI trained in adversarial tactics offers a powerful new set of tools that can help protect healthcare systems from optimized attacks by malicious models.
Generative models such as large language models (LLMs) can help protect healthcare systems by identifying and predicting phishing attacks and flagging harmful bots.
Finally, insider threats such as the exposure of PHI and sensitive data (for example, through the use of ChatGPT) are among the emerging risks that healthcare systems need to develop countermeasures for.
Q. What cybersecurity risks do ChatGPT and other types of generative AI pose?
A. The current GPT-4, future iterations of ChatGPT and other LLMs will become increasingly efficient at creating new code that can be used for malicious purposes. As mentioned earlier, these generative models also pose privacy risks.
Social engineering is another concern. The ability to generate convincing text and scripts, or to reproduce familiar voices, allows LLMs to be used to impersonate individuals and exploit vulnerabilities.
I have one final thought. As a medical doctor and informatics scholar, it is my sincere belief that the positive potential of AI in medicine far outweighs the negative, provided the right safeguards are in place.
As with any new technology, there is a learning curve in identifying and understanding where vulnerabilities and risks may exist. And in a critical field like healthcare, where patient health and safety are at stake, it is essential that these concerns are addressed as soon as possible.
We look forward to bringing together the HIMSS community in Boston, who are passionate about driving medical innovation while ensuring patient safety.
Anderson’s session, “Artificial Intelligence: Friend or Foe of Cybersecurity?”, begins Thursday, September 7, at 11 a.m. at the HIMSS 2023 Healthcare Cybersecurity Forum in Boston.
Follow Bill Siwicki’s HIT coverage on LinkedIn.
Email: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.