
GenAI in healthcare has sparked fresh concerns, according to a recent report from Netskope Threat Labs. Healthcare professionals upload private patient information to cloud storage and personal AI accounts at an alarming rate.
According to the report, 81% of data policy violations involved regulated healthcare data, exposing significant security gaps. Netskope researcher Gianpietro Cutolo highlights the need for enterprise-approved, secure solutions.
The Rising Privacy Risks of AI Integration
Healthcare has seen a sharp increase in AI integration. Applications with built-in genAI capabilities are used by 98% of organizations, and 88% use cloud-based genAI apps. However, this growth has also increased unauthorized uploads of protected health information (PHI) to cloud services such as Google Drive and OneDrive.
Although the share has decreased from 87% last year, 71% of healthcare workers still use personal genAI accounts. The risk remains high because many of these AI integrations do not comply with HIPAA regulations. As the use of genAI in healthcare grows, misuse of these platforms puts both credibility and compliance at risk.
GenAI in Healthcare Faces Growing Access Challenges
According to Netskope, one of the main causes of data leaks is “shadow AI,” the use of unapproved AI platforms. Even with progress, the majority of violations still result from unapproved individual use of genAI in medical settings. To combat this, many healthcare providers are switching to genAI apps approved by their organizations, which lowers the risk of data exposure by centralizing and monitoring usage.
To that end, 54% of healthcare organizations have implemented data protection measures, such as Data Loss Prevention (DLP) systems. DLP tools control what types of data can be shared and restrict access to risky AI apps. DeepAI, Tactiq, and Scite are among the genAI apps most frequently blocked over privacy concerns.
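To illustrate the kind of check a DLP policy applies before an upload leaves the network, here is a minimal sketch. The regex patterns and function names are illustrative assumptions, not Netskope's implementation; real DLP engines use far richer detection (identifier dictionaries, exact-data matching, ML classifiers).

```python
import re

# Illustrative patterns for regulated identifiers a DLP policy might flag.
# These are simplified examples, not production-grade PHI detectors.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\bDOB[:\s]*\d{2}/\d{2}/\d{4}\b", re.IGNORECASE),
}

def scan_for_phi(text: str) -> list[str]:
    """Return the names of PHI patterns found in an outbound payload."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

def should_block_upload(text: str) -> bool:
    """Block the upload if any regulated identifier is detected."""
    return bool(scan_for_phi(text))
```

A payload like "Patient MRN: 12345678" would be flagged and the upload blocked, while non-regulated text passes through.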
Ray Canzanese of Netskope notes that financial losses are not the only consequence of data breaches. The growing use of AI integration without sufficient oversight highlights the need for regulated solutions that meet legal requirements. Furthermore, as threats evolve, education and awareness will remain crucial to reducing insider threats and human error.
Can GenAI in Healthcare Be Safely Managed?
GenAI will keep growing in the healthcare industry, increasing the need for strong data security measures. Organizations are now focused on deploying AI systems that are both secure and effective. The industry is also working to take greater control over data handling, as evidenced by the 43% of companies testing local genAI infrastructure.
Experts advise inspecting all HTTP/HTTPS traffic for malware and phishing threats. Attackers now also frequently target cloud apps such as OneDrive and GitHub. This necessitates a multi-layered security approach that includes browser isolation, app blocking, and ongoing user education.
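The layered approach above can be sketched as a simple routing decision that a secure web gateway might make per destination. The app lists and the enterprise domain below are hypothetical examples for illustration, not a real policy.

```python
# Hypothetical policy lists; a real gateway would use vendor-maintained
# app catalogs and risk scores rather than hard-coded hostnames.
BLOCKED_GENAI_APPS = {"deepai.org", "tactiq.io", "scite.ai"}
APPROVED_GENAI_APPS = {"genai.example-enterprise.com"}  # assumed enterprise app

def classify_request(host: str) -> str:
    """Decide how to handle an outbound request based on its destination."""
    host = host.lower()
    if host in BLOCKED_GENAI_APPS:
        return "block"      # unapproved genAI app: deny outright
    if host in APPROVED_GENAI_APPS:
        return "allow"      # enterprise-approved app: permit and log
    return "isolate"        # unknown destination: route via browser isolation
```

Unknown destinations default to browser isolation rather than outright allow, reflecting the multi-layered posture the report recommends.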
As the healthcare industry continues to integrate genAI into its operations, all AI applications must be brought into secure enterprise frameworks. These tools must also comply with HIPAA regulations to ensure sensitive data is handled appropriately. By taking these precautions, organizations can lower the chance of data breaches and preserve patient trust in a technologically advanced future.
Balancing Innovation and Security in GenAI Use
To fully harness the power of genAI in healthcare, providers must adopt enterprise-grade AI platforms, implement strong data protection protocols, and educate their workforce. AI integration could revolutionize healthcare, but the risks of misuse are serious. Embracing AI's future while preserving data security requires striking a balance between innovation and compliance.