AI and HIPAA: Protect Your Practice

Dave McGill
11-10-2025

What You Need to Know

In the race to streamline clinical documentation and accelerate claims appeals, patient care clinics are increasingly turning to generative AI tools such as ChatGPT. A key benefit these platforms offer is the potential to reduce the time clinicians and support staff spend on claims-related administrative tasks.


However, before using a large language model for this purpose, you must ensure compliance with the privacy and security provisions of the Health Insurance Portability and Accountability Act (HIPAA). HIPAA is a federal law that sets national standards to protect patients' medical records and other individually identifiable health information from unauthorized use or disclosure. It requires healthcare providers, health plans, and their business associates to implement safeguards ensuring the privacy and security of protected health information (PHI).


HIPAA violations trigger mandatory reporting requirements to both the affected individuals and the U.S. Department of Health and Human Services. Depending on the facts, they can also give rise to civil and criminal liability.


What This Means for You

Most popular AI tools operate in the cloud and are not HIPAA compliant; the vendors of their consumer versions generally do not sign the business associate agreements (BAAs) that HIPAA requires. That means if you paste a patient's protected health information (e.g., name, address, or date of birth) into a prompt, you are making an unauthorized disclosure under HIPAA. To protect your clinic from this situation and the financial and legal risks that come with it, here are three things you should do:


  1. Require all employees to obtain formal approval from leadership before using AI for claims-related work. Make clear that unauthorized use of AI is grounds for discipline.
  2. Explore AI tools that (a) meet HIPAA security and privacy standards and (b) integrate with your EMR.
  3. If you permit use of web-based AI tools (e.g., ChatGPT, Claude, or Gemini), train your employees to understand that including any PHI in an AI prompt constitutes a HIPAA breach. Require them to use prompts that contain no patient-specific information constituting PHI; a simple automated screen, like the sketch below, can serve as a backstop to that training.
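To make the third recommendation concrete, here is a minimal sketch of the kind of automated screen a clinic might run on a draft prompt before staff paste it into a web-based AI tool. It is written in Python; the patterns and the screen_prompt function are our own hypothetical illustration, not a vetted compliance product. Pattern matching alone cannot detect every category of PHI, so a tool like this supplements training rather than replacing it.

```python
import re

# Patterns for a few common identifier formats. These are illustrative only:
# simple pattern matching cannot reliably detect all 18 HIPAA identifier
# categories (names and street addresses, for example), so treat this as a
# guardrail that supplements training, not a compliance guarantee.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone number": re.compile(r"\(?\b\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b"),
    "date (possible DOB)": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "MRN-style ID": re.compile(r"\bMRN[:# ]?\d+\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return a list of suspected PHI findings in a draft prompt."""
    findings = []
    for label, pattern in PHI_PATTERNS.items():
        for match in pattern.finditer(prompt):
            findings.append(f"{label}: {match.group()!r}")
    return findings

if __name__ == "__main__":
    draft = ("Draft an appeal letter for John Doe, DOB 04/12/1987, "
             "MRN 4482913, regarding the denied claim.")
    problems = screen_prompt(draft)
    if problems:
        print("Do NOT submit this prompt. Suspected PHI found:")
        for finding in problems:
            print("  -", finding)
    else:
        print("No obvious identifiers detected; review manually before sending.")
```

Run against the sample prompt, the sketch flags the date of birth and the MRN but, tellingly, not the patient's name. That gap is exactly why trained staff, not software, must remain the primary safeguard.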


While the power of large language models in claims-related documentation and appeals is clear, you must use them in ways that do not violate HIPAA and state privacy and security laws. Because these platforms have become popular so quickly, most general-purpose versions still lack the security safeguards these statutes require. You therefore need to take every reasonable step to ensure your staff's compliance with those legal requirements and guard against the threat of civil and criminal liability.