California Governor Newsom signed Senate Bill 1120 into law, known as the Physicians Make Decisions Act. At a high level, the Act aims to safeguard patient access to treatments by mandating a certain level of health care provider oversight when payors use AI to assess the medical necessity of requested medical services, and by extension, coverage for such medical services.
Generally, health plans use a process known as utilization management, pursuant to which plans review requests for services (also referred to as prior authorization requests) in an effort to limit utilization of insurance benefits to services that are medically necessary and to avoid costs for unnecessary treatments. Increasingly, health plans are relying on AI to streamline internal operations, including to automate review of prior authorization requests. In particular, AI has demonstrated some promise in reducing costs as well as in addressing lag times in responding to prior authorization requests. Despite such promise, use of AI has also raised challenges, such as concerns about AI producing results that are inaccurate, biased, or that ultimately result in wrongful denials of claims. Many of these concerns come down to questions of oversight, and that is precisely what the Act aims to address.
As a starting point, the Act applies to health care service plans and entities with which plans contract for services that include utilization review or utilization management functions (“Regulated Parties”). For purposes of the Act, a “health care service plan” includes health plans that are licensed by the California Department of Managed Health Care (“DMHC”). Significantly, the Act contains a number of specific requirements applicable to Regulated Parties’ use of an AI tool with utilization review or utilization management functions, including most notably:
- The AI tool must base determinations of medical necessity on:
  - The enrollee’s medical or other clinical history;
  - The enrollee’s clinical circumstances, as presented by the requesting provider; and
  - Other relevant clinical information contained in the enrollee’s medical or other clinical record.
- The AI tool cannot make determinations based solely on a group dataset.
- The AI tool cannot “supplant health care provider decision making.”
- The AI tool may not discriminate, directly or indirectly, against enrollees in a manner that violates federal or state law.
- The AI tool must be fairly and equitably applied.
- The AI tool, including specifically its algorithm, must be open to inspection for audit or compliance review by the DMHC.
- Outcomes derived from use of an AI tool must be periodically reviewed and assessed to ensure compliance with the Act, as well as to ensure accuracy and reliability.
- The AI tool must limit its use of patient data in a manner consistent with California’s Confidentiality of Medical Information Act as well as HIPAA.
- The AI tool cannot directly or indirectly cause harm to enrollees.
Further, a Regulated Party must include disclosures pertaining to the use and oversight of the AI tool in its written policies and procedures that establish the process by which it reviews and approves, modifies, delays, or denies, based in whole or in part on medical necessity, requests by providers of health care services for plan enrollees.
Most significantly, the Act provides that a determination of medical necessity may be made only by a licensed physician or a licensed health care professional who is competent to evaluate the specific clinical issues involved in the health care services requested by the provider. In other words, the buck stops with the provider, and AI cannot replace the provider’s role.
The Act is likely just the tip of the spear in terms of the AI-related regulation that will develop in the healthcare space. That is particularly true because the use of AI can have significant real-life consequences. For example, if an AI tool produces incorrect results in utilization management activities that lead to inappropriate denials of benefits, patients may not have access to coverage for medically necessary services and may suffer adverse health consequences. Similarly, disputes between health plans and providers can arise where providers believe that health plans have inappropriately denied coverage for claims, which can be particularly problematic where an AI tool has followed a pattern of decision-making that affected a large number of claims. All of the foregoing could have significant impacts on patients, providers, and health plans.
We encourage Regulated Parties to take steps to ensure compliance with the Act. Regulated Parties with questions or seeking counsel can contact any member of our Healthcare Group for assistance.
Also, consider registering for our upcoming webinar, How to Build an Effective AI Governance Program: Considerations for Group Health Plans and Health Insurance Issuers, on November 13, 2024.