Illinois lawmakers have introduced legislation aimed at regulating the use of artificial intelligence in health insurance, a response to a disturbing trend identified in a recent U.S. Senate report.
House Bill 0035, spearheaded by Rep. Bob Morgan, D-Highland Park, was referred to the Rules Committee on January 9, 2025. Known as the Artificial Intelligence Systems Use in Health Insurance Act, the bill seeks to curtail the potential misuse of AI systems that could unfairly deny insurance claims.
The push for regulatory oversight follows an October report by the U.S. Senate Permanent Subcommittee on Investigations, which revealed significant issues with the deployment of AI tools by national insurers. According to the report, UnitedHealthcare’s denial rate for post-acute care under Medicare Advantage plans surged from 10.9% in 2020 to 22.7% in 2022. This increase coincided with the insurer’s adoption of nH Predict, an AI model developed by naviHealth, a subsidiary of UnitedHealth Group.
nH Predict and similar algorithms analyze extensive data sets to predict healthcare needs, comparing individual patients to others with similar profiles. However, a JAMA Network article warns that the purported accuracy of these AI models might be overstated, potentially leading to erroneous healthcare denials.
The legislative initiative in Illinois comes as UnitedHealth, Cigna and Humana face lawsuits alleging misuse of the nH Predict model. According to court documents, the insurers are accused of pressuring case managers to adhere to the algorithm’s recommendations for patient care durations, despite objections from medical professionals and patients’ families.
One significant lawsuit claims that 90% of UnitedHealth’s AI-generated decisions are overturned upon appeal, highlighting a critical flaw in the reliance on automated systems for crucial healthcare decisions.
The Illinois bill, if passed, would require that all AI-driven adverse decisions undergo rigorous review by a human overseer with the authority to reverse unjust or erroneous AI conclusions, ensuring that AI advancements support, rather than undermine, patient care.