Complaint Raises Concerns About Fraud Detection System Depriving People of Benefits
Scammers increasingly target vulnerable consumers online, and the systems meant to stop them are struggling to keep pace. Now one of those anti-fraud tools is itself under scrutiny: the Electronic Privacy Information Center (EPIC) has filed a complaint with the Federal Trade Commission (FTC) against Thomson Reuters's "Fraud Detect," an automated fraud detection system deployed by state governments. According to EPIC, the software, used by government agencies in 42 states and the District of Columbia, including Illinois, Indiana, Iowa, and Nevada, incorrectly identifies fraud and violates federal rules.
A controversial algorithm
EPIC's complaint argues that Fraud Detect employs an "opaque, proprietary algorithm" fueled by sensitive personal data. The software is meant to alert benefits administrators to potentially fraudulent activity in public benefits programs such as unemployment insurance and the Supplemental Nutrition Assistance Program (SNAP). It combines historical public benefits data with an applicant's personal information to predict fraud and determine the appropriate level of assistance for recipients.
The data points used for fraud predictions include recipients' home addresses, shopping patterns, affiliated persons, social media profiles, and even credit scores, raising concerns about privacy and the potential misuse of personal information.
Unintended consequences
EPIC further contends that the adoption of Thomson Reuters's tool has resulted in millions of legitimate claimants being denied access to public benefits. The system categorizes applicants into five risk levels based on various metrics, such as high-dollar transactions, long-distance travel for shopping, and frequent balance inquiries.
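Neither the complaint nor Thomson Reuters discloses how those metrics are weighed; the algorithm is proprietary. Purely for illustration, a rules-based risk-tiering scheme of the kind described might look something like the sketch below. Every field name, threshold, and weight here is a hypothetical assumption, not the vendor's actual logic.

```python
# Purely illustrative sketch of a rules-based risk-tiering scheme of the kind
# the complaint describes. Fraud Detect's actual algorithm is proprietary and
# undisclosed; every field name, threshold, and weight below is an assumption.
from dataclasses import dataclass

@dataclass
class ClaimActivity:
    high_dollar_transactions: int   # purchases above some dollar threshold
    miles_traveled_to_shop: float   # distance between home and point of sale
    balance_inquiries: int          # balance checks in the review period

def risk_level(activity: ClaimActivity) -> str:
    """Map activity metrics to one of five hypothetical risk tiers."""
    score = 0
    if activity.high_dollar_transactions > 3:
        score += 2
    if activity.miles_traveled_to_shop > 100:
        score += 2
    if activity.balance_inquiries > 10:
        score += 1
    tiers = ["very low", "low", "medium", "high", "very high"]
    return tiers[min(score, len(tiers) - 1)]

# Example: a claimant who shops far from home and checks their balance often
# gets flagged, even though both behaviors can be entirely legitimate.
print(risk_level(ClaimActivity(high_dollar_transactions=1,
                               miles_traveled_to_shop=150,
                               balance_inquiries=12)))  # -> "high"
```

Heuristics like these illustrate the risk EPIC describes: behavior such as shopping far from home or checking a balance frequently can be entirely legitimate, yet still push a claimant into a higher risk tier.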
The complaint highlights a December 2020 incident in which the California Employment Development Department engaged Pondera, the company behind Fraud Detect, to review 10 million unemployment insurance claims. The algorithm flagged 1.1 million claims as "suspicious," and benefits were suspended for all of those claimants. A subsequent review found that just over half of the flagged claims, approximately 54%, were legitimate.
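Taking the complaint's figures at face value, a quick back-of-the-envelope calculation shows the scale of the alleged harm:

```python
# Back-of-the-envelope arithmetic using the figures cited in the complaint.
total_claims   = 10_000_000   # claims reviewed by Pondera for California's EDD
flagged        = 1_100_000    # claims marked "suspicious" and suspended
legit_fraction = 0.54         # share of flagged claims later found legitimate

wrongly_suspended = flagged * legit_fraction
print(f"Flag rate: {flagged / total_claims:.0%}")                     # 11%
print(f"Legitimate claimants suspended: ~{wrongly_suspended:,.0f}")   # ~594,000
```

In other words, roughly 11% of all reviewed claims were suspended, and close to 600,000 of those suspensions hit claimants whose applications were ultimately found to be legitimate.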
Compliance concerns
EPIC's complaint extends beyond allegations of inaccurate fraud predictions. It raises questions about compliance with federal standards for responsible automated decision-making systems.
Furthermore, EPIC asserts that Thomson Reuters violates Section 5 of the FTC Act by engaging, both directly and indirectly, in unfair and deceptive trade practices.
Industry response and regulatory trend
This complaint comes on the heels of the FTC's recent decision to prohibit drugstore chain Rite Aid from using AI facial recognition systems, citing a lack of reasonable safeguards. The move underscores a growing trend of regulators scrutinizing the deployment of AI and automated systems and demanding responsible, transparent practices in their development and use.
As the debate over the ethical and legal implications of automated systems continues, the outcome of the FTC's investigation into Thomson Reuters's Fraud Detect may set important precedents for the use of similar technologies in public benefit programs across the United States. EPIC's complaint raises serious questions about the accuracy, privacy implications, and regulatory compliance of a fraud detection system used by dozens of states, and the investigation's outcome could reshape how automated systems are deployed in public services.