American States Swing Into Action to Regulate AI as It Plays a Bigger Role in Crucial Decisions
Artificial intelligence (AI) has taken the world by storm over the past couple of years with the rise of ChatGPT and its rivals, and these tools now play an increasingly decisive role in everyday business decisions. But AI's pervasive reach into so many aspects of life raises concerns about discrimination, weak oversight, and a lack of regulation. To address this, lawmakers are now proposing legislation to regulate bias in AI.
States navigating uncharted waters
Suresh Venkatasubramanian, a co-author of the White House's Blueprint for an AI Bill of Rights and a professor at Brown University, emphasizes the pervasive influence of AI on people's lives. Despite its widespread integration, many AI systems have been found to exhibit biases, favoring specific races, genders, or income groups. The lack of effective regulation has spurred state-level initiatives to address these concerns.
The success or failure of these legislative efforts hinges on lawmakers navigating complex problems while engaging with an industry valued in the hundreds of billions of dollars and growing exponentially. More than 400 AI-related bills are under consideration this year, yet only a fraction of last year's proposals made it into law.
A closer look
The bills passed last year and those currently under consideration mostly target narrow aspects of AI. Roughly 200 of them address deepfakes, including proposals to ban the explicit deepfakes that plague social media platforms. Others seek to rein in chatbots such as ChatGPT, for example by preventing them from providing instructions for harmful activities.
Experts studying AI's discriminatory tendencies contend that states are already behind in establishing necessary guardrails. The bills specifically focus on AI's role in major decisions through "automated decision tools," which have become prevalent yet remain mostly hidden from public view.
Guarding against bias
Studies estimate that up to 83% of employers, including 99% of Fortune 500 companies, employ algorithms in the hiring process. However, the majority of Americans are unaware of these tools and their potential biases. AI systems can learn bias from historical data, and this issue extends to pivotal decisions, such as hiring and rental applications, potentially leading to discriminatory outcomes.
Several bills target the lack of transparency and accountability in AI decision-making. Bills aim to make it mandatory for companies using automated decision tools to conduct "impact assessments." These assessments would include details on how AI contributes to decisions, the data collected, a discrimination risk analysis, and an explanation of the company's safeguards. Some bills further propose informing customers about AI usage in decision-making, allowing them to opt out with certain conditions.
Craig Albright, senior vice president of U.S. government relations at BSA, a software industry lobbying group, notes that industry members generally support certain proposed measures, such as impact assessments.
Challenges on the road ahead
Despite these initial legislative efforts, progress has been modest, and some bills are already faltering. A bill in Washington state has stalled in committee, and a 2023 California proposal met a similar fate.
As lawmakers and voters grapple with AI's ever-growing influence, experts emphasize the need for comprehensive, transparent impact assessments to identify and correct biases. The current legislative proposals mark a positive step, but the conversation continues over how to balance innovation, oversight, and accountability in artificial intelligence.