AI Regulation

Artificial Intelligence (AI) is capturing the interest of state legislators as they scramble to understand and regulate this rapidly evolving technology. Their initial focus is usually on state government operations before moving on to restrictions in the private sector.

Lawmakers are exploring ways to safeguard their constituents from potential discrimination and other harms, while still allowing the technology to develop in fields such as science, medicine, business, and education.

In an effort to lead by example, Connecticut is starting with its own state government, as Connecticut state Sen. James Maroney explained during a May floor debate.

By the close of 2023, Connecticut aims to catalogue all AI applications within its government systems, with this information being made publicly accessible. Furthermore, starting next year, these AI systems will be under regular scrutiny to ensure they do not result in illegal discrimination.

Maroney, a leading AI expert in the General Assembly, anticipates that attention will shift toward private industry in the coming year. He will collaborate this autumn with legislators from states including Colorado, New York, Virginia, and Minnesota to draft prototype AI legislation with “broad guardrails,” centered on issues such as product liability and impact assessments of AI systems.

Artificial intelligence is evolving rapidly, and public adoption is on a similarly fast trajectory, so it is crucial to set accountability measures proactively, Maroney said in a later interview.

In total, over 25 states, Puerto Rico, and the District of Columbia have proposed AI bills this year. The National Conference of State Legislatures reports that by late July, 14 states along with Puerto Rico had passed resolutions or implemented legislation. This count excludes bills that focus on specific AI technologies like autonomous cars or facial recognition, which the NCSL is tracking separately.

Several states, including Texas, North Dakota, West Virginia, and Puerto Rico, have established advisory bodies to scrutinize and oversee the AI systems employed by their state agencies. Last year, Louisiana founded a technology and cybersecurity committee to assess AI’s influence on state operations, procurement, and policy.

Heather Morton, a legislative analyst at the NCSL who monitors AI, cybersecurity, privacy, and internet issues in state legislatures, stated that lawmakers are curious about who is using AI and how it’s being used. They are gathering data to better comprehend the AI landscape within their respective states.

Following a study by Yale Law School’s Media Freedom and Information Access Clinic that found AI being used for various tasks like welfare benefits allocation, student placement in magnet schools, and bail setting, Connecticut passed a new law. This law mandates regular examination of AI systems utilized by state agencies for any unlawful discrimination. However, the public remains largely uninformed about the details of these algorithms.

AI technology is spreading fast and mostly unchecked throughout Connecticut’s government, a situation that is not unique to the state, according to a group of AI experts.

The EU is currently leading the world in setting up safeguards for AI. Bipartisan AI legislation has been discussed in the US Congress, and President Joe Biden announced in July that his administration secured voluntary commitments from seven American companies to ensure their AI products are safe before release.

While Maroney expressed that ideally, the federal government would lead the way in AI regulation, he recognized that state legislatures can act faster.

Several state-level bills proposed this year have been narrowly tailored to address specific AI-related concerns. For example, a Massachusetts proposal would limit the use of AI by mental health providers, while a New York bill would restrict employers from using AI as an “automated employment decision tool” to screen job candidates.

North Dakota passed a bill clarifying the term “person” does not include artificial intelligence. In Arizona, legislation prohibiting AI software in voting machines was vetoed.

State lawmakers need to prepare for an increasingly AI-driven world, says Democratic Sen. Lisa Wellman from Washington. She plans to propose a bill next year that would require students to take computer science to graduate high school.

Audrey McAvoy in Honolulu, Ed Komenda in Seattle, and Matt O’Brien in Providence, Rhode Island, contributed to this report.

Frequently Asked Questions (FAQs) about AI Regulation

What is the focus of state lawmakers in relation to AI?

State lawmakers are focusing on understanding and regulating artificial intelligence technology. They’re initially targeting state government operations before moving on to restrictions in the private sector. Their goal is to safeguard constituents from potential discrimination and other negative impacts without impeding the technology’s advancement in fields such as science, medicine, business, and education.

What is Connecticut’s plan regarding AI technology?

Connecticut aims to catalogue all AI applications within its government systems by the end of 2023 and make that information publicly accessible. Starting the following year, these AI systems will undergo regular review to ensure they do not result in illegal discrimination.

What are some examples of the AI-related bills proposed by different states?

Some state-level bills proposed this year have been narrowly tailored to address specific AI-related concerns. Proposals in Massachusetts would place limitations on mental health providers’ use of AI and ensure that workers retain control over their personal data. A proposal in New York would restrict employers from using AI as an “automated employment decision tool” to screen job candidates.

What steps are being taken to understand the use of AI within states?

Several states have established advisory bodies to scrutinize and oversee the AI systems employed by their state agencies. Additionally, lawmakers are gathering data to better comprehend the AI landscape within their respective states. They are interested in who is using AI and how it’s being used.

What is the role of the federal government in AI regulation according to Maroney?

Maroney expressed that ideally, the federal government would lead the way in AI regulation. However, he also recognized that state legislatures can act faster and are often at the forefront of such regulation.


5 comments

JanineO August 5, 2023 - 10:22 am

feels like everything’s about AI these days, just hope they don’t stifle innovation in the process…

Jake_Martin August 5, 2023 - 11:02 am

really hope these AI regulations come in quick. Its amazing but also kinda scary how fast things are changing.

RachelTechGuru August 5, 2023 - 1:57 pm

It’s important they get this right. AI can bring lots of benefits, but we have to think about the potential harm too. Go lawmakers!

Liam.P August 5, 2023 - 3:57 pm

So AI’s spreading fast in government systems, huh? Hope they know what they’re doing…

EthanCoder August 6, 2023 - 1:24 am

Computer science as a graduation requirement? interesting move. might not be a bad idea with all this AI stuff going on.
