States becoming venue for AI regulation in the US

24 February 2023 23:41 by Jenn Brice

AI regulation

Five new US state privacy laws go into effect this year, with more in the works across the country, and rules about how artificial intelligence handles personal information are shaping up to be a similarly crowded field.

Privacy laws in California, Colorado, Connecticut, and Virginia give consumers the right to opt out of automated processing of their personal information by AI. Utah’s law, typically considered the least prescriptive, is the only one of the five taking effect this year that does not.

The state privacy laws taking effect this year include a right to keep increasingly sophisticated AI and machine-learning technologies from using a consumer’s personal information to make decisions about them, but companies that develop or deploy these technologies are still waiting to see how the rules will play out.

The laws

The patchwork of US privacy laws that gives businesses compliance headaches is evident in the laws’ varied provisions on AI. These general privacy laws give users more transparency and choice over how machine learning weighs their personal data in critical areas such as healthcare, education, housing, and employment.

The Connecticut and Virginia statutes give consumers the right to opt out of having their personal information profiled in automated decisions “that produce legal or similarly significant effects concerning the consumer.” Virginia also requires businesses to conduct data protection assessments when processing personal data for profiling.

Profiling is the process of using AI on consumers' data to predict other information, such as their economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.

Colorado likewise lets consumers opt out of profiling in decisions that are solely automated. But Colorado adds more nuance, distinguishing between “human involved” and “human reviewed” automated processing.

Colorado consumers can also opt out of human reviewed automated processing, but businesses do not have to act on opt-out requests for human involved automated processing.

For processing to be considered human involved, a human must give “meaningful consideration” to the system’s data use and output and have the “authority to change or influence” the outcome. Having a human review the system “with no meaningful consideration” of its output does not qualify, the Colorado rules propose.

The Colorado attorney general’s office heads privacy rulemaking under the Colorado Privacy Act and released a third draft of proposed rules in January.

The California Privacy Rights Act, the 2020 ballot initiative that updated the 2018 California Consumer Privacy Act, requires businesses to explain the reasoning behind automated decision-making and then give consumers the right to opt out of it.

The California Privacy Protection Agency, which derives its rulemaking authority from the CPRA, is gearing up for a new rules package that will address automated decision-making systems. The agency began gathering public comment on the topic this month.

Each of these provisions takes inspiration from the EU’s General Data Protection Regulation, which says a “data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

The Virginia statute went into effect in January, and Connecticut’s will in July. Businesses are watching California and Colorado, the more prescriptive states, which have yet to finalize their rules on automated decision-making.

“There's a lot of regulation, existing and proposed, that could affect AI companies, both the companies involved in developing AI products and the companies involved in using AI products,” said Maneesha Mithal, a privacy and cybersecurity lawyer. “It is a lot to parse through existing legislation that may apply to AI, AI specific legislation at the federal level that's been proposed, and at the state level that's been proposed. It's a real challenge.”

Mithal said there are still open questions that require final regulations and additional guidance, such as how exactly a company must accommodate consumers who opt out, and what the alternatives for those consumers are.

New rules

Other emerging laws governing AI deal with how the technology processes personal information in particular use cases.

Recent federal efforts focused on increasingly sophisticated applications of AI, such as the White House’s AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework, offer principles to inform innovation.

But regulators at the state level have more enforceable approaches in mind. The privacy laws in California, Colorado, Connecticut, and Virginia also address artificial intelligence, and a number of states and cities are introducing or enacting bills specific to the technology.

Illinois and Maryland laws regulating the use of facial recognition technologies in hiring took effect in 2020. A New York City law restricting the use of automated employment decision tools becomes enforceable in April. A Colorado law barring discriminatory processing of consumer data in insurance practices took effect this January.

The District of Columbia is exploring broad action on algorithmic discrimination. The Stop Discrimination by Algorithms Act was introduced in 2021 by then-DC Attorney General Karl Racine. DC Council Committee Chair Robert White said he is committed to moving the bill forward this year.

The bill would prohibit DC businesses from using algorithms to make decisions based on personal characteristics such as actual or perceived race, religion, national origin, sex, or disability if the decision would deny the consumer “important life opportunities.” It would also give consumers a private right of action.

At the national level, the Federal Trade Commission has long pursued violations involving automated decision-making tools under the Fair Credit Reporting Act of 1970 and the Equal Credit Opportunity Act of 1974. But as automated decision-making has become synonymous with newer AI and machine-learning technologies, consumer advocates have grown increasingly wary of algorithmic harm and bias.

When the FTC nodded to this broader issue of algorithmic harms in its advance notice of proposed rulemaking on commercial surveillance and data security, companies argued that a rule governing AI would overstep the agency’s authority under Section 5 of the FTC Act.

Most of the FTC’s critics said that sort of “AI rule” would have to come from Congress. Representative Ted Lieu, a Democrat from California, introduced a resolution urging lawmakers to work on legislation to “ensure that the benefits of AI are widely distributed and that the risks are minimized.” But a federal law to end the patchwork would, of course, require bipartisan support.
