
AI Regulation: Lawless New Frontier? Or Patchwork of Borrowed Rules and Quasi-Legal Obligations?

Published on Aug 26, 2024

With Artificial Intelligence (AI) still relatively novel, governing regulations have not quite caught up to the technology. While that may be reasonable enough to accept in an abstract sense, what does it really mean for companies and consumers? Is AI truly a lawless new frontier, the perfect backdrop for profit-hungry AI companies to take advantage of the lapse in regulations? Or are anticipated regulations acting as quasi-legal restrictions imposed on companies?

Consumers should be relieved to know that the lack of comprehensive AI regulations does not mean that their data is wholly unprotected against exploitation. Unfortunately for companies, achieving and maintaining compliance may now require more proactivity than before. AI is currently governed by a patchwork that falls largely into two categories: "borrowed" existing rules that still relevantly apply, and strongly signaled regulations that are anticipated to take effect soon.

First, companies must remember that many existing standards still apply in the AI context, even though they are not explicitly AI-focused. Specifically, because AI systems are trained on data sets, general data use rules must still be heeded. Those rules require that data remain compliant throughout the entirety of its use; it is not enough for data to be permissibly collected and then freely used for any purpose thereafter. Accordingly, when a change in substantive governing rules shifts the legal status of data already in use, companies must take all necessary steps to align that use with the newly enacted standards. The practical implication is that companies whose current use of consumer data in AI systems later becomes inconsistent with future regulations may find themselves subject to consequences. While this may seem to depart from the well-settled legal principle forbidding retroactive punishment of completed acts that were lawful when performed, the apparent anomaly stems from the nature of consumer data used in AI systems.

Because AI systems must be trained on data sets, companies will often train their systems on consumer data to ensure that the result is uniquely tailored to their specific needs. In these cases, the consumer data becomes deeply embedded in, and inextricably linked with, the very function and output of the AI system. Data collection and continued use is thus not an isolated, completed "act," but rather becomes baked into all future use of the system. In essence, the future compliance of the entire system is determined by the future compliance of the data on which it was trained. Even if collection and use are permissible under current AI regulations (or the lack thereof), the data's later non-compliance with newly enacted standards could jeopardize the compliance of the system as a whole. Consumer data, and any system trained on it, will not be "grandfathered in" under future regulations merely because it was compliant at the time of initial collection and use. This is why companies must evolve beyond a merely responsive approach to maintaining compliance with current standards and instead look to proactively achieve compliance with the anticipated regulatory trajectory.

While it may seem quasi-legal, at best, for companies to act upon anticipated standards, little speculation is actually required, as relevant authorities have provided ample notice of regulatory intent. Domestically, California's Privacy Protection Agency expects to "set the pace" for regulation of AI systems trained on consumer data. AI regulation is on track to be largely state-driven, and 44 states either already have AI-related bills pending or have expressed intent to introduce them promptly. Additionally, the Federal Trade Commission (FTC) has taken a firm stance against companies sacrificing the integrity of consumer privacy to satiate "data-hungry" AI systems, and it is already holding companies to a much higher standard of consumer protection when they use consumer data for AI purposes. Although, in the absence of a federal standard, the FTC and other federal agencies must attempt to regulate AI through existing law, the FTC's recent cases and Notice of Proposed Rulemaking indicate an intended trend toward stricter AI regulation. Internationally, the EU's General Data Protection Regulation, widely considered the strictest data regulation, already limits the scope of consumer data used in AI, and the EU has additionally taken steps to enact AI-specific regulations.

Although many of these standards are not yet enacted and thus not technically binding, the general data rules of ongoing compliance, in tandem with the trajectory toward stricter policies, should warn companies that just because they could exploit consumer data for AI purposes does not mean that they should (for their own sake and that of consumers).

As a final note, companies should remain vigilant in observing the oncoming reverberations of the Supreme Court's recent decision overturning the long-standing Chevron doctrine. Under that doctrine, courts deferred to the reasonable regulatory decisions of the federal agencies that possessed the technical expertise to make them. Now, just as AI is "radically chang[ing] the fabric of virtually every sector," relevant regulations may be subject to more non-expert influence than ever.

Alexis Furgal is a second-year law student at Wake Forest University School of Law. She holds a B.A. in International Studies with a minor in Italian from Virginia Tech.

Reach Alexis here:

LinkedIn: www.linkedin.com/in/alexis-furgal-a049a2262

Email:  [email protected]
