As is the case with most emerging technologies, developments in artificial intelligence (AI) are quickly outpacing regulations, and with this comes a variety of important legal considerations. While there has been a push for more robust regulatory oversight, regulations on a federal level have yet to materialize. Natasha Allen, Co-Chair for Artificial Intelligence within the firm’s Innovative Technology Sector, looks at the current legal and regulatory issues facing AI startups and what to watch for moving forward.

In terms of the current regulatory landscape, Natasha describes something closer to a patchwork quilt than a seamless framework. While states have moved quickly to address AI-related risks, the federal government has yet to enact definitive legislation. She zeroes in on two themes that have been persistent under the Biden Administration: responsible AI and transparency/explainability, emphasizing the growing demand for human oversight of AI outputs.

As AI systems become increasingly autonomous, there are concerns about liability and accountability for decisions made by AI models. Natasha emphasizes that the same legal principles apply whether or not you are using AI, and the responsible party will bear some measure of accountability. She stresses the importance of curating appropriate inputs and being mindful of the resulting outputs, practices that move companies in the right direction. She points to the National Institute of Standards and Technology's (NIST) frameworks as invaluable resources for assessing AI-related risks.

AI startups should also consider structuring their agreements to address issues such as the use of proprietary information and ownership of AI-generated content, among others. Natasha notes that many companies are requesting clauses that specifically state when their proprietary information is being used to train large language models (LLMs). Companies should be aware of these new requirements and preemptively state how data is being used in their agreements. As it relates to copyright, companies should keep a record of content that is generated organically or with some form of AI assistance versus content produced solely through the use of AI.

Given the rapid advancements in AI and the evolving legal landscape, it is also critical for startups to stay updated on changes in regulations and ensure ongoing compliance with new requirements. Natasha highlights the importance of staying closely connected with your legal counsel to help you understand changes in the legislation. She also points out that Foley has an extensive resource for tracking AI legislation passed in various states.

When it comes to specific AI issues, Natasha predicts that in 2024 there will be a heightened focus on combating the misuse of deep fake technology in influencing elections. She also anticipates continued efforts by the federal government to establish comprehensive AI regulations alongside other countries finalizing their own AI laws.

There is a delicate balance between AI innovation and regulation, and it will be important for regulatory efforts to provide essential guardrails without stifling innovation. For startups developing this technology and the companies implementing it, keeping up with new legislation in the US and legal advancements across the globe will be key.

Author Natasha Allen
