An AI model recently scored high enough on the bar exam to be admitted in most states. AI is booming, and with OpenAI's ChatGPT and other emerging tools bringing AI systems into our everyday conversations and lives, it has become the latest catchphrase. In the past few months, generative AI has dominated mainstream consciousness, and the technology is moving fast: OpenAI just announced a new version of its software, GPT-4.
As with most significant technological developments, regulation typically lags behind advancement and adoption. So, while AI grows more popular and lawmakers play catch-up, the U.S. Chamber of Commerce recently released a report on generative artificial intelligence calling on lawmakers to craft regulations for the ballooning technology. Meanwhile, the FTC has recently cautioned marketers not to make unsubstantiated claims about AI-powered products. The agency has warned companies about its AI concerns with some frequency in the past, particularly on issues relating to discrimination. Its most recent guidance focuses explicitly on advertising, advising companies to be transparent about how their AI products work and what the technology can actually do.
According to the agency, companies relying on AI may be subject to FTC enforcement if they exaggerate what their products can do, promise more than they can deliver, or fail to account for reasonably foreseeable risks to consumers. The agency is watching closely how this technology develops and will use its enforcement authority to penalize conduct it views as unfair or deceptive.
In recent years, the number of products and services powered by artificial intelligence has grown, enabling tasks and results that were not previously possible. Even so, some customers have been hesitant to trust these services, wary of the potential consequences of using the technology.
However, much evidence suggests these technologies will soon be in widespread use. Some service providers have used AI algorithms to build products and services that are fully customizable, so end-users can tailor them to their own needs and interests.
End-users can use these technologies to personalize their experiences or to create things that are only possible because they have shared information with the service provider. This matters because it gives end-users a more hands-on role in shaping the product, which in turn makes them more likely to be satisfied with it.
Finally, some may use AI technologies to make money. For example, an end-user might use AI to prepare product reviews, which may be genuine, fake, or a mix of the two. It remains to be seen where this trend will lead.
The FTC's recent warning and its increased enforcement activity surrounding AI signal that the agency is serious about regulating this technology. There is also a growing push for Congress to enact AI regulations. Companies operating in this space must understand how their practices could draw FTC scrutiny and what regulatory hurdles may lie ahead.