As humans, we have a long history of interacting with objects that blur the line between natural and artificial. Haven't we all heard stories of a mound of clay turning into a creature, or of a puppet coming to life? So is it possible that we are conditioned to believe advertisements for shiny new gadgets that purport to be "powered by AI"?
The FTC, however, is skeptical of the term. The agency writes, "One thing is for sure: it's a marketing term." FTC officials go on to add, "one thing we know about hot marketing terms is that some advertisers won't be able to stop themselves from overusing and abusing them."
The FTC has investigated numerous companies in the AI (artificial intelligence) and automated decision-making space and has brought numerous cases alleging violations of the law while enforcing AI-related regulations. The FTC's regulations and enforcement guidelines demand that any application of AI involve responsible, fair, and explainable practices and demonstrate a high level of accountability. These guidelines aim to help businesses better manage the consumer risks posed by AI applications and algorithms.
Let's get to the heart of the matter and understand what the FTC advises when it comes to AI claims from any organization.
The FTC's Guidelines on AI Applications
Very recently, on February 27, 2023, the US Federal Trade Commission (FTC) published a set of guidelines from its Division of Advertising Practices on advertising claims for AI applications. The latest FTC guidance emphasizes that AI tools must also "work as advertised," whereas earlier posts focused on avoiding automated tools that tend to produce biased or discriminatory outcomes.
Just a few days earlier, on February 18, Sam Altman, CEO of OpenAI, the company behind ChatGPT, tweeted that future regulation of AI is "critical" until the technology can be adequately understood. He said people would need time to adjust to "something so big" as AI.
Again, on March 12, 2023, Forbes published a piece declaring that "The Federal Trade Commission aims to bring down the hammer on those outsized unfounded claims about generative AI, ChatGPT and other AI, warns AI ethics and AI law."
All this suggests that the FTC's guidance this year came in response to the sudden spike in AI research and development and the burgeoning market for generative AI products such as ChatGPT, DALL-E, Uberduck AI, Stable Diffusion, Midjourney, and more.
How Do You Ensure You Meet the FTC Guidelines?
AI fever has spread to every possible product out there, from toy cars to chatbots and everything in between. The reality, however, is that most products making tall AI claims may not even work as advertised. Although they may not cause any major harm, their effectiveness is often questionable.
So before you make AI claims for your products, the FTC advises you to consider the following questions.
1. Are you overstating your AI product's capabilities?
Are you claiming that your product can deliver over and beyond the existing capabilities of similar AI products or technologies? For example, you should know that accurately predicting human behavior is still beyond the scope of machines. So it would be deceptive to claim that your product can make trustworthy predictions unless the claim is backed by scientific evidence, or unless it applies only conditionally, to certain groups of users or in regulated environments.
2. Are you making claims that your AI product will outperform conventional alternatives?
The FTC warns that any claim placing the capabilities of an AI product above those of a non-AI product with similar functionality must be substantiated with sufficient data. For instance, you might need to disclose comparative performance scores to prove superior efficiency. If, for any reason, you are unable to provide such testing data, you should refrain from claiming superiority.
3. Are you cognizant of the associated risks?
Although the phrase "reasonably foreseeable risks and impact" may seem vague, your legal team can explain why you should not stretch its meaning in any way. In practice, this means you should know the likely consequences and risks of releasing your AI product to the public. You can't place all the blame on the technology developer if the product fails or produces biased results, and you cannot deny responsibility by claiming that you don't understand the technology or don't know how to test it.
4. How much “AI” does the product actually use?
The FTC advisory recommends avoiding "baseless claims that your product is AI-enabled" or "AI-powered." Given the FTC's admission that the meaning of "artificial intelligence" is ambiguous, it is difficult to ascertain what kind of evidence the FTC will consider sufficient for such claims. However, the advisory emphasizes that "merely using an AI tool in the development cycle is not the same as a product that has AI in it." It also implies that products can reasonably be categorized as AI products if their fundamental features or functions "use computation to accomplish duties such as predictions, decisions, or recommendations."
In Conclusion
This is not the first time the FTC has issued such guidance, advising businesses to keep their AI-related practices in line with well-established FTC consumer protection principles. This includes being honest and fair when utilizing AI.
FTC investigations often follow new staff guidance, so marketers should be very careful with their claims and ensure they are not overstating what their AI algorithms can do.
Opporture understands how important it is to keep AI claims in check so businesses can use this technology ethically and responsibly. With the leading AI company as your partner, you can be confident that your AI use will be responsible, ethical, and effective. Contact us today to learn more about how we can help you leverage AI in your business.