FTC provides AI guidance to gaming industry to avoid deception

Video games have come a long way, evolving from simulated table tennis to today's fully immersive virtual reality titles that take advantage of biometrics and artificial intelligence (AI). While early uses of AI in games were simple, such as creating more realistic non-player characters, AI now enables much more: AI-powered tools can outsource quality assurance, generate insights from player data, or better gauge player value to maximize game retention and revenue. Now is the time for companies to keep in mind regulators' increased focus on the use of AI.

In the United States, the FTC has provided guidance on using AI while avoiding unfair or deceptive marketing practices. Applied to the gaming industry, the FTC's key considerations (which we've also summarized in our sister blog) include:

  • Accuracy. The AI components of a game or service should be tested before implementation to confirm that they work as intended.

  • Accountability. Businesses need to consider the impact of using AI on end users. External experts can be engaged to help confirm that the data used is unbiased.

  • Transparency. End users should be informed that the company may use AI; it should not be used covertly. Individuals need to know what data is collected and how it will be used.

  • Fairness. To advance concepts of fairness, the FTC recommends giving people the ability to access and correct their information.

Detailed state privacy laws (such as the upcoming laws we discussed in our sister blog, including those in California, Colorado, and Virginia) will also affect businesses' use of AI. These laws require companies to give individuals opt-out rights regarding the use of AI in automated decision-making and profiling. They also require data protection impact assessments for processing activities that present heightened risk, such as automated processing. Consistent with the FTC's transparency principle, the California CPRA also requires that responses to access requests include information about the logic involved in, and the results of, these decision-making processes. NIST (the National Institute of Standards and Technology, a U.S. federal agency) has also proposed an AI risk management framework.

AI has come under similar scrutiny in Europe (discussed in our sister blog), where the emphasis has been on transparency and oversight when using AI, particularly where automated decision-making occurs. As with the state laws mentioned above, a risk-based approach will be required. A proposed AI regulation, the EU AI Act, is currently under consideration and would address these concerns.

Copyright © 2022, Sheppard Mullin Richter & Hampton LLP. National Law Review, Volume XII, Number 119
