To safeguard both consumers and competition, the watchdogs have laid out a set of shared principles.
Regulators in the United States and Europe have outlined the “shared principles” they intend to follow to “protect competition and consumers” in the field of artificial intelligence. “Guided by our respective laws, we will work to ensure effective competition and the fair and honest treatment of consumers and businesses,” the Department of Justice, the Federal Trade Commission, the European Commission, and the UK’s Competition and Markets Authority (CMA) said jointly.
“Technological inflection points can introduce new means of competing, catalyzing opportunity, innovation, and growth,” the agencies wrote in a joint statement. “Accordingly, we must work to ensure the public reaps the full benefits of these moments.”
Drawing on their experience in related markets, the authorities identified three principles as essential to protecting competition in AI: fair dealing (meaning that large players in the industry avoid exclusionary practices), interoperability, and choice.
The agencies also flagged potential threats to competition, including deals between large companies in the market. “These partnerships and investments could be used by major firms to undermine or co-opt competitive threats and steer market outcomes in their favor at the expense of the public,” they said, even though such arrangements, which are already widespread in the sector, may not harm competition in every instance.
The statement names additional threats to competition, including the consolidation or expansion of market dominance in AI-related areas and the “concentrated control at key inputs,” which the agencies describe as a small number of corporations wielding outsized influence over the AI sector through their control and supply of “specialized chips, substantial compute, data at scale, and specialist technical expertise.”
The Competition and Markets Authority, the Department of Justice, and the Federal Trade Commission have all said they will be on the alert for risks that artificial intelligence may pose to consumers. According to the statement, it is essential that consumers be kept informed about how AI is incorporated into the products and services they buy or use. “Firms that deceptively or unfairly use consumer data to train their models can undermine people’s privacy, security, and autonomy,” the statement reads. “Firms that use business customers’ data to train their models could also expose competitively sensitive information.”
These are fairly broad statements about the agencies’ common approach to promoting competition in AI, but since each operates under its own laws, the statement could hardly spell out the specifics of how they will regulate. At a minimum, it puts companies working in generative AI on notice that regulators are watching closely, even amid rapidly accelerating breakthroughs in the field.