Regulation Comes to AI

by Chase Young, Cornell ’24 (contributions by Gabriel Mallare, Cella Kamarga, Yixuan Wu)


EU Parliament Buildings (CC BY-SA 2.0 stevecadman)

Advances in generative AI have captivated businesses and the general public. Tools like OpenAI’s GPT-4 produce realistic images and text that reads as though a human wrote it. While many executives have wondered how best to apply generative AI to their businesses, there is now a new wrinkle: regulation. On March 13, 2024, the European Parliament passed the Artificial Intelligence Act, which sets rules for generative AI and other uses of machine learning. The bill is the most comprehensive effort to regulate artificial intelligence to date, but it will not be the last: India, Mexico, and other emerging markets are considering AI regulations of their own.

The Artificial Intelligence Act

The European Union’s bill bans several AI practices, including systems that purposely manipulate people; use biometric data to deduce personal information; scrape facial images from the internet; or infer emotions in the workplace. Violations of these bans can draw fines of up to 35 million euros or 7% of worldwide annual turnover, whichever is higher. The bill also designates as “high-risk” several applications of AI in areas like critical infrastructure, education, employment, and law enforcement. These systems must meet risk management and documentation requirements, and companies that violate them can face fines of up to 15 million euros or 3% of worldwide annual turnover. While most compliance deadlines phase in over the 36 months after the Act takes effect, executives with exposure to the EU should start thinking now about their AI and machine learning use cases.
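For a rough sense of how the Act’s two-tier penalty structure works in practice, the short sketch below computes the maximum potential fine for a violation. The tier caps follow the figures above; the function name, tier labels, and example turnover are illustrative assumptions.

```python
# Illustrative sketch of the EU AI Act's two-tier fine structure.
# Tier caps follow the figures cited above; everything else here
# (names, example values) is hypothetical.

def max_potential_fine(worldwide_annual_turnover_eur: float,
                       violation_tier: str) -> float:
    """Return the maximum fine in euros for a given violation tier.

    Fines are the higher of a fixed cap or a share of worldwide
    annual turnover.
    """
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),   # banned AI uses
        "high_risk_obligation": (15_000_000, 0.03),  # high-risk rules
    }
    fixed_cap, turnover_share = tiers[violation_tier]
    return max(fixed_cap, turnover_share * worldwide_annual_turnover_eur)

# A firm with EUR 2 billion in turnover that violates a ban faces up to
# max(35M, 7% of 2B) = EUR 140 million.
print(max_potential_fine(2_000_000_000, "prohibited_practice"))
```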

Current Regulation in Emerging Markets

Emerging markets from the BRICS countries (Brazil, Russia, India, China, and South Africa) to Mexico are actively considering AI regulation. Brazil is evaluating a bill (2338/2023) modeled after the EU’s: it would ban AI practices including manipulation and social scoring (risco excessivo, “excessive risk”) and designate other AI practices as high-risk (alto risco). Russia has announced an AI ethics code. India recently issued guidance on AI, including that platforms should not “permit any bias or discrimination” and that “under-tested/unreliable artificial intelligence models” require the “explicit permission of the government”; formal legislation is expected to follow. China has issued “interim measures” for artificial intelligence that delegate responsibility to industry regulators, and it is working on formal legislation that would involve a permit system. Mexico is considering a bill that would create an autonomous council of citizens and technology experts to make decisions on AI ethics. As nations continue to prioritize AI in their legislative agendas, businesses should start preparing their strategies for addressing AI regulation.

Business Considerations for AI Products in Global Markets

Should businesses censor an AI product to gain government approval?

Some governments may require approval of models or ask that AI models align with national values. Aligning AI with a given set of values is possible, but it is not guaranteed. In some markets, businesses may need to ask whether it is worth launching a product that no longer aligns with their own values.

Should businesses launch a product where the data source is unclear or insecure?

The EU regulation requires special care in the use of personal data for AI models. When training new models, does the business know where the data came from? Could a business unknowingly be using a dataset that contains copyrighted content or highly personal information?

When do machine learning use cases become high-risk?

A business may already be using AI in the form of machine learning. The EU now labels specific machine learning uses, such as biometrics, education, employment, critical infrastructure, and creditworthiness assessment, as “high-risk.” Because the EU law is likely to influence legislation elsewhere, businesses should audit how they use machine learning and whether those uses would be considered high-risk; a simple starting point for such an audit is sketched below.
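As a starting point for such an audit, the sketch below tags each use case in an inventory against the high-risk areas named above. This is a minimal, assumed approach: the category list is abbreviated and the keyword matching is purely illustrative.

```python
# Minimal sketch of an internal audit that flags machine learning
# use cases against the EU AI Act's high-risk areas (the list here
# is abbreviated and the keyword matching is illustrative only).

HIGH_RISK_AREAS = {
    "biometrics": {"face recognition", "biometric", "fingerprint"},
    "education": {"exam scoring", "admissions", "proctoring"},
    "employment": {"resume screening", "hiring", "promotion"},
    "critical infrastructure": {"power grid", "water supply", "traffic control"},
    "creditworthiness": {"credit scoring", "loan approval"},
}

def flag_high_risk(use_case_description: str) -> list[str]:
    """Return the high-risk areas whose keywords appear in a description."""
    text = use_case_description.lower()
    return [area for area, keywords in HIGH_RISK_AREAS.items()
            if any(keyword in text for keyword in keywords)]

# Example audit over a hypothetical inventory of internal use cases.
inventory = [
    "Resume screening model for the hiring pipeline",
    "Product recommendation engine for the e-commerce site",
]
for case in inventory:
    matches = flag_high_risk(case)
    print(f"{case}: {', '.join(matches) if matches else 'no match'}")
```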

Will businesses do more than regulations require?

Businesses can take accountability by committing to ethical AI principles such as transparency, security, and privacy. Notable companies from Google to Walmart have put forth pledges on how they will use AI. A public pledge can help reassure customers and the public about how a company is incorporating AI.

Research Framework

The Emerging Markets Institute will continue to investigate this area, focusing in particular on the degree of consumer protection in different emerging markets and the extent to which businesses will be required to align with government interests.

About the Authors


Chase Young ’24, Cornell Jeb E. Brooks School of Public Policy. Before doing research for the Emerging Markets Institute, Young interned in the global finance and business management program at JPMorgan Chase and was a research intern for the World Bank’s data development group. Young’s research focuses on the implications of new technologies in emerging markets.

Yixuan Wu ’26, Charles H. Dyson School of Applied Economics and Management

Cella Kamarga ’26, Cornell Jeb E. Brooks School of Public Policy

Gabriel Mallare ’26, Cornell Jeb E. Brooks School of Public Policy