The AI world is evolving rapidly, with language models making substantial advances within mere months. The regulatory environment is adapting just as quickly, with entities like the White House, the US Congress, the FTC, China's Cyberspace Administration, and the EU moving to establish frameworks for AI. It's not easy to keep up with new guidelines and laws, so we are here to help. Below, you will find recent developments in the AI regulatory landscape. Keep in mind that this sector changes constantly, so make sure to stay updated by signing up for our newsletter, reading professional materials, and joining our Privacy Champions Slack community.
Europe
The EU AI Act (Draft)
The EU AI Act is a proposal for regulating artificial intelligence (AI) in the EU. It aims to classify AI systems based on the risks they pose and apply varying levels of regulation accordingly. The key goals are to ensure AI systems are safe, transparent, traceable, non-discriminatory, and environmentally friendly, with human oversight to prevent harmful outcomes. The Act also seeks a clear, technology-neutral definition of AI that can apply to future AI systems.
Model contractual clauses for AI procurement by public organizations
The EU model contractual AI Clauses have been finalized to support responsible AI procurement within public organizations. These clauses aim to establish responsibilities for trustworthy, transparent, and accountable AI development between suppliers and public organizations.
The initial version underwent a review process involving expert roundtables to improve and validate the clauses, focusing on AI system requirements, transparency, explainability, auditing, and accountability. Post-review, the clauses were revised to incorporate new requirements and adjustments, such as an extended regime for data sharing among public authorities and suppliers.
Additionally, a light version targeting non-high-risk AI systems was developed to provide more flexibility. The updated clauses are now accessible on the Public Buyers Community Platform, inviting contracting authorities to test them and provide feedback. They are also being translated into all EU languages for broader accessibility.
South America
Brazil
A temporary committee of jurists in Brazil has approved a text outlining rules for regulating artificial intelligence (AI), set to be delivered to the Senate president, Rodrigo Pacheco. The text is structured around three central pillars: guaranteeing the rights of individuals affected by AI systems, grading the level of associated risks, and planning governance measures for companies providing or operating AI systems. The committee, led by Ricardo Villas Bôas Cueva, emphasized the guarantee of fundamental rights in light of Brazil's structural inequalities and engaged civil society through public hearings, so the guidelines reflect a consensus drawn from the opinions collected.
North America
Canada
The Canadian federal government introduced Bill C-27, also known as the Digital Charter Implementation Act, 2022, containing the Artificial Intelligence and Data Act (AIDA) to regulate AI systems and reduce the risks of harm and biased outcomes. As of March 2023, the bill is in its second reading in the House of Commons.
United States
- The National Institute of Standards and Technology (NIST), an agency of the US Department of Commerce, released the Artificial Intelligence Risk Management Framework 1.0 on January 26, 2023, as a voluntary guide for technology companies in managing AI risks.
- The 2023 legislative session saw a significant amount of activity regarding AI legislation across various states. Here’s a more comprehensive look at the AI bills and resolutions across different states based on information from the National Conference of State Legislatures (NCSL):
- California: Several bills are pending, including ones urging the U.S. government for a moratorium on training AI systems more powerful than GPT-4, establishing standards for the safe development and deployment of frontier AI models, requiring state agencies to provide notice when using generative AI to communicate with individuals, and creating an interagency AI working group among others.
- Connecticut enacted legislation requiring an inventory and assessment of AI systems used by state agencies, to prevent unlawful discrimination or disparate impact.
- District of Columbia: Pending legislation to prohibit discrimination by algorithms, requiring notices to individuals whose personal information is used in algorithmic decision-making.
- Georgia enacted appropriations for the Georgia Artificial Intelligence Manufacturing Project and legislation relating to the control of hazardous conditions, among others.
- Hawaii adopted resolutions urging Congress to consider the benefits and risks of AI technologies.
- Illinois: Various pending and enacted legislation addressing AI in video interviews, hospital diagnostics, gambling data analytics, patient limits in healthcare, and amending the Human Rights Act to address predictive data analytics in employment decisions, among others.
- Louisiana adopted a resolution requesting a study of AI's impact on state operations, procurement, and policy.
- New York City enacted Local Law 144 (effective January 1, 2023), requiring employers using automated decision tools in recruiting and promotions to disclose the use of such tools.
- North Dakota enacted legislation defining a person, specifying that the term does not include AI, environmental elements, an animal, or an inanimate object.
- Texas created an AI advisory council to study and monitor AI systems developed, employed, or procured by state agencies, with similar councils also established in North Dakota, Puerto Rico, and West Virginia.
- On a federal level, the White House Office of Science and Technology Policy published the non-binding "Blueprint for an AI Bill of Rights" (October 4, 2022), outlining principles to minimize harm from automated systems.
- The FTC published a report on consumer concerns about AI on October 3, 2023, emphasizing apprehensions over data usage for model training, potential copyright infringement, and consent issues around the use of personal content.
For companies interested in using AI technologies for healthcare-related decision-making, the FDA has also announced its intention to regulate many AI-powered clinical decision-support tools as devices.
Asia
- Japan is working on draft guidelines to address the overreliance on AI technology by companies and organizations. Japan has also released the Social Principles of Human-Centric AI, focusing on human-centricity, education, data protection, and other core principles. These principles are non-binding but signify the political consensus on AI opportunities and risks.
- China has been active in regulating AI, with laws like the Algorithm Provisions targeting the abuse of algorithmic recommendation systems and Draft Deep Synthesis Provisions to combat deepfakes. Additionally, the Cyberspace Administration of China is consulting on draft Administrative Measures for Generative AI Services requiring AI products to undergo a “safety assessment” before public release.
- South Korea: PIPC launches AI privacy task force – On October 5, 2023, the Personal Information Protection Commission (PIPC) announced the launch of an artificial intelligence (AI) privacy task force. The task force will serve as a focal point for communication and cooperation between the government and the private sector on privacy in AI. It will also interpret personal information protection laws for individual cases in the rapidly changing AI environment, aiming to resolve uncertainty by presenting specific standards.
- Israel (yes, Israel is in Asia) published a 115-page draft AI policy in October 2022. Israel's Ministry of Innovation, Science and Technology, together with the Ministry of Justice, drafted the policy to regulate AI with an emphasis on ethics, risk management, and global alignment. The policy promotes "soft" regulatory tools and ethical principles similar to global standards, and establishes a government knowledge center for AI regulation.
Australia & New Zealand
Australia
Australia is requiring search engines to curtail the sharing and generation of AI-created child sexual abuse material. A new code requires search engines to ensure such content isn't displayed in search results and prohibits AI functions within search engines from producing synthetic versions of this material.
New Zealand
The New Zealand government is working to ensure the responsible use and development of AI in the country. They have published an “Algorithm Charter for Aotearoa New Zealand” which sets out standards for the use of algorithms by public agencies.
Other International Efforts
- G7 leaders are expected to establish international AI regulations by the end of the year under a coordinated approach known as the Hiroshima AI Process.
- The Universal Guidelines on Artificial Intelligence (UGAI) have been proposed to address the growing challenges of intelligent computational systems, aiming to promote transparency and accountability.
- UNESCO introduced a global standard on AI ethics, adopted by all 193 Member States, to guide the ethical development and utilization of AI.
Noa Kahalon
Noa is a certified CIPM, CIPP/E, and a Fellow of Information Privacy (FIP) from the IAPP. Her background spans marketing, project management, operations, and law. She is the co-founder and COO of hoggo, an AI-driven Digital Governance platform that allows legal and compliance teams to connect, monitor, and automate digital governance across all business workflows.