Artificial intelligence (AI) tools are developing quickly, and this has left many unanswered questions about the rules governing their use. In this article, we examine how these tools are currently being regulated.

AI has always attracted controversy. The biggest tech companies currently use AI-powered tools for a multitude of purposes, and this relatively unchecked usage has raised very real concerns across the board.

One example is AI in imaging, which has seen significant news coverage in recent months. There have been concerns around copyright, such as the potential threat that AI super-resolution technology could pose to the security of image assets, as well as the question of how AI-generated images should be attributed.

Privacy in AI imaging has also become a hot topic, with the State of Texas suing Facebook in February 2022 for the misuse of facial recognition AI technology. Furthermore, Getty Images implemented an industry-first model release in March 2022 that protects the privacy of a subject’s biometric data from AI technologies.

With such scattered regulatory activity in this fast-developing and often complicated area, it can be hard to keep track of exactly where you stand, whether you’re an owner, developer, or user of the technology.

Below we provide an outline of EU, UK, US, and Chinese AI regulations, along with a resource covering these and other AI initiatives around the world.

AI regulation in the EU

The EU was the first of the big global players to draft a regulatory framework for governing the development and use of AI. It has been developed as part of the EU’s approach to artificial intelligence, which focuses on ensuring excellence and trust in AI.

The EU AI Act was first published in April 2021 and describes its aim as ensuring AI applications reflect EU values and protect human rights. To that end, the proposed law splits AI applications into four risk categories: minimal risk, limited risk, high risk, and unacceptable risk.

Systems deemed to pose an unacceptable risk, which the EU defines as anything considered “a clear threat to the safety, livelihoods and rights of people”, would be subject to an immediate ban.

High-risk applications would have strict rules imposed, while limited-risk applications would need to adhere to specific transparency obligations.

Systems that are deemed to pose minimal or no risk are allowed to be used freely. Examples provided by the EU of this type of system include AI-enabled video games and spam filters.

Critically, the EU AI Act has been designed to evolve with the ever-changing nature of AI technology. As such, the rules would be adaptable according to how the technology develops. This means providers would need to perform ongoing assessments to ensure they are continuing to work within the law.

AI regulation in the UK

While the UK Government has not yet released a legal framework, it has laid out a 10-year National AI Strategy for developing the technology within its borders.

In its own words, the UK Government seeks to position the country as “the best place to live and work with AI; with clear rules, applied ethical principles and a pro-innovation regulatory environment.”

The first major step in the UK’s attempts to become a global voice of authority on AI regulation came by way of a roadmap to an effective AI assurance ecosystem. This detailed document sets out the Government’s plan to, in its own words, create a “thriving and effective AI assurance ecosystem within the next five years.”

This was followed by the announcement of a new AI Standards Hub in January 2022. This government initiative will be piloted by The Alan Turing Institute, the British Standards Institution (BSI), and the National Physical Laboratory (NPL), and its stated aim is to provide an online resource for educational materials and practical tools designed to help organizations “develop and benefit from global standards.”

While these efforts are still in their early stages, they show that the UK is serious about establishing itself as a global authority on AI. Whether it achieves that goal, and exactly what that authority will look like, remains to be seen.

AI regulation in the USA

While there is currently no regulation in place at the federal level in the US, there has been a lot of activity across various government departments that aims to address concerns around AI.

For example, the Federal Trade Commission published a blog post advising companies on how to operate fairly in the age of AI, hinting at rules to come. The US Equal Employment Opportunity Commission has also launched an Initiative on Artificial Intelligence and Algorithmic Fairness to ensure that AI used in employment decisions complies with federal civil rights laws.

However, the most decisive step to date has seen Congress instruct the National Institute of Standards and Technology (NIST), part of the US Department of Commerce, to work with the public and private sectors to develop the AI Risk Management Framework (AI RMF).

This framework takes recommendations from the National Security Commission on Artificial Intelligence and NIST’s own paper US Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools to create guidelines that will, according to NIST’s website, help “improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services and systems.”

All things considered, while the US’s overall position seems somewhat fragmented, there are clear signs that it is taking steps towards overarching national regulation. Indeed, given the diplomatic challenge the EU faces in securing agreement from all of its member states, the US could yet implement such regulation before the EU AI Act comes into force.

AI regulation in China

While regulatory progress in the EU, the UK, and the US seems to be picking up pace, China has moved significantly faster, with laws regulating AI coming into force on March 1, 2022.

The Internet Information Service Algorithmic Recommendation Management Provisions, created by the Cyberspace Administration of China (CAC) and available online in English translation, introduced new rules on the use of algorithms to make recommendations.

These are overarching regulations that target all forms of algorithms designed to provide information to users, though several specific algorithmic recommendation technologies are named: generative or synthetic, personalized recommendation, ranking and selection, search filter, and dispatching and decision-making.

The rules aim to safeguard national security and social interests, with a particular focus on combating the dissemination of disinformation and preserving the safety of minors and the elderly.

The translated text describes the provisions’ purpose as being to “carry forward the Socialist core value view, safeguard national security and the social and public interest, protect the lawful rights and interests of citizens, legal persons, and other organizations, and stimulate the healthy development of Internet information services.”

While China’s AI regulations may not be mirrored in the West, there’s little doubt that governments around the world will be paying keen attention in the coming years.

AI regulation around the world

This article has focused on the activities of a few of the world’s biggest regulatory superpowers, but efforts to regulate AI are under way in many other territories around the world.

The AI Policy Observatory dashboard from the Organisation for Economic Co-operation and Development (OECD) is a useful resource for finding out the current state of play around the globe, listing over 700 policy initiatives and strategies from over 60 countries and territories, plus the EU.

The future of AI regulation

While there is plenty of activity concerning AI regulation around the world, at this early stage it is impossible to say exactly how things will develop. The ever-evolving nature of AI, and the resulting fluidity of the proposed regulations, makes any firm prediction difficult.

Granted, there is a clear common thread running through all of the proposals: protecting shared values, preventing disinformation, and shielding the most vulnerable in society from harm. But it’s important to remember that we are a long way from global regulation, and when dealing across borders, cultures, and governments, there is always an element of subjectivity.

With this in mind, whether you are developing AI or simply using it, in such a hyper-connected digital arena it’s essential to familiarize yourself with the rules of the territories in which the technology is created and used.

LEGAL NOTICE: The information contained in this article is provided for informational purposes only and does not in any way constitute professional legal advice. If you are unsure of the law, always seek independent advice from a legal professional.

SmartFrame’s image-streaming technology is revolutionizing online image display, ensuring maximum security and user experience while creating a brand-new revenue stream for the photography industry.