Artificial intelligence (AI) tools are developing quickly, leaving many unanswered questions about the rules governing their use. In this article, we examine how these tools are currently being regulated.
AI has always attracted controversy. The biggest tech companies currently use AI-powered tools for a multitude of purposes, and this relatively unchecked usage has given rise to very real concerns across the board.
One example is AI in imaging, which has seen significant news coverage in recent months. There have been concerns around copyright, such as the potential threat that AI super-resolution technology could pose to the security of image assets, as well as the question of how AI-generated images should be attributed.
Privacy in AI imaging has also become a hot topic, with the State of Texas suing Facebook in February 2022 for the misuse of facial recognition AI technology. Furthermore, Getty Images implemented an industry-first model release in March 2022 that protects the privacy of a subject’s biometric data from AI technologies.
With such scattered regulatory activity in this fast-developing and often complicated area, it can be hard to know exactly where you stand, whether you're an owner, developer, or user of the technology.
Below we outline EU, UK, US, and Chinese AI regulations, along with links to these and other AI regulations around the world.
AI regulation in the EU
The EU was the first of the big global players to draft a regulatory framework for governing the development and use of AI. It has been developed as part of the EU’s approach to artificial intelligence, which focuses on ensuring excellence and trust in AI.
The EU AI Act was first published in April 2021 and describes its aim as ensuring AI applications reflect EU values and protect human rights. To that end, the law splits AI applications into four areas of risk: minimal risk, limited risk, high risk, and unacceptable risk.
Technology deemed to pose an unacceptable risk, meaning any system the EU considers "a clear threat to the safety, livelihoods and rights of people", would be subject to an immediate ban.
High-risk applications would have strict rules imposed, while limited-risk applications would need to adhere to specific transparency obligations.
Systems that are deemed to pose minimal or no risk are allowed to be used freely. Examples provided by the EU of this type of system include AI-enabled video games and spam filters.
Critically, the EU AI Act has been designed to evolve with the ever-changing nature of AI technology. As such, the rules would be adaptable according to how the technology develops. This means providers would need to perform ongoing assessments to ensure they are continuing to work within the law.
AI regulation in the UK
While the UK Government has not yet released a legal framework, it has laid out a 10-year National AI Strategy for developing the technology within its borders.
In its own words, the UK Government seeks to position the country as "the best place to live and work with AI; with clear rules, applied ethical principles and a pro-innovation regulatory environment."
The first major step in the UK’s attempts to become a global voice of authority on AI regulation came by way of a roadmap to an effective AI