The global surge in artificial intelligence (AI)-generated art, powered by tools like DALL·E and Stable Diffusion, has captivated both artists and everyday users. From jaw-dropping digital paintings to viral visuals sweeping across social media, anyone can now create artwork by simply typing a few words. Yet, beneath this wave of creativity lies a thorny question: who actually owns the copyright of these AI-generated masterpieces? As the line between human and machine-made art becomes increasingly blurred, concerns over misuse and legal ambiguity are rising. Different jurisdictions, including the UK, are responding in various ways, while legal experts and artists warn of copyright confusion and potential criminal exploitation.
Unlike traditional art, where a painter’s brushstrokes clearly indicate authorship, AI-generated art occupies a legal grey area. Under the intellectual property laws of Hong Kong, the United States, and the European Union, AI systems are not legal persons and thus cannot hold copyright. Machines are regarded merely as tools—like a camera or a canvas. But does the user who enters a prompt such as “a futuristic city at sunset” own the resulting image? The answer isn’t straightforward.
Legal precedents indicate that the key lies in human input. In 2023, a US federal court upheld the Copyright Office’s refusal to register a fully AI-generated image, on the grounds that it lacked “human authorship”. Vague prompts like “draw a cat” may not meet the originality threshold required for copyright. However, detailed instructions or post-editing may change the outcome, potentially granting the user ownership. Hong Kong’s copyright framework does not yet explicitly address AI, but it similarly upholds human creativity as the basis of ownership.
The issue becomes even more complex when AI art imitates human-made work. Many AI models are trained on existing artworks scraped from the internet, often without artists’ consent. Since 2023, a wave of lawsuits has emerged worldwide, with artists accusing companies such as Stability AI of copyright infringement for using their work to train algorithms. AI-generated images can closely resemble the styles of well-known creators, making it difficult for the public to distinguish between genuine and synthetic works. “It’s a minefield,” says Hong Kong illustrator Clara Wong. “People share AI art thinking it’s original, but they might be unknowingly plagiarising someone’s life’s work.”
More troubling is the criminal misuse of AI art. Fraudsters can exploit AI to produce convincing forgeries of famous artworks, selling them under false pretences or deceiving auction houses for profit. Another risk is the creation of deepfake images used for scams, blackmail, or misinformation. In 2024, European police dismantled a criminal network that used AI to generate counterfeit artworks, swindling millions of Hong Kong dollars. Such acts not only erode trust in the art market but may also implicate unsuspecting users who share or purchase these pieces, exposing them to potential legal consequences.
Everyday users also face risks. A small business owner who designs a logo using AI might find their image unintentionally resembles copyrighted material, risking a lawsuit. The NFT market has already seen scandals involving AI-generated art being sold as human-made, raising issues of authenticity. Worse still, if AI-generated images are used for illicit purposes—such as forging identity documents—the user could face criminal charges. These missteps can lead to costly litigation or reputational harm, particularly as AI art becomes ubiquitous on platforms like Instagram and Etsy.
The UK’s Approach
In the United Kingdom, AI-generated art falls under the Copyright, Designs and Patents Act 1988 (CDPA), which uniquely allows copyright protection for “computer-generated works” without a human author. Such works are protected for 50 years from creation, compared with the author’s lifetime plus 70 years for works with human authors. Under Section 9(3) of the CDPA, the “author” of a computer-generated work is “the person by whom the arrangements necessary for the creation of the work are undertaken”. This could refer to the user entering prompts, the programmer who developed the AI software, or the party supplying the training data. In 2024, the UK government launched a consultation to clarify the legal framework surrounding AI and copyright, focusing in particular on the use of training data and the ownership of generated content.
In contrast to the US—which requires human authorship for copyright protection, as demonstrated by its refusal to register the AI artwork A Recent Entrance to Paradise, upheld in court in 2023—the UK’s broader definition permits protection for AI works, though the question of authorship remains unsettled. In 2023, Getty Images filed a high-profile lawsuit against Stability AI in the High Court in London, accusing the company of using copyrighted images for training without permission—highlighting the legal risks tied to data sourcing.
The UK is also addressing the criminal misuse of AI art, including fake artworks and deepfake content. In 2024, British police collaborated with INTERPOL to track down a fraud ring using AI-generated images to deceive art collectors. The government has proposed mandatory labelling of AI-generated content and is considering new laws to protect likeness rights, tackling the threat posed by synthetic media.
Policy Outlook
The UK is seeking to balance the growth of creative industries with the rapid development of AI. In December 2024, a government consultation proposed increasing transparency by requiring AI companies to disclose the sources of their training data and encouraging licensing agreements between copyright holders and AI developers. Prominent musicians like Paul McCartney have criticised the potential loosening of copyright laws, warning that it could undermine artists’ livelihoods—prompting policymakers to reconsider proposed reforms.
The public’s inability to distinguish AI from human-created art also raises cultural concerns. Without clear labelling, machine-generated visuals risk diluting the value of traditional craftsmanship—and may even be used as a smokescreen for illicit activities. To address these challenges, the European Union adopted its AI Act in 2024, mandating the labelling of AI-generated content. Hong Kong and the UK may consider similar measures to protect both the creative sector and public safety. Without such steps, markets risk becoming a murky space where truth and falsehood are indistinguishable—and crime may flourish.
What Comes Next?
Experts recommend a cautious approach. Users should document their prompts and any editing process to demonstrate their creative contribution, especially in commercial projects. They should also avoid generating images that might involve sensitive or unlawful content. Given the current legal uncertainty and the risks of criminal misuse, seeking legal advice before sharing or selling AI art is advisable.
At the same time, artists are advocating for stronger protections, calling on AI companies to obtain permission before using their works for training. Law enforcement must also enhance monitoring capabilities to detect AI-generated forgeries.
As AI art reshapes the creative landscape, one thing is clear: the future of copyright and safety will depend on a careful balance of innovation, fairness, and vigilance. Until the law catches up, creators and users across Hong Kong, the UK, and beyond must tread carefully—or risk unwittingly becoming part of a legal or ethical minefield.