Following a public outcry, American author Jane Friedman has succeeded in persuading Amazon to remove AI-authored books which were falsely credited to her. Friedman, the author of several books about the publishing industry, was alerted by a reader to falsely attributed books listed for sale on the Amazon website. On investigation, it appears that the fakes were ‘garbage books’ created using generative AI to mimic her writing style and were marketed under Friedman’s name to take advantage of her reputation as an author.

This news follows a US class-action lawsuit by authors who claim that the creators of ChatGPT breached copyright law by training their AI model on their novels. Reports have suggested that AI-generated books are a growing problem, with a Vice investigation finding that Amazon’s Kindle Unlimited young adult romance bestseller list was full of AI-generated spam books, and a recent New York Times investigation finding that ‘shoddy’ AI-generated travel guides had ‘flooded Amazon in recent months’. Authors on social media reacted to Friedman’s experience with their own stories of challenges in addressing falsely attributed books listed on Amazon. These developments suggest we may see a rise in claims involving the false attribution of literary works due to generative AI – but does UK law provide effective tools to protect authors against impersonation?

Friedman’s fight for a favourable fix

Friedman initially found Amazon reluctant to remove the AI-authored titles, as the platform’s IP infringement reporting tools require rights owners to provide evidence of their copyright or trade mark rights relating to the listing they are seeking to remove. This proved challenging for Friedman, as she did not have trade mark protection for her name, nor any copyright interest in the falsely attributed books (having not created them). However, after the case attracted significant attention on social media and in news reporting, Amazon appears to have relented and removed the listings for the fake titles. While an Amazon spokesperson confirmed that the platform has “clear content guidelines governing which books can be listed for sale and promptly investigate any book when a concern is raised”, there remain concerns that less well-known authors may struggle to prompt Amazon to act against other falsely attributed books.

Could copyright control counterfeits? 

As we have previously discussed, it is not yet clear whether the user of a generative AI tool would infringe the copyright of an author if they created a book ‘in the style of’ a particular author. However, in the UK (and most jurisdictions), copyright protection includes two distinct categories of rights – copyright (or, technically speaking, economic rights) and moral rights. Moral rights protect an author’s non-economic interests and include the right for a person not to be expressly or impliedly named as the author of a work that they did not create. This right is called the right against false attribution.

The false attribution right is a powerful tool for authors in dealing with such fake books. An author would have a clear claim against a person who used generative AI to create a book and falsely attributed it to them. The false attribution right can also be exercised against a platform or reseller if it issued copies to the public, and (if it had reason to believe the book was mis-attributed, such as after being notified by the author) it continued to possess and/or deal with such mis-attributed copies. This means that platforms may be liable for false attribution, especially if they do not promptly remove infringing books following a notification – which arguably seems unfair given the challenge platforms face in identifying such content before they are notified.

Trusting trade mark to take-down titles?

Another option for authors to protect themselves against falsely attributed books is to seek trade mark protection for their full name or surname. A registered mark has the upside of making it relatively easy to have infringing products taken down from platforms. Registered trade mark protection may, however, not be practical or cost effective for all authors.

Other options?

In addition to moral rights or obtaining trade mark registrations, it may be possible for authors to rely on other rights to take action against books falsely attributed to them. For example, as early as 1913 an author succeeded in a UK defamation and passing off action against a magazine which had published a story that it had attributed to him, but which had actually been written by a grocer’s assistant from Bournemouth. The well-known author claimed that the story was of inferior quality and was detrimental to his reputation, and the judge directed the jury that if they came to the conclusion that “any one reading the story would think plaintiff a mere commonplace scribbler” then they should find the publication defamatory. However, modern claimants in the UK would need to show ‘serious harm’ to their reputation, and – perhaps more importantly – platforms are much less willing to remove content on the basis of defamation.

The capability of generative AI is likely to revolutionise the way we work and live our lives, but (as we have previously discussed) its rapid development and uptake has given rise to many potential legal and regulatory issues. The risks to authors of false attribution and generative AI are reminiscent of those facing performers and the music industry, and regulators in the UK and EU continue to respond to the challenges posed by this disruptive technology. For authors, disputes such as these underline the increasing need to take a more active role in managing and enforcing their full suite of intellectual property rights, including moral rights.

Platforms have no interest in their catalogues being filled with AI-generated spam either. Even OpenAI cannot reliably spot AI-generated text, and platforms face an unenviable task in trying to weed out such content.