When Artificial Intelligence throws up real-world problems

AI-generated rip-offs of books, fake memoirs and the use of deepfakes pose grave ethical and social issues for the media, publishing and advertising sectors, industry watchers say

by Kanchan Srivastava
Published - October 09, 2023
6 minutes to read

Several international authors were recently in for a rude shock after Artificial Intelligence (AI)-generated bogus books bearing their names were found on sale on e-commerce giant Amazon.

Among them was writer-journalist Rory Cellan-Jones, who found a fake memoir under his name on Amazon, apparently generated with ChatGPT. Another author, Jane Friedman, found five bogus AI-generated titles in her name on the platform. One ‘Steven Walryn’ put 15 titles up for sale on Amazon in a single day.

Amazon has since removed the fake books, but the emergence of AI-generated artwork and the unlicensed use of copyrighted content have raised legitimate concerns within the writers' community and the broader creative industry.

These incidents come at a time when several authors have filed lawsuits in US courts against Microsoft-backed OpenAI, Microsoft, Meta Platforms and other firms for using their copyrighted work to train large language models (LLMs) without their consent, compensation or credit.

The outcomes of these lawsuits could set important precedents for the publishing industry on AI, copyright and privacy, shaping the regulatory landscape in the future. If the courts favour the plaintiffs, OpenAI could face significant financial penalties, which may hurt its financial stability and ability to raise funds, legal experts opine.

Interestingly, AI has already co-authored several books, such as “The Inner Life of an AI: A Memoir by ChatGPT”, whose cover declares it was written by ChatGPT and prompted by Forrest Xiao (a data scientist), and “Think Different: A Step-by-Step Guide to Building the Next Apple” by Bakari Powell and ChatGPT.

“With the further prevalence of generative AI (genAI), there may be a reduced demand for human-authored content which may lead to strained revenues for artists/writers”, says Abheek Biswas, AVP Consumer Insights, dentsu India, who is also an artist.

Rising misuse of deepfakes has also emerged as a threat to celebrities. While deepfakes predate ChatGPT, generative AI has multiplied the cases of misuse. These deepfakes are extremely convincing, and if people take them seriously, the reputational damage could be severe.

Renowned Hollywood actor Tom Hanks last week issued a warning to his followers on Instagram about a deepfake ad.

CBS news anchor Gayle King issued a similar warning this Monday after she found a deepfake video of her circulating on the internet.

Meanwhile, Rahul Vengalil, Executive Director of Everest Solutions, a Rediffusion group company, said, “Generative AI has been helping the advertising and media industry to a great extent. It makes us efficient, takes up our mundane jobs and produces stunning arts and videos. However, frauds are going to exist in the AI domain as the entire digital ecosystem is full of frauds. The extent of frauds in digital media buying is insane though verification tools and measurements have brought them down now.”

AI industry leaders are also aware of the risks of their tools. A couple of months ago, OpenAI chief Sam Altman and Microsoft co-founder Bill Gates signed a one-sentence statement reading: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Political disinformation

The technology is not just being misused by quacks or non-state actors. As generative AI tools grow more sophisticated, political actors are also using them to amplify disinformation. Massive political propaganda is being churned out with the help of manipulated videos and deepfakes to discredit the opposition, critical voices and media, media experts say.

India saw a significant rise in the use of fabricated images and videos for political propaganda this year. “For example, a fake video showing Muslims attacking a Hindu temple went viral on social media, causing religious tensions. This trend highlights the urgent need for effective measures to combat the spread of digital misinformation,” says Advit Sahdev, digital marketing and advertising expert.

According to a recent report “Freedom on the Net” by Freedom House, “Governments and political actors around the world, in both democracies and autocracies, are using AI to generate texts, images, and video to manipulate public opinion in their favour and to automatically censor critical online content.”

While AI technology offers exciting and beneficial uses for science, education, and society at large, its uptake has also increased the scale, speed, and efficiency of digital repression. AI has enabled governments to conduct more precise forms of online censorship by removing unfavoured political, social, and religious speech, the report claimed.

Over the past year, the new technology was utilized in at least 16 countries to sow doubt, smear opponents, or influence public debate, the report further claims. “Internet freedom is at an all-time low, and advances in AI are actually making this crisis even worse,” says Allie Funk, a researcher on the report.

Deepfakes mean the credibility of digital content is being challenged far more often, says Sajal Gupta, CEO of Kiaos Advertising.

How can it be tackled?

To tackle generative AI, governments across the world are reviewing their legislation or considering new laws to protect people from harmful content posted by others. For instance, New York City passed its Bias Audit Law in January 2023, which may be used to govern LLM training data.

“The use of generative AI has created a lot of concerns around copyright and ethics. There is of course no turning the clock back. So, these concerns will need to be addressed by way of regulations. The US, the UK, Singapore, Japan and many others have already put in place legislations to protect copyright. We still don’t have a specific legislation in place in India around generative AI but that is just a matter of time,” says Hareesh Tibrewala, joint CEO of Mirum India.

Biswas believes technology itself will present the solution to this ethical and complicated problem.

“Develop and implement more robust digital rights management (DRM) systems to track and protect the ownership of digital content. This would help prevent unauthorized AI-generated duplications of copyrighted material. Employ AI tools to assist in the verification of the copyrighted content, helping creators and publishers identify potential infringements or misuse of their work in genAI outputs,” he suggests.
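The AI-assisted verification Biswas describes can be illustrated with a minimal sketch. The function names, the shingle size and the sample texts below are assumptions for illustration only, not an existing product: the idea is to fingerprint every n-word window of a protected work and flag a suspect text whose fingerprints overlap heavily with the original.

```python
import hashlib


def shingle_fingerprints(text: str, n: int = 5) -> set:
    """Hash every n-word window so two texts can be compared cheaply."""
    words = text.lower().split()
    return {
        hashlib.sha256(" ".join(words[i:i + n]).encode()).hexdigest()
        for i in range(max(len(words) - n + 1, 1))
    }


def overlap_score(original: str, suspect: str) -> float:
    """Jaccard similarity of the two fingerprint sets (0.0 to 1.0)."""
    a = shingle_fingerprints(original)
    b = shingle_fingerprints(suspect)
    return len(a & b) / len(a | b) if a | b else 0.0


original = "the quick brown fox jumps over the lazy dog every single morning"
copied = "the quick brown fox jumps over the lazy dog every single morning"
unrelated = "generative models raise new questions for publishers and artists alike"

# A verbatim duplicate shares every fingerprint; an unrelated text shares none.
assert overlap_score(original, copied) == 1.0
assert overlap_score(original, unrelated) == 0.0
```

Real detection systems would use perceptual or semantic similarity rather than exact hashes, since paraphrased AI output would evade word-for-word matching; this sketch only shows the fingerprint-and-compare shape of the approach.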

Vengalil and Sahdev underscore the importance of awareness among consumers and the need for robust fact-checking and verification mechanisms in the digital age.

A few leading content houses and digital platforms have already started initiatives to educate their readers on the methods of fact-checking and digital safety, Gupta points out.


Biswas also hopes that blockchain technology can create immutable records of ownership and copyright information, serving as irrefutable proof of original content and its creator.