AI generated content: Why is OpenAI’s new language model “too dangerous to release”?
4 min to read


26 April 2019

By Nikhil Vyas, Bird & Bird

From personal assistants to customer services, AI that can talk to us is already big business. But what happens when AI gets a bit too good at pretending to be a human?

OpenAI recently announced that it had created a language model, GPT-2, capable of generating strikingly humanlike text. Realising that such output could fall into the wrong hands, the company decided not to release the full model to the public. Instead, it released only a smaller, more restricted version.

Confirming OpenAI's worst fears, some companies and individuals have already begun adapting this smaller, pre-trained model, and creations such as Todorov's Facebook API are beginning to surface. Whilst Todorov's API produces only a goofy, mostly nonsensical chatbot, the fact that the restricted model is being modified and put to use in new ways may herald a wave of increasingly realistic AI.
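To give a flavour of why such adapted bots sound "goofy and mostly nonsensical", the sketch below is a toy Markov-chain text generator in Python. It is purely illustrative, bears no relation to OpenAI's far more sophisticated model, and simply chains together words based on which words followed them in a sample text:

```python
import random
from collections import defaultdict

def train_markov(text, order=1):
    """Build a table mapping each run of `order` words to the words seen after it."""
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        table[key].append(words[i + order])
    return table

def generate(table, length=10, seed=None):
    """Walk the table to emit plausible-looking but often nonsensical text."""
    rng = random.Random(seed)
    key = rng.choice(list(table.keys()))
    out = list(key)
    for _ in range(length):
        followers = table.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: no word was ever seen after this run
        out.append(rng.choice(followers))
    return " ".join(out)
```

Trained on a few sentences, a generator like this produces locally fluent but globally incoherent output, which is roughly the gap that modern neural language models have closed.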

One of OpenAI's main concerns was that its model could be used to generate fake news, a problem that has plagued the internet since its inception but has only come to the fore in recent years. With this advance in realistic AI and chatbots, it may be only a matter of time before AI-generated fake news becomes indistinguishable from the real thing. Quite apart from the ethical problems, once an AI learns more about the world around it and how to craft sensational stories, what happens when it starts making wild, untrue allegations against individuals and companies?

Liability for AI

The world of sensational journalism continues to have its fair share of defamation cases, and fake news is also being targeted and taken down, with perpetrators fined and/or taken to court. Whereas some post fake news to suit a political agenda, others do it simply for fame or notoriety, but governments and individuals are gaining ground in deterring fake content creators.

What then happens if a content-creating AI posts a defamatory story? Does the creator of the algorithm get taken to court? What about the person who trained the AI to create a certain type of content or the person who set the AI running? These questions are not straightforward to answer, especially when dealing with an open-source AI code that can be adapted and fed data by anyone and everyone. Due to the nature and complexity of the algorithms behind AI it may be difficult to distinguish who is ultimately at fault if the AI produces something unexpected which turns out to be defamatory.

While defamation may be less of an issue for a chatbot, where the interaction is normally limited to a one-on-one conversation rather than an article that could reach a readership of millions, AI-enabled fraud is a growing concern.

AI Fraud

AI developers are focusing on making interactions more real, moving from basic mimicking to fluid interactions and dynamic conversations. With these advancements, AI could potentially be more convincing than a human con artist, mimicking with ease the style and mannerisms of a particular individual. Combine this with the volume of data we post publicly online and our reliance on email and instant messaging, and it's not hard to see how an AI could mimic a family member to convince an elderly relative to part with funds, or set up a fake online transaction to harvest data.

It’s easy to see then why OpenAI refused to release their code into the public domain, for fear of inadvertently creating a monstrosity, though the decision has been met with disagreement too. The language model that OpenAI has created is clearly advanced, and has capabilities above and beyond most other available algorithms. With the right people and the right intentions, this AI could be used to develop and evolve our current understanding of technology. But perhaps, for now, it is best kept under lock and key, until a time when people better understand the technology they’re dealing with.

Join Bird & Bird on 30 April 2019 for our event AI: The Power to Transform, part of the Advance Series with The Telegraph.

