The research organization OpenAI – in which Microsoft recently invested one billion dollars – has just released the full version of GPT-2, an artificial intelligence capable of writing intelligible texts from just a few words of input.
To achieve this, the researchers trained an unsupervised machine learning algorithm on more than 8 million web pages. The result is a model with more than 1.5 billion parameters.
The organization could have made this release long ago, but it preferred to proceed in stages, judging that the technology had significant potential for malicious use.
In February, OpenAI researchers released a first, restricted version with 124 million parameters. In May and August, they released less restricted versions, with 355 and 774 million parameters respectively. “So far, we have not seen any solid evidence of abuse,” OpenAI says in a blog post to justify this final release.
Texts of almost human quality
GPT-2 can generate text from almost any prompt. Some outputs are repetitive or inconsistent; others are frankly impressive. People surveyed by Cornell University gave the texts created by GPT-2 a “credibility” score of 6.91 on a scale of ten – approaching, so to speak, “human” quality.
To see for yourself, just visit Talktotransformer, which provides a Web interface to this artificial intelligence. You can create dummy recipes, imaginary Lord of the Rings passages, HTML code, software documentation, and more. Below is a text generated from a single sentence. The result could easily be mistaken for an ordinary newspaper article.
“Today, a big building in Paris was burned to the ground, but nobody got hurt.”
“The fire was started after a bomb at the Place de la République, a popular shopping area that is located in the center of the city.
An Islamic State cell was discovered in a neighborhood on the outskirts of Paris. There were arrests, but no major incidents occurred. France continues to be targeted by terrorist extremists over the past decade.
Terrorism is now so common in France, it is no longer just a threat to France. The nation has become the world’s most popular destination for terrorists, as well as its support for the country’s policies, high unemployment, and strong support of the French government’s anti-terrorism policies.”
The risk of abuse, of course, has not disappeared. On the contrary, researchers from the Middlebury Institute’s Center on Terrorism, Extremism, and Counterterrorism (CTEC) have shown that GPT-2 can be turned into a generator of propaganda texts by training it on specific ideological corpora.
Four ideologies were explored: white supremacism, Marxism, jihadism, and anarchism. The tests showed that specializing the model on an ideological universe made the texts even more credible: only a small percentage of them could be automatically recognized as synthetic.
Soon a Web filled with synthetic messages?
For information manipulators, GPT-2 could thus quickly become the “killer app”. Troll factories could replace their employees with software bots and be tempted to flood social networks with torrents of artificial text, capturing as much discussion space and available attention as possible.
Fortunately, this technology is not (yet) foolproof. The CTEC researchers’ study shows that synthetic texts are detectable, though not with 100% accuracy, so some doubt remains for any single text. Malicious actors using such an artificial intelligence could nevertheless be unmasked fairly easily, provided a large number of their texts can be analyzed.
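The detection side can also be sketched in a few lines. This assumes the RoBERTa-based GPT-2 output detector that OpenAI published alongside the model weights, available on the Hugging Face Hub as `roberta-base-openai-detector`; the article does not name a specific tool, and the model id and its labels are assumptions:

```python
# Synthetic-text detection sketch (assumption: OpenAI's RoBERTa-based
# GPT-2 output detector, published on the Hugging Face Hub as
# "roberta-base-openai-detector"; the article names no specific tool).
from transformers import pipeline

detector = pipeline("text-classification",
                    model="roberta-base-openai-detector")

sample = ("The fire was started after a bomb at the Place de la "
          "République, a popular shopping area in the center of the city.")
result = detector(sample)[0]

# The classifier returns a label with a confidence score. A single
# verdict is unreliable; aggregating scores over many texts from the
# same source is what makes unmasking practical.
print(result["label"], round(result["score"], 3))
```

This matches the article’s caveat: per-text classification leaves room for doubt, but statistical analysis across a large corpus of an actor’s output tips the balance.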
Finally, research will continue and the models will keep improving. Over the summer, Nvidia researchers managed to build a GPT-2-style model with more than 8 billion parameters (the Megatron-LM project), significantly improving text quality. Detection may therefore become more and more difficult.