Elon Musk-backed nonprofit OpenAI has developed an AI text generator designed to learn the patterns of language. The new natural language model, named GPT-2, was trained by OpenAI researchers to “predict the next word in 40GB of Internet text,” the company says in its blog.
OpenAI’s system “adapts to the style and content of the conditioning text,” allowing users to “generate realistic and coherent continuations about a topic of their choosing.” Precisely because its output is so convincing, its creators have taken the unusual step of not releasing the system publicly, for fear of potential misuse.
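The core idea behind GPT-2’s training objective, predicting the next word given the text so far, can be illustrated with a toy sketch. The bigram frequency model below is vastly simpler than GPT-2’s neural network and is purely illustrative (all names and the sample corpus are ours, not OpenAI’s), but it shows the same basic loop: condition on a prompt, then repeatedly predict the most likely next word.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word in the corpus, how often each other word follows it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1
    return following

def continue_text(model, prompt, length=5):
    """Extend the prompt by repeatedly choosing the most frequent next word."""
    words = prompt.lower().split()
    for _ in range(length):
        candidates = model.get(words[-1])
        if not candidates:
            break  # the last word was never followed by anything in the corpus
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# Toy corpus and prompt, for illustration only.
model = train_bigram_model("a cat sat on a mat and a cat sat on a rug")
print(continue_text(model, "a", length=4))  # -> "a cat sat on a"
```

GPT-2 replaces these raw word counts with a large neural network trained on 40GB of text, and samples from a probability distribution rather than always taking the top word, but the generate-one-word-at-a-time structure is the same.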
David Luan, vice president of engineering at OpenAI, and his fellow researchers began to imagine how the system might be used for unfriendly purposes. “It could be that someone who has malicious intent would be able to generate high-quality fake news,” Luan says. And due to such concerns about malicious applications of the technology, the company said, “[We] are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper,” reads a new OpenAI blog about the effort.
A small group of researchers and media personnel was allowed to feed OpenAI’s system human-written text prompts to start with. The AI system continued each story, producing paragraph after paragraph that closely resembled the work of disinformation artists. “Technology like this can shake up the processes behind online disinformation or trolling, some of which already use some form of automation,” said Jack Clark, policy director at OpenAI.
By deciding not to publish its own code, OpenAI is signaling to other AI developers that they should think more carefully about what they build and release to the public.