When Is Technology Too Dangerous to Release to the Public?

Last week, the nonprofit research group OpenAI revealed that it had developed a new text-generation model that can write coherent, versatile prose given a subject-matter prompt. However, the organization said, it would not be releasing the full algorithm due to “safety and security concerns.”

Instead, OpenAI decided to release a “much smaller” version of the model and withhold the data sets and training code that were used to develop it. If your knowledge of the model, called GPT-2, came solely from the headlines of the resulting news coverage, you might think that OpenAI had built a weapons-grade chatbot. A headline from Metro U.K. read, “Elon Musk-Founded OpenAI Builds Artificial Intelligence So Powerful That It Must Be Kept Locked Up for the Good of Humanity.” Another from CNET reported, “Musk-Backed AI Group: Our Text Generator Is So Good It’s Scary.” A column from the Guardian was titled, apparently without irony, “AI Can Write Just Like Me. Brace for the Robot Apocalypse.”

That sounds alarming. Experts in the machine learning field, however, are debating whether OpenAI’s claims may have been a bit exaggerated. The announcement has also sparked a broader conversation about how to handle the proliferation of potentially dangerous A.I. algorithms.

OpenAI is a pioneer in artificial intelligence research that was initially funded by titans like SpaceX and Tesla founder Elon Musk, venture capitalist Peter Thiel, and LinkedIn co-founder Reid Hoffman. The nonprofit’s mission is to guide A.I. development responsibly, away from abusive and harmful applications. Besides text generation, OpenAI has also developed a robotic hand that can teach itself simple tasks, systems that can beat pro players of the strategy video game Dota 2, and algorithms that can incorporate human input into their learning processes.

On Feb. 14, OpenAI announced yet another feat of machine learning ingenuity in a blog post detailing how its researchers had trained a language model using text from 8 million webpages to predict the next word in a piece of writing. The resulting algorithm, according to the nonprofit, was stunning: It could “[adapt] to the style and content of the conditioning text” and allow users to “generate realistic and coherent continuations about a topic of their choosing.” To demonstrate the feat, OpenAI provided samples of text that GPT-2 had produced given a particular human-written prompt.

For example, researchers fed the generator the following scenario:

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

The GPT-2 algorithm produced a news article in response:

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science. Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved. Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow. Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.

Other samples exhibited GPT-2’s turns as a novelist writing another battle passage of The Lord of the Rings, a columnist railing against recycling, and a speechwriter composing John F. Kennedy’s address to the American people in the wake of his hypothetical resurrection as a cyborg.
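At its core, GPT-2 does one deceptively simple thing: given the words written so far, it guesses the most plausible next word, then feeds its own output back in and guesses again, word after word, until it has a passage. The network behind the unicorn story learned those guesses from 8 million webpages; as a loose, hypothetical illustration of the same generate-one-word-at-a-time loop, here is a toy Python sketch that substitutes simple word-pair counts for the neural network:

```python
# Toy sketch of next-word prediction (illustrative only; GPT-2 itself
# uses a large neural network, not these simple word-pair counts).
import random
from collections import Counter, defaultdict

corpus = ("the unicorns spoke perfect english . "
          "the unicorns lived in a remote valley .").split()

# Count how often each word follows each other word.
next_words = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev][nxt] += 1

def generate(start, length=10):
    """Repeatedly sample a likely next word, feeding each guess back in."""
    out = [start]
    for _ in range(length):
        followers = next_words.get(out[-1])
        if not followers:  # dead end: no word ever followed this one
            break
        words, counts = zip(*followers.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
```

Replace the word-pair counter with a network of millions of parameters and a web-scale training set, and you have the basic recipe that produced Ovid’s Unicorn.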

While researchers admit that the algorithm’s prose can be a bit sloppy—it often rambles, uses repetitive language, can’t quite nail topic transitions, and inexplicably mentions “fires happening under water”—OpenAI nevertheless contends that GPT-2 is far more sophisticated than any other text generator it has developed. That claim is a bit self-referential, but most in the A.I. field seem to agree that GPT-2 is truly at the cutting edge of what’s currently possible with text generation.

Most A.I. tech is only equipped to handle specific tasks and tends to fumble anything outside a very narrow range, so training the GPT-2 algorithm to adapt nimbly to various modes of writing is a significant achievement. The model also stands out from older text generators in that it can distinguish between multiple definitions of a single word based on context clues, and it has a deeper knowledge of more obscure usages. These enhanced capabilities allow the algorithm to compose longer and more coherent passages, which could be used to improve translation services, chatbots, and A.I. writing assistants. That doesn’t mean it will necessarily revolutionize the field.
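That context sensitivity can be probed directly. Because the model conditions every guess on the full preceding text, a word like “bank” should pull its predicted continuations toward finance in one sentence and toward geography in another. The sketch below is hypothetical (it reaches the publicly released smaller GPT-2 model through the third-party Hugging Face transformers library, which is an assumption, not something the article mentions), but it shows the kind of experiment involved:

```python
# Hypothetical probe of context sensitivity. Assumes the third-party
# "transformers" and "torch" packages and the released small GPT-2 model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def top_next_words(prompt, k=5):
    """Return the k words the model rates most likely to come next."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Scores at the final position rank candidates for the next token.
    best = torch.topk(logits[0, -1], k).indices
    return [tokenizer.decode(int(i)).strip() for i in best]

# Same word, "bank," framed by two very different contexts.
print(top_next_words("She deposited her paycheck at the bank and asked about her"))
print(top_next_words("The canoe drifted toward the muddy river bank and its"))
```

If the model lives up to OpenAI’s description, the two lists should diverge sharply, with financial words following the first prompt and physical, riverside words following the second.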