Campus & Community

Artificial Intelligence Expert Etzioni: To Harvest the True Potential of AI, Get Beyond Hype and Hysteria

With ominous headlines asking whether artificial intelligence (AI) could “wipe us out,” and with billionaire entrepreneur Elon Musk and celebrity scientist Stephen Hawking issuing dire warnings about the technology’s threat to human existence, there has been plenty of public hand-wringing about the future impact of AI.

But doomsday scenarios of AI running amok have more to do with science fiction than with the reality of the technology today, says Dr. Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence (AI2) in Seattle. An accomplished entrepreneur and computer science professor at the University of Washington, Etzioni visited Stevens Institute of Technology October 4 as the tenth speaker in the President’s Distinguished Lecture Series (PDLS).


Etzioni’s stimulating talk, titled “Is Artificial Intelligence Good or Evil?,” continued a fascinating dialogue on AI that began earlier this year when Google research director Dr. Peter Norvig also addressed a Stevens audience. 

When AI poses a danger  

Etzioni began his lecture with a broad overview of the tremendous advances made in AI, ranging from an AI program defeating the human world chess champion to the rise of skilled, speech-driven assistants such as Siri and Alexa.

Still, these successes, or “AI savants” as he refers to them, are mostly limited to highly structured tasks; they are hardly the human-like robots depicted in HBO’s Westworld.

Drawing a clear distinction between intelligence and autonomy, he says, is critical to understanding when AI becomes harmful.

Computer viruses, for example, lack intelligence, he noted — yet can potentially cause massive damage.

“Their ability to go from one machine to another over the internet and duplicate themselves comes from their autonomy,” Etzioni explained.

While computer viruses are harmful, what’s even more frightening is the possibility of AI weapons that could one day make life-or-death decisions without human intervention. Then again, he noted, AI weapons might also make better decisions than weapons guided by fallible human operators.

“Intelligence in weapons can actually prevent mistakes like what we’ve had, and do have, where innocent civilians get killed,” he stressed. “The key thing we want to avoid is autonomous weapons.”

Slowing down certain advances in AI

The economic impact of AI on the labor market is a real concern for Etzioni, one he says merits serious discussion.

“The chief economist at Google has said that old jobs are going to go away, but that new jobs are going to come and replace them, but I don’t think it’s going to be that simple,” he worries.

Because it’s not clear what impact AI will have on society, some have argued for hitting the pause button on the technology. Etzioni doesn’t go that far. Deliberately slowing AI’s progress, by imposing a tax on robots as Bill Gates has suggested, for instance, would be unwise, he says, pointing to China and its stated goal of becoming the world leader in AI by 2030.

“AI is very much a global phenomenon. [Russian President Vladimir] Putin has said the leader in AI will rule the world,” Etzioni reminded the audience. “So if we slow down AI progress in this country, we very much do so at our peril. Right now we have something of an edge.”

The rise of terrorism and rogue nations also makes a powerful argument for not giving up America's current edge in the field.

“The one thing I fear more than highly powerful AI weapons in the hands of our military is highly powerful AI weapons in the hands of rogue nations, or in the hands of terrorists,” he said. “So I think we benefit from healthy competition in this area.”

Reducing medical errors, hunting for cures, powering intelligent vehicles

In fact, most AI technologies can’t come soon enough for Etzioni, who outlined the many ways AI could be used to save lives.

“The third leading cause of death in American hospitals is some kind of doctor error,” he noted. “AI-based information systems that can analyze what’s happening in hospitals and detect potential mistakes can save an enormous number of lives.”

Etzioni also explained how a team of AI2 researchers has created an AI-based search engine called Semantic Scholar to help scientists extract useful information from the thousands of scientific papers published worldwide each week, a task no human being could accomplish even in a lifetime.

“The cure for an intractable cancer may be buried in those papers,” he said. 

Etzioni also discussed the use of AI in autonomous vehicles. Whether the danger is texting while driving, driving under the influence or otherwise impaired drivers, safe-driving technology could one day mean the difference between life and death, he noted. Technology in that area is advancing so rapidly, he believes, that governments are failing to keep pace.

“Seattle announced that it was proposing to have a corridor between Seattle and Vancouver just for autonomous cars by the year 2040,” he noted. “For me, that’s way too slow.”

Keeping us safe from AI's worst moments

To prevent the sort of AI disaster depicted in films like Stanley Kubrick’s 2001: A Space Odyssey, in which a shipboard AI becomes self-aware and seizes control of the spacecraft, Etzioni says an impregnable “off” switch is essential in any AI system.

Regulation can also help rein in the worst uses and abuses of AI technology. Etzioni has proposed, in a New York Times op-ed, that AI-driven systems be subject to all the laws that apply to those systems’ human operators or manufacturers.

“If my AI car crashes into yours, ‘my AI did it’ is not an excuse,” he said. “We have to take responsibility for our intelligent cars the same way as we would take responsibility for our unintelligent cars.”

It’s also becoming easier and easier for an AI to pose as a human being.

"We see this in Facebook, we see it in Twitter bots," Etzioni remarked. "We should have a rule that a computer system should engage in full disclosure and disclose that it’s an AI system.”

Join us in the spring semester for the next lecture in the President’s Distinguished Lecture Series on Wednesday, January 31, 2018, featuring Dr. Tom Mitchell, the E. Fredkin University Professor in Carnegie Mellon University’s Department of Computer Science.

For more information about the President’s Distinguished Lecture Series, please visit stevens.edu/lecture.