Today's AI is machine intelligence that lacks a basic understanding of things and actions – in other words, common sense. But the picture may be changing as the Defense Advanced Research Projects Agency (DARPA) and the Seattle-based Allen Institute for AI (AI2) team up to define the problem and spur progress through DARPA's ‘Machine Common Sense’ program.
“The absence of common sense prevents an intelligent system from understanding its world, communicating naturally with people, behaving reasonably in unforeseen situations, and learning from new experiences. This absence is perhaps the most significant barrier between the narrowly focused AI applications we have today and the more general AI applications we would like to create in the future,” explained DARPA’s Dave Gunning in a press release.
Building common sense into AI systems will be remarkably difficult, in part because the concept is so broad that it is hard to define and test. Common sense spans everything from knowing everyday facts to understanding emotions. Though such abilities may seem obvious to any human older than a few months, they are actually quite sophisticated constructs involving multiple concepts and intuitive connections.
AI systems must therefore be trained to identify connections between facts based on past observations. That's why DARPA's proposal involves building “computational models that learn from experience and mimic the core domains of cognition as defined by developmental psychology. This includes the domains of objects (intuitive physics), places (spatial navigation) and agents (intentional actors),” Gunning said.
The real challenge, however, is testing these efforts. “…how to put this on an empirical footing. If you can’t measure it, how can you evaluate it?” Mr. Gunning explained. “This is one of the very first times people have tried to make common sense measurable, and certainly the first time that DARPA has thrown their hat, and their leadership and funding, into the ring.”
In AI2’s approach, machine learning models are presented with a written description of a situation and several short options for what happens next – for example, what happens after a woman takes a seat at the piano? Any person knows she will likely now nervously place her fingers on the keys, but it’s a surprisingly difficult problem for machines to solve. Current models get it right only about 60 percent of the time.
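To make the task format concrete, here is a minimal toy sketch in Python. The item below is modeled loosely on the example in this article; the scorer is a deliberately naive word-overlap baseline (an illustrative assumption, not AI2's actual models or data), and it shows how surface statistics can favor a nonsensical ending:

```python
import re

def words(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score(context: str, ending: str) -> int:
    """Count distinct words a candidate ending shares with the context."""
    return len(words(context) & words(ending))

def pick_ending(context: str, endings: list[str]) -> str:
    """Pick the candidate ending with the highest overlap score."""
    return max(endings, key=lambda e: score(context, e))

context = "A woman takes a seat at the piano."
endings = [
    "She nervously places her fingers on the keys.",  # the sensible ending
    "She dives into the swimming pool.",
    "The piano walks away.",
]
# The overlap baseline favors the absurd third ending, because it shares
# the most surface words ("the", "piano") with the context.
print(pick_ending(context, endings))  # → "The piano walks away."
```

A model with common sense would need more than word matching: it must prefer the ending that is physically and socially plausible, which is exactly what these benchmarks probe.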
There are 113,000 of these questions, and this is just the first of several data sets, said Oren Etzioni, head of the Allen Institute for AI. “This particular data set is not that hard… I expect to see rapid progress. But we’re going to be rolling out at least four more by the end of the year that will be harder,” he added.