Scientists help robots understand humans with board game idea
Information scientists at the U.S. Army Research Laboratory and the University of Michigan have borrowed from the popular game "20 Questions" to take an important step toward helping robots maintain continuous and purposeful conversation with humans. They have developed an optimal strategy for asking a series of yes/no questions that rapidly arrives at the best answer.
In the game, a player wishes to estimate an unknown value on a sliding scale by asking a series of questions whose answers are binary (yes or no). The scientists say their findings could lead to new techniques for machines to question other machines, or for machines and humans to query each other.
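In the noiseless case, this estimation game reduces to bisection search: each yes/no question halves the interval that could contain the unknown value. A minimal sketch (the `answer` callback, standing in for the human respondent, is an assumption for illustration, not part of the published work):

```python
def bisection_query(answer, lo=0.0, hi=1.0, num_questions=20):
    """Estimate an unknown value in (lo, hi] with yes/no questions.

    `answer(threshold)` returns True iff the hidden value exceeds
    `threshold` -- the role a human respondent would play. Each
    question halves the remaining interval, so after n questions the
    estimate is within (hi - lo) / 2**(n + 1) of the true value.
    """
    for _ in range(num_questions):
        mid = (lo + hi) / 2.0
        if answer(mid):   # "Is x greater than mid?" -> yes
            lo = mid
        else:             # -> no
            hi = mid
    return (lo + hi) / 2.0

# Hidden value known only to the "answerer".
x = 0.3721
estimate = bisection_query(lambda t: x > t)
```

Twenty questions over a unit interval pin the value down to within about one part in a million, which is why the game format makes such an efficient querying protocol.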
ARL senior scientist Dr. Brian Sadler teamed with University of Michigan researchers Hye Won Chung, Lizhong Zheng, and Professor Alfred O. Hero to conduct the study, which appears in the February 2018 issue of the IEEE Transactions on Information Theory.
The work is part of a larger study to develop methods for machines and humans to interact.
"It is well known that artificial intelligence systems, such as those found nowadays on every smartphone, can answer at least some questions," Sadler said. "They can even win a game like Jeopardy, focusing on only one question at a time. A real, purposeful conversation, especially in complicated military environments, is different. It requires the AI system to understand a whole sequence of questions and answers, and to handle every question or answer with consideration of what has been asked or answered before. Such computer algorithms do not yet exist, and the scientific theory for building such algorithms is not yet developed."
Sadler said it is a significant challenge to find ways for a machine to query a human that efficiently take advantage of the human's expertise.
"Humans are particularly good at accurately answering yes/no questions," he said. He explained that it is important to minimize the number of queries, while maximizing the value of each one, so as not to waste the human's time or endanger a soldier who has duties to perform in a dangerous environment.
The 20 Questions game is a classic pastime in which players may only ask questions whose response is yes or no while attempting to identify an object. The sequence of questions is designed so that the player can rapidly figure out the answer: "Is it bigger than a breadbox?" "Is it alive?" and so on. In the Army problem, however, it is possible that a question may be answered in error.
"Unlike the actual 20 Questions game, we admit the possibility that a question might be answered in error," he said. "We call this the noisy 20 questions game."
ARL and University of Michigan researchers developed a method to automatically formulate a sequence of questions that narrows down the error and answers the question, "What is the value of x?" The researchers showed that their querying policy achieves the minimum mean-square error between the best guess and the unknown true value of x.
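The published policy is more sophisticated than what fits here; as a simpler illustration of querying when answers can be wrong, below is a sketch of classic probabilistic bisection (Horstein's scheme), a well-known baseline for the noisy 20 questions problem. It is not the authors' algorithm, and the error rate `eps` and the truthful demo answerer are assumptions for illustration:

```python
def noisy_bisection(answer, num_questions=60, bins=4096, eps=0.1):
    """Probabilistic bisection: estimate x in [0, 1) when each yes/no
    answer may be wrong with probability eps.

    Maintains a posterior over discretized locations of x; each round
    asks whether x lies above the posterior median, then reweights the
    two sides by how consistent they are with the (possibly noisy)
    answer, rather than discarding either side outright.
    """
    post = [1.0 / bins] * bins
    width = 1.0 / bins
    for _ in range(num_questions):
        # Query threshold: the posterior median.
        acc, m = 0.0, 0
        while acc < 0.5 and m < bins - 1:
            acc += post[m]
            m += 1
        yes = answer(m * width)  # "Is x greater than this threshold?"
        for i in range(bins):
            consistent = (i >= m) == yes
            post[i] *= (1.0 - eps) if consistent else eps
        total = sum(post)
        post = [p / total for p in post]
    # Report the center of the maximum a posteriori bin.
    best = max(range(bins), key=lambda i: post[i])
    return (best + 0.5) * width

# Demo: this answerer happens to reply truthfully, but the estimator
# still hedges against a 10% chance of error on every answer.
x = 0.62
est = noisy_bisection(lambda t: x > t)
```

Because no single answer is fully trusted, the posterior recovers from occasional wrong replies at the cost of asking more questions than noiseless bisection would need, which is exactly the trade-off the querying policy must optimize.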
Moving forward, as part of research into artificial intelligence and human-machine teaming, ARL will apply methods such as the 20 Questions paradigm to Soldier-robot teaming.
More information: H. W. Chung, B. M. Sadler, L. Zheng, A. O. Hero, "Unequal error protection querying policies for the noisy 20 questions problem," IEEE Transactions on Information Theory, vol. 64, no. 2, pp. 1105-1131, February 2018.
Provided by U.S. Army Research Laboratory