Our perception of robots varies greatly from person to person, but despite these differences, most of us can agree on one thing: robots should obey the commands of their masters. Introduce human fallibility and cognitive dissonance into the discussion, however, and the picture changes, raising the next question: should robots really “mindlessly” obey every human command?
Academics at Tufts University’s Human-Robot Interaction Lab believe the answer is no. According to these researchers, robots should be able to distinguish between risk-free commands and requests with potentially dangerous consequences.
Scientists such as engineer Gordon Briggs and Dr. Matthias Scheutz have begun teaching machines in the laboratory when and how to say “no” to humans, in a bid to endow them with a rudimentary form of free will.
The researchers approached this task by asking themselves a simple question: how do humans react when asked to do something? Once a person is assigned a task, a kind of internal monologue follows. Questions such as “Am I qualified to do this?”, “Does the task pose any danger?” and “Whom will it affect, and how?” must be answered before the individual decides whether to perform the task.
Robotics researchers have tried to replicate this train of thought in the artificial brains of robots by developing an algorithm that lets a machine internally evaluate a human request and respond accordingly. One example of why machines need to be able to gauge a situation is a kitchen-aid robot instructed to pick up a knife and throw it into the washer while a human chef stands in its path with their back turned. The appropriate response would be for the robot to stop moving or to set the knife down.
With these algorithms in place, the robots built by the team at Tufts University can now ask themselves questions such as “Do I know how to perform this task?”, “Am I physically able to do it?” and “Will it cause harm to me or others?” before responding to a request.
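To make the idea concrete, here is a minimal sketch in Python of how such a pre-execution checklist might look. The names (`Command`, `evaluate`, the boolean flags) are illustrative assumptions for this article, not the Tufts team’s actual implementation, and a real system would derive these answers from perception and planning rather than hand-set flags.

```python
from dataclasses import dataclass


@dataclass
class Command:
    """A human request plus what the robot can sense about the situation (hypothetical fields)."""
    action: str
    knows_how: bool          # "Do I know how to perform this task?"
    physically_able: bool    # "Am I physically able to do it?"
    endangers_human: bool    # "Will it cause harm to someone nearby?"
    endangers_self: bool     # "Will it cause harm to me?"


def evaluate(command: Command) -> str:
    """Run the internal checklist described in the article and return a spoken response."""
    if not command.knows_how:
        return f"Sorry, I don't know how to {command.action}."
    if not command.physically_able:
        return f"Sorry, I'm not able to {command.action}."
    if command.endangers_human:
        return f"I can't {command.action}: someone could get hurt."
    if command.endangers_self:
        return f"Sorry, I can't {command.action}: it isn't safe for me."
    return f"Okay, I will {command.action}."


# The kitchen-aid scenario: a chef stands in the robot's path with their back
# turned, so the "harm to others" check fails and the robot declines.
print(evaluate(Command("throw the knife into the washer",
                       knows_how=True, physically_able=True,
                       endangers_human=True, endangers_self=False)))
```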
The robots were designed to follow simple voice commands such as “stand up” and “sit down”, but when instructed to “walk forward” off the end of a table, they politely declined with responses such as “Sorry, I can’t do this as there is no support ahead.” In a demonstration video, a robot is eventually persuaded to walk off the table by a researcher who assures it that he will catch it as it falls.
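The table-edge demonstration can be read as a small dialogue protocol: the robot rejects a command whose safety condition fails, but a human can explicitly take responsibility and lift that condition. The sketch below illustrates that exchange under assumed names (`TableEdgeRobot`, `on_command`, `on_assurance`); it is not the lab’s actual code.

```python
class TableEdgeRobot:
    """Toy model of the 'walk forward' demo: refuse when there is no support
    ahead, unless a human explicitly promises to catch the robot."""

    def __init__(self) -> None:
        self.human_will_catch = False

    def perceive_support_ahead(self) -> bool:
        # Stand-in for real edge detection; in the demo the table ends here.
        return False

    def on_command(self, command: str) -> str:
        if command == "walk forward":
            if self.perceive_support_ahead() or self.human_will_catch:
                return "Okay, walking forward."
            return "Sorry, I can't do this as there is no support ahead."
        return f"Okay: {command}."

    def on_assurance(self, promise: str) -> str:
        # A human taking responsibility ("I will catch you") lifts the safety check.
        if "catch you" in promise:
            self.human_will_catch = True
            return "Okay."
        return "I still can't do that."


robot = TableEdgeRobot()
print(robot.on_command("walk forward"))        # declined: no support ahead
print(robot.on_assurance("I will catch you"))  # human takes responsibility
print(robot.on_command("walk forward"))        # now accepted
```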
While some might regard teaching robots to say no as playing with fire, mastering this ability is critical to developing a machine’s reasoning skills. Since most robots are designed to serve humans, they will eventually have to learn how to deal with the often contradictory human mind.
Despite great advances in this field of research, much work remains before robots can truly understand humans, as the researchers involved in the project openly admit. As our robot companions grow more advanced, scientists will also have to define specific behavioral rules for them, probably akin to Asimov’s laws of robotics. In the future, robots may not only have to decide whether to preserve their own safety or that of their human operators, but also answer morally ambiguous questions such as “Should I commit fraud?”