Bad timing can botch a repetitive task that requires interaction between a robot and a human, a new study reports. Human participants in the study were less efficient at completing a pick-and-place task when their robot collaborators sent a signal to participants who were about to pick up an object, but fared much better when the robot provided the same cue shortly after participants had picked up an object and were about to put it down.
These results underline the benefits of designing human-robot interactions around neuroscientific principles of human perception and motor control. Before social robots can help humans in household, industry, or healthcare settings – three big areas for robot companions and collaborators – they need to learn to interact seamlessly with humans. Social science teaches that expressing intent is crucial for interactions, but the timing of these expressions has not been well studied in human-robot interactions.
To address this, Francesca Cini and colleagues tasked 17 human participants with moving four fragile objects from one side of a table to the other as quickly as possible without breaking them. A robot partner in the same room sent vibrational cues to the participants as they worked, signaling them to finish with their current object, take a different object from the robot, and place it on the same table. When the robot provided this cue at the start of a person's grasp, it severely interfered with the task: participants' arm movements slowed, their errors increased, and their attention was diverted as they looked at the robot more often than necessary.
The robot was far less disruptive when it sent the vibration shortly after the participants picked up an object. In future work, the authors aim to experiment with different kinds of sensory cues – visual or auditory, for example – and their impact on human performance.