Natural Environment Research Council
Grants on the Web

Details of Award

NERC Reference : NE/T014636/1

Improving Human Robot Collaboration Using Intention Signalling

Grant Award

Principal Investigator:
Professor ES Cross, University of Glasgow, School of Psychology
Science Area:
None
Overall Classification:
Unknown
ENRIs:
None
Science Topics:
Robotics & Autonomy
Personal Robotics
Human-Computer Interactions
Experimental Psychology
Abstract:
[Full referenced proposal in 'Proposal' document] Physical human-robot collaboration refers to a human and a robot working together to achieve a common goal. Collaboration between humans and robots is generally associated with the manufacturing industry, but there is also much interest from other sectors, including (but not limited to) those hoping to improve the efficiency of rescue missions, space travel, and healthcare. For example, snake-like robots have been designed to help quickly find people trapped under rubble, and robotic arms have been developed to assist during delicate surgeries. Despite large developments in the functionality of robots, studies have indicated flaws in these collaborative systems. One prominent issue is that although machines can classify objects and events in their surroundings, the underlying processes can be incorrect. For example, a machine may be able to correctly classify images of dogs vs wolves, but it could be classifying based on the snow in the background, and we would not find out until a mistake was made. When sorting images of dogs vs wolves the cost of a mistake is low, but in human-robot collaboration such a classification error could lead to lost resources (time, effort, and money), or even injury to the end-user.
To improve the efficiency of human-robot collaborations and avoid costly errors, it has been suggested that those who design and build these systems must collaborate with those who have expertise in the social sciences. Specifically, due to the complexity of the data and the intricacies of human behaviour, it has been suggested that experts in Cognitive Science, Psychology, and Robotics should work together. This proposal stems from such a combination - a collaboration between 1) the Social Brain in Action Laboratory (a UK laboratory specialising in Psychology and Cognitive Neuroscience), and 2) the Rethinking Interaction, Collaboration and Engagement Laboratory (a Canadian laboratory specialising in Computer Science and Robot Design).
Despite much effort, deriving systems which clearly explain the rationale for their decisions is proving extremely challenging and time-consuming (Gilpin, 2018; Rudin, 2019). We propose an alternative solution which, despite the possibility of incorrect classifications, could stop errors from occurring. Specifically, we argue that it is the action of the robot which leads to the error, not the decision itself. As a result, we propose that before acting, the robot should indicate to the user what it is going to do. Upon seeing this cue, the user can allow the movement to continue, or halt the machine if a mistake is about to be made. In addition to increasing the efficiency of the collaboration, we anticipate that adding 'intention signalling' will also improve trust towards the robot and comfort in the interaction. There is a wealth of literature from both the psychological sciences and robotics suggesting that teams work more effectively when the intentions of all members are known, so we are confident that adding this feature will increase the efficiency of the interaction. To investigate whether robot 'intention signalling' increases task efficiency, improves user perceptions of the robot (e.g. liking, trust), and improves user perceptions of the collaboration (e.g. enjoyment, comfort), we intend to conduct an experiment.
We propose a table-top experiment in which participants will work with a robotic arm (either signalling its intentions or acting silently) to complete a puzzle. The technology at the University of Toronto is capable of producing numerous signals, so the researchers intend to pool their knowledge and decide which signal should be used in this experiment - e.g. a gaze cue, a gesture, or a visualisation on a digital tabletop.
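As a rough illustration of the interaction pattern described above, the sketch below shows one way an 'intention signalling' loop could be wired up: the robot announces its next planned action, and the user either allows it to proceed or halts it before it runs. This is a minimal, hypothetical Python sketch; the RobotArm class, its methods, and the console prompt are illustrative assumptions only and are not part of the proposed system.

class RobotArm:
    """Hypothetical stand-in for a signalling-capable robotic arm."""

    def signal_intent(self, action):
        # In the real system this could be a gaze cue, a gesture,
        # or a visualisation on a digital tabletop.
        print("[robot] About to: " + action)

    def execute(self, action):
        print("[robot] Executing: " + action)

    def abort(self, action):
        print("[robot] Halted before: " + action)


def collaborate(arm, planned_actions):
    """Signal each intended action and let the user allow or halt it before it runs."""
    for action in planned_actions:
        arm.signal_intent(action)
        response = input("Press Enter to allow, or type 'h' to halt: ").strip().lower()
        if response == "h":
            arm.abort(action)    # the user caught a potential error before it became costly
        else:
            arm.execute(action)  # the user allowed the movement to continue


if __name__ == "__main__":
    collaborate(RobotArm(), ["pick up puzzle piece", "place piece in the frame"])

In the proposed study the cue would be non-verbal (for example a gaze cue, gesture, or tabletop visualisation) rather than a printed message; the console prompt here simply stands in for the user's allow/halt response.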
Period of Award:
1 Apr 2020 - 8 Apr 2021
Value:
£5,320
Authorised funds only
NERC Reference:
NE/T014636/1
Grant Stage:
Completed
Scheme:
NC&C NR1
Grant Status:
Closed

This grant award has a total value of £5,320  



FDAB - Financial Details (Award breakdown by headings)

DI - Other Costs: £2,930
Exception - Other Costs: £1,898
DI - T&S: £493
