In an era when children are increasingly exposed to vast amounts of online information, much of it unverified or produced by AI and other non-human sources, understanding how trust develops becomes crucial. A recent study published in the journal Child Development, titled ‘Younger, not older, children trust an inaccurate human informant more than an inaccurate robot informant’, delves into how children aged three to five years discern and place their trust in information sources.
The research, conducted by Li Xiaoqian and Professor Yow Wei Quin from the Singapore University of Technology and Design (SUTD), explores the basis upon which young children decide whom to trust when learning new information. According to Li Xiaoqian, “Children do not just trust anyone to teach them labels; they trust those who were reliable in the past.” This selectivity, she explains, indicates an emerging understanding in young children of what constitutes a reliable source of information.
In the study, preschoolers from various Singapore institutions were divided into ‘younger’ and ‘older’ cohorts based on the median age of 4.58 years. Each child was then introduced to either a human informant or a robot informant, the latter embodied by SoftBank Robotics’ humanoid social robot NAO, who provided accurate or inaccurate labels for objects. The children’s trust was gauged by their willingness to accept new information from these informants.
The findings revealed that both younger and older children were open to learning from informants who had previously provided accurate information. However, when faced with an unreliable informant, younger children were more likely to trust a human over a robot, whereas older children displayed a general distrust towards unreliable sources, irrespective of whether they were human or robotic.
Dr. Li highlights that these results indicate a shift in selective trust strategies as children age. While younger children may rely more on identity cues, older children seem to place more emphasis on the reliability of the information itself, regardless of the source. This shift from ‘who you are’ to ‘what you know’ marks a significant developmental transition in how children assess trustworthiness.
This study is among the first to directly compare children’s trust in humans versus robots and to examine the nuances of how that trust develops. As Prof Yow notes, these insights are particularly pertinent given the increasing integration of robots and AI-driven tools in educational settings. As children’s exposure to and interactions with these technologies grow, their perceptions of these non-human sources as reliable and intelligent might shift.
The implications for educational design are significant. Prof Yow stresses the importance of considering perceived competence when designing robots and AI tools for young learners. Recognising how children’s trust evolves can guide the creation of more effective and developmentally appropriate learning environments.
Photo: A child watching a robot provide accurate or inaccurate information. Credit: SUTD