Children's trust in robots
Inspired by my attendance at a conference in Toruń, I talk a bit about the research my collaborators and I have done in recent years on children's trust in robots.
AK
8/22/2025 · 3 min read


This year is my well-earned sabbatical, which means attending conferences and going on research stays! One of these was the Perspectives on Child Development: Social Learning and Interaction conference at Nicolaus Copernicus University in Toruń, Poland, organized by Arkadiusz Gut (on the right) and Marta Białecka (on the left) in June. I was excited that Paul Harris (second from the right) gave a keynote speech, as one of our studies used a paradigm he developed. The full reference for our paper, "When is it right for a robot to be wrong? Children trust a robot over a human in a selective trust task," is given below.
Summary
Our study explored how children aged 3-6 years perceive and trust social robots compared to humans, particularly when faced with conflicting information. The research employed an online selective trust task where children observed a human and a robot labeling familiar and novel objects, with agents exhibiting varying levels of reliability. Findings indicate that children generally prefer to endorse labels from reliable agents, whether human or robot, challenging the initial hypothesis of a human bias. Interestingly, when both agents were equally reliable, children displayed a robot bias, favoring the robot for social interactions and information seeking. The study also suggests that children differentiate between human and robot errors, often perceiving human mistakes as intentional but not robot mistakes, which may not negatively impact their social evaluations of robots. These results highlight the complex interplay between reliability, social desirability, and perceived agency in children's trust in robots, suggesting a unique potential for robots in educational settings.
Understanding the Selective Trust Task
The selective trust paradigm, also known as epistemic trust, conflicting sources, or learning from testimony, was developed to address how children choose whom to learn from when faced with conflicting testimonies. Children are naturally primed to learn from other humans, and their learning relies not only on information accuracy but also on social cues surrounding that information. However, children cannot simply accept all information and must learn to discriminate between different sources, especially when information conflicts.


Stower, R., Kappas, A., & Sommer, K. (2024). When is it right for a robot to be wrong? Children trust a robot over a human in a selective trust task. Computers in Human Behavior, 157, 108229. https://doi.org/10.1016/j.chb.2024.108229
Stower, R., Calvo-Barajas, N., Castellano, G., et al. (2021). A meta-analysis on children's trust in social robots. International Journal of Social Robotics, 13, 1979–2001. https://doi.org/10.1007/s12369-020-00736-8
Finally, here is the abstract of our presentation at the conference in Toruń:
Why Do Children Trust Robots? Rethinking Minds, Machines, and Playful Sociality
Arvid Kappas and Rebecca Stower
Understanding when and why children trust artificial agents is a question with both theoretical and practical significance. For developmental scientists, robots and virtual agents offer unique experimental advantages: They allow greater control over behavior than human interactants, making them powerful tools for probing how children represent others. For designers and educators, the same question carries pragmatic weight—how do we build systems that children perceive as trustworthy, especially in learning contexts?
In this talk, we survey existing research and present findings suggesting that robots may not simply be treated as human stand-ins. Children might engage with them differently—not just because of how they act, but because robots are often perceived as cooler, more fun, or more interesting than people. This raises both opportunities and challenges. On one hand, these agents can be designed to maximize engagement and trust; on the other, they might not reflect the same mental models children use for human others.
We invite discussion on what this means for using robots as tools in cognitive research and as partners in educational environments. Are robots truly proxies for people—or something fundamentally different?