Fatih Sivridag and Nivedita Mani

Children’s word learning from socially contingent robots under active vs. passive learning conditions

HRI '24: Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction

Language is learned through social interaction, in which gaze plays a special role because it can be used to guide attention and easily establish reference to objects. From very early ages, children are adept at using gaze to map labels onto referenced objects. To build language-teaching robots, we need to understand how these functions of gaze can be implemented most efficiently. To this end, we let children interact with a social robot to learn the labels of several objects in a naturalistic setting. In some trials the child guided the gaze and chose the object to be learned while the robot followed; in others the roles were reversed, and the robot guided the gaze and decided on the object to be learned. We measured how much children actually followed the robot’s gaze and how many words they learned in these two conditions, referred to as the active and passive learning conditions, respectively. The results indicate that although children followed the robot’s gaze and learned words successfully, there were no meaningful differences in word learning between the two conditions. Neither the rate of gaze following nor the time spent looking at the robot influenced word learning. The implications of these results for the use of robots in educational settings are discussed.