Feed aggregator

Integrating cultural responsiveness into educational settings is essential to the success of multilingual students. As social robots show potential to support multilingual children, it is imperative that the design of social robot embodiments and interactions be culturally responsive. This paper summarizes the current literature on educational robots in culturally diverse settings. We argue that the Culturally Localized User Experience (CLUE) framework is essential to ensuring cultural responsiveness in HRI design. We present three case studies illustrating the CLUE framework as a social robot design approach. The results of these studies suggest that co-design offers multicultural learners an accessible, nonverbal context through which to express design requirements and preferences. Furthermore, we demonstrate that key stakeholders (students, parents, and teachers) are essential to ensuring a culturally responsive robot. Finally, we reflect on our own work with culturally and linguistically diverse learners and propose three guiding principles for successfully engaging diverse learners as valuable cultural informants to ensure the future success of educational robots.

When exploring the surrounding environment with the eyes, humans and other primates need to interpret three-dimensional (3D) shapes quickly and invariantly, despite highly variable, gaze-dependent visual information. Since they have front-facing eyes, binocular disparity is a prominent cue for depth perception. Specifically, it serves as the computational substrate for two fundamental mechanisms of binocular active vision: stereopsis and binocular coordination. To this end, disparity information, which is expressed in a retinotopic reference frame, is combined along the visual cortical pathways with gaze information and transformed into a head-centric reference frame. Despite the importance of this mechanism, the underlying neural substrates remain largely unknown. In this work, we investigate the capability of the human visual system to interpret the 3D scene by exploiting disparity and gaze information. In a psychophysical experiment, human subjects were asked to judge the depth orientation of a planar surface either while fixating a target point or while freely exploring the surface. Moreover, we used the same stimuli to train a recurrent neural network to exploit the responses of a modelled population of cortical (V1) cells to interpret the 3D scene layout. The results, for both human performance and the model network, show that integrating disparity information across gaze directions is crucial for a reliable and invariant interpretation of the 3D geometry of the scene.
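The "modelled population of cortical (V1) cells" in this abstract responds to binocular disparity. A minimal sketch of how such a disparity-selective cell is usually modelled is the classic disparity-energy model (a quadrature pair of binocular simple cells with position-shifted receptive fields, squared and summed); this is a textbook illustration, not the authors' actual population model, and all parameters below are illustrative.

```python
import numpy as np

def gabor(x, center, phase, sigma=2.0, freq=0.25):
    """1-D Gabor receptive field, the building block of the energy model."""
    env = np.exp(-(x - center) ** 2 / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * freq * (x - center) + phase)

def complex_cell_response(left, right, x, center, pref_disp):
    """Disparity-energy response of one model complex cell.

    The right-eye receptive field is position-shifted by the cell's
    preferred disparity; squaring and summing a quadrature pair of
    phases gives a phase-invariant response."""
    resp = 0.0
    for phase in (0.0, np.pi / 2):
        s = (left @ gabor(x, center, phase)
             + right @ gabor(x, center + pref_disp, phase))
        resp += s ** 2
    return resp

# Stimulus: a bright dot seen by both eyes with a true disparity of 3 px.
x = np.arange(64)
true_disp = 3
left = np.zeros(64);  left[32] = 1.0
right = np.zeros(64); right[32 + true_disp] = 1.0

# Population readout: cells tiling space, one tuning curve per disparity.
disparities = np.arange(-8, 9)
tuning = [sum(complex_cell_response(left, right, x, c, d) for c in range(8, 57))
          for d in disparities]

decoded = disparities[int(np.argmax(tuning))]
print(decoded)  # the population peak recovers the stimulus disparity: 3
```

Because this response is expressed in a retinotopic frame, a readout like the paper's recurrent network would additionally need the gaze signal to map such decoded disparities into a head-centric interpretation of the scene.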

Social robots have become increasingly integrated into our daily lives in recent years. Robots can act as social agents that engage with people, such as assistants and counselors, and as partners and companions with whom people form close relationships. Furthermore, unlike smart speakers or on-screen virtual agents, robots have physicality, which allows them to observe the real environment with sensors and respond behaviorally with full-body motions. Real-time interaction is important for engaging people in dialogue and building good relationships with robots as close partners. In this article, we present a dialogue system platform developed with the aim of giving robots social skills. Within this platform, we also built a system architecture that lets the robot respond with speech and gestures, aiming to enable natural engagement that takes advantage of the robot's physicality. In addition, we consider the process we call “co-creation” important for building a good human–robot interaction system: engineers must bridge the gap between users and robots not only by building systems unilaterally but also by drawing on a range of views and opinions from real users, so that people and robots can interact more effectively and naturally. We report two experiments using the developed dialogue interaction system with a robot: one with elderly people, as the initial phase of this co-creation process, and a second with a wide range of ages, from children to adults. Through these experiments we obtained many useful insights for improving the system. We believe that repeating this co-creation process is a useful approach toward our goal of humans and robots communicating naturally as close partners, such as family and friends.



The best professional football goalkeepers in the English Premier League (we’re talking about the sport called soccer in North America) are able to save almost, but not quite, 80 percent of shots taken on goal. This is very good. But it’s not nearly as good as the 87.5 percent of shots that a 9kg quadrupedal robot can block: in its tiny goal, and versus tiny children taking tiny shots, Mini Cheetah turns out to be an excellent goalie.

What’s the point of this? Well, it’s fun! Also, this is a challenging problem, because it involves highly dynamic locomotion with object manipulation—specifically, manipulating a fast-moving ball in any direction except for into the goal. Teaching the robot to move its body dynamically while also making sure that its feet (or face) end up where they need to be in time to block the ball is basically two problems combined into one. The trick here is combining the right locomotion controller with a planner for the end-effector trajectory that can find the best way to get Mini Cheetah in front of the ball for the save—all in the less than a second that it takes for the ball to travel to the goal.

The approach to solving this was to train Mini Cheetah on a set of useful goalkeeping skills: sidestepping for intercepts near the robot and close to the ground, diving to reach the lower corners of the goal, and jumping to cover the top of the goal and the upper corners. The idea (or hope?) is that all of these skills are recoverable, and that the robot will end up making a safe landing on its feet afterwards. But as with human goalies, that’s a secondary concern behind making a successful save. A reference motion for each skill is manually programmed in, and then the system is trained up in simulation before being transferred directly to the robot. Intercepting the ball involves the system choosing which skill will get a piece of the robot to intersect the ball’s trajectory in the most stable and energy-efficient way.
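The skill-selection step described above can be sketched very roughly as: predict where the ball will cross the goal plane, then pick the cheapest skill whose reach envelope covers that point. This is a toy illustration, not the paper's method; the hypothetical `Skill` reach envelopes and costs below are hand-picked numbers, whereas the real system learns its maneuvers with reinforcement learning.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    y_range: tuple  # lateral reach on the goal line, metres from centre
    z_range: tuple  # height reach, metres
    cost: float     # rough energy/stability cost (illustrative)

# Hypothetical reach envelopes for the three trained skills, scaled to
# the 1.5 m x 0.9 m goal reported in the article.
SKILLS = [
    Skill("sidestep", (-0.40, 0.40), (0.0, 0.30), cost=1.0),
    Skill("dive",     (-0.75, 0.75), (0.0, 0.45), cost=2.0),
    Skill("jump",     (-0.75, 0.75), (0.3, 0.90), cost=3.0),
]

def predict_intercept(p0, v0, t):
    """Ballistic ball position at time t (drag ignored for simplicity)."""
    g = 9.81
    return (p0[0] + v0[0] * t,
            p0[1] + v0[1] * t,
            p0[2] + v0[2] * t - 0.5 * g * t * t)

def choose_skill(y, z):
    """Cheapest skill whose reach envelope covers the intercept point."""
    feasible = [s for s in SKILLS
                if s.y_range[0] <= y <= s.y_range[1]
                and s.z_range[0] <= z <= s.z_range[1]]
    return min(feasible, key=lambda s: s.cost) if feasible else None

# Example: a shot from 4 m away at 8 m/s, angled toward the keeper's side.
t_goal = 4.0 / 8.0  # ball reaches the goal plane in half a second
_, y, z = predict_intercept((4.0, 0.5, 0.0), (-8.0, 0.4, 3.0), t_goal)
print(choose_skill(y, z).name)  # prints "dive"
```

The half-second figure in the example is why the planner has so little time to work with: at the kicking distance and speeds reported, the whole predict-select-execute loop has to finish well inside the ball's flight time.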

The goal that Mini Cheetah is defending is 1.5m wide and 0.9m high, and the ball (size 3) is kicked from about 4m away. The ball is tracked externally. The robot’s performance here is pretty impressive for such a little robot, but we should keep it in context:

We show that our system can be used to directly transfer dynamic maneuvers and goalkeeping skills learned in simulation to a real quadrupedal robot, with an 87.5 [percent] successful interception rate of random shots in the real world. We note that human soccer goalkeepers average around a 69 [percent] save rate. Although this is against professional players shooting towards regulation sized goals, we hope this paper takes us one step closer to enabling robotic soccer players to compete with humans in the near future.

If you think about it, the sport of football is basically a bunch of discrete skills that can be chained together around the trajectory of a ball in support of a high-level goal. And the researchers say that “the proposed framework can be extended to other scenarios, such as multi-skill soccer ball kicking.” This group has already done some early work on shooting, and it’ll be fun to see what they come up with next.

And also, watch your back, English Premier League goalies: Mini Cheetah is coming for you.

Creating a Dynamic Quadrupedal Robotic Goalkeeper with Reinforcement Learning, by Xiaoyu Huang, Zhongyu Li, Yanzhen Xiang, Yiming Ni, Yufeng Chi, Yunhao Li, Lizhi Yang, Xue Bin Peng, and Koushil Sreenath from UC Berkeley's Hybrid Robotics Lab, is available on arXiv.


