Feed aggregator

Telerobotics aims to transfer human manipulation skills and dexterity over an arbitrary distance and at an arbitrary scale to a remote workplace. A telerobotic system that is transparent enables a natural and intuitive interaction. We postulate that embodiment (with three sub-components: sense of ownership, agency, and self-location) of the robotic system leads to optimal perceptual transparency and increases task performance. However, this has not yet been investigated directly. We reason along four premises and present findings from the literature that substantiate each of them: (1) the brain can embody non-bodily objects (e.g., robotic hands), (2) embodiment can be elicited with mediated sensorimotor interaction, (3) embodiment is robust against inconsistencies between the robotic system and the operator's body, and (4) embodiment positively correlates to dexterous task performance. We use the predictive encoding theory as a framework to interpret and discuss the results reported in the literature. Numerous previous studies have shown that it is possible to induce embodiment over a wide range of virtual and real extracorporeal objects (including artificial limbs, avatars, and android robots) through mediated sensorimotor interaction. Also, embodiment can occur for non-human morphologies including for elongated arms and a tail. In accordance with the predictive encoding theory, none of the sensory modalities is critical in establishing ownership, and discrepancies in multisensory signals do not necessarily lead to loss of embodiment. However, large discrepancies in terms of multisensory synchrony or visual likeness can prohibit embodiment from occurring. The literature provides less extensive support for the link between embodiment and (dexterous) task performance. However, data gathered with prosthetic hands do indicate a positive correlation. We conclude that all four premises are supported by direct or indirect evidence in the literature, suggesting that embodiment of a remote manipulator may improve dexterous performance in telerobotics. This warrants further implementation testing of embodiment in telerobotics. We formulate a first set of guidelines to apply embodiment in telerobotics and identify some important research topics.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

Since Honda decided to stop further development of the beloved robot Asimo, attention has turned to other companies building advanced humanoids. One of them is UBTECH, which appears to be making steady progress with its Walker robot. At CES early this year, the company showed Walker pushing a cart, pouring a drink, standing on one foot, and even bending its body backward like a yogi.

We had such an amazing time at CES 2020 showing you the major upgrades we’ve made to Walker. With improved flexibility, stability, precision, and speed, Walker has come a long way since its initial debut at CES a few years back.

Walker is an intelligent Humanoid Service Robot designed with outstanding hardware, excellent motion ability and AI interactive performance – the most advanced robot UBTECH has ever created.

But UBTECH wasn’t done. It also demoed its service robot Cruzr and indoor inspection robot AIMBOT.

Cruzr, UBTECH’s enterprise service robot, was on full display at CES 2020!

Cruzr is a cloud-based intelligent humanoid robot that provides a new generation of service for a variety of industrial applications. Cruzr helps enhance and personalize the guest experience in consumer facing establishments such as retail, financial institutions, and hospitality.

At CES 2020, we showcased AIMBOT, an autonomous indoor monitoring robot. AIMBOT is used for intelligent and accurate indoor inspection, efficient inventory management, visitor verification, safety-hazard prevention, and more.

[ UBTECH ]

Generating complex movements in redundant robots like humanoids is usually done by means of multi-task controllers based on quadratic programming, where a multitude of tasks is organized according to strict or soft priorities.

Time-consuming tuning and expertise are required to choose suitable task priorities, and to optimize their gains.

Here, we automatically learn the controller configuration (soft and strict task priorities and convergence gains), looking for solutions that track a variety of desired task trajectories efficiently while preserving the robot’s balance.

We use multi-objective optimization to compare and choose among Pareto-optimal solutions that represent a trade-off of performance and robustness and can be transferred onto the real robot.

We experimentally validate our method by learning a control configuration for the iCub humanoid, to perform different whole-body tasks, such as picking up objects, reaching and opening doors.
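
For readers unfamiliar with soft task priorities, here is a minimal, hypothetical sketch (not the authors’ controller) of how soft-priority weights and per-task gains combine in a single differential-IK step; with no inequality constraints, the underlying QP reduces to the weighted least-squares problem below. The function name, weights, and gains are illustrative stand-ins for the parameters the paper proposes to learn automatically.

```python
# Minimal sketch (not the paper's actual controller): a soft-priority
# multi-task differential-IK step. Each task i adds a weighted term
# w_i * || J_i qd - k_i * e_i ||^2, so the soft priorities w_i and the
# convergence gains k_i are exactly the knobs the authors propose to
# learn with multi-objective optimization rather than tune by hand.
import numpy as np

def soft_priority_step(jacobians, errors, weights, gains, damping=1e-6):
    """Return joint velocities minimizing the weighted sum of task errors."""
    n_dof = jacobians[0].shape[1]
    A = damping * np.eye(n_dof)           # small regularization keeps A invertible
    b = np.zeros(n_dof)
    for J, e, w, k in zip(jacobians, errors, weights, gains):
        A += w * J.T @ J                  # accumulate weighted normal equations
        b += w * J.T @ (k * e)            # desired task velocity = gain * task error
    return np.linalg.solve(A, b)

# Toy usage: two 3-DoF tasks on a 7-DoF arm; the first task has higher soft priority.
rng = np.random.default_rng(0)
J1, J2 = rng.standard_normal((3, 7)), rng.standard_normal((3, 7))
e1, e2 = rng.standard_normal(3), rng.standard_normal(3)
print(soft_priority_step([J1, J2], [e1, e2], weights=[1.0, 0.2], gains=[2.0, 1.0]))
```

Raising one task’s weight pulls the solution toward that task at the expense of the others, which is the kind of performance-versus-robustness trade-off the Pareto search explores.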

[ Larsen/Inria ]

This week, roboticist and comedian Naomi Fitter wrote a fantastic guest post on her experiences with robot comedy. Here’s one of the performances she’s created, with her Nao humanoid talking and singing with comedian Sarah Hagen.

Sketch comedy duo act including the talented human/comedian Sarah Hagen and the Oregon State University SHARE Lab’s illustrious NAO robot.

[ Naomi Fitter ]

This work is part of Tim Hojnik’s PhD project, a partnership between CSIRO’s Data61 Robotics and Autonomous Systems Group and the Queensland University of Technology.

[ CSIRO ]

Who’s ready for Super Bowl LIV?! The Gripper Guys are.

[ Soft Robotics ]

Researchers at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany, have designed and fabricated an untethered microrobot that can slip along either a flat or curved surface in a liquid when exposed to ultrasound waves. Its propulsion force is two to three orders of magnitude stronger than the propulsion force of natural microorganisms such as bacteria or algae. Additionally, it can transport cargo while swimming. The acoustically propelled robot hence has significant potential to revolutionize the future minimally invasive treatment of patients.

[ Max Planck Institute for Intelligent Systems ]

Did you know Kuka has a giant linear robot? Now you do!

The three-axis linear robot KR 80L has Cartesian axes which are operated via the robot controller. The development of the new KR 80L benefited greatly from KUKA experience gained from many handling applications and our expertise as one of the leading suppliers of intelligent automation solutions.

The modular design allows workspaces from 0.75 m³ up to 225 m³ to be implemented, making the KUKA linear robot a safe investment for your automation. Minimal interference contours below the robot mean that it is ideally suited for linking work processes by carrying out loading and unloading, palletizing, handling or transfer tasks, for example. The use of proven, series-produced robotic drive components ensures utmost performance and reliability.

[ Kuka ]

Apparently Promobot brought one of its humanoids to New York City’s Bryant Park to help screen people for the coronavirus. NYC officers promptly ejected the robot from the park for lacking a permit, but not before a little robot dance party. 

[ Promobot ] via [ NY Post ]

LOVOT, which we’ve featured on our Robot Gift Guide, is very cute—at least when it has its furry skin on.

Unfortunately we don’t speak Japanese to understand the full presentation, but we applaud the fact that the company is willing to discuss—and show—what’s inside the robot. Given the high rate of consumer robot failures, more sharing and transparency could really help the industry.

[ Robot Start ]

Drones have the potential to change the African continent by revolutionizing the way deliveries are made, blood samples are processed, farmers grow their crops and more. To tackle the many challenges faced by Africa, the World Bank and partners convened the African Drone Forum in Kigali, Rwanda, from February 5-7, 2020. To welcome the audience of engineers, scientists, entrepreneurs, development experts and regulators, the World Bank and ADF team created this video.

[ African Drone Forum ]

We continue to scale our fully driverless experience -- with no one behind the wheel -- for our early riders in Metro Phoenix. We invited Arizona football legend Larry Fitzgerald to take a ride with our Waymo Driver. Watch all of Larry’s reactions in this video of his full, unedited ride.

[ Waymo ]

The humanoid Robot ARMAR-6 grasps unknown objects in a cluttered box autonomously.

[ H2T KIT ]

Quanser R&D engineers have been testing different bumper designs and materials to protect the QCar in collisions. This is a scale-speed equivalent of 120 km/h!

[ Quanser ]

Drone sales have exploded in the past few years, filling the air with millions of new aircraft. Simple modifications to these drones by criminals and terrorists have left people, privacy and physical and intellectual property totally exposed.

Fortem Technologies innovates to stay ahead of the threat, keeping pace with escalating drone threats worldwide.

With more than 3,650 captures at various attack vectors and speeds, DroneHunter is the leading, world-class interceptor drone.

[ Fortem Technologies ] via [ Engadget ]

This is an interesting application of collaborative robots at this car bumper manufacturer, where they mounted industrial cameras on FANUC cobots to perform visual quality-control checks. These visual inspections happen throughout the assembly line, with the robots operating right next to the human workers.

Discovering the many benefits a FANUC collaborative robot solution can provide.

Flex-N-Gate, a supplier of bumpers, exterior trim, lighting, chassis assemblies and other automotive products, uses inspection systems at their Ventra Ionia, Michigan plant to ensure product quality.

To help improve these processes, reduce costs and save floor space, Flex-N-Gate turned to FANUC for a collaborative robot solution, leveraging FANUC America’s 24/7/365 service network to support their cobot systems for a completely successful integration.

[ FANUC ]

In this video we present results on autonomous subterranean exploration inside an abandoned underground mine using the ANYmal legged robot. ANYmal is utilizing the proposed Graph-based Exploration Path Planner which ensures the efficient exploration of the complex underground environment, while simultaneously avoiding obstacles and respecting traversability constraints.

The designed planner first operates in its local exploration mode, which guides the robot to explore along a mine corridor. When the system reaches a local dead end, the global planning layer of the method is engaged and provides a new path to guide the robot towards a selected frontier of the explored space. The robot is thus repositioned to this frontier, and upon arrival the local planning mode is engaged again so the exploration mission can continue. Finally, given a time budget for the mission, the global planner identifies the point at which the robot must be commanded to return home and provides an associated reference path. The presented mission is completely autonomous.
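
As a rough illustration of the switching behavior described above (and not the actual planner code or API), the skeleton below captures the logic: explore locally until a dead end, hop globally to a frontier, and head home once the remaining time budget only just covers the return trip. The Planner and Robot classes are dummy stand-ins invented for this sketch.

```python
# Hedged sketch of the exploration state machine described in the text.
# DummyPlanner/DummyRobot are made-up stand-ins, not the real interface.
import time

class DummyPlanner:
    def __init__(self):
        self.frontiers = ["frontier_B", "frontier_A"]
        self.steps_left_in_corridor = 3
    def local_exploration_path(self, pose):
        if self.steps_left_in_corridor == 0:
            return None                          # local dead end reached
        self.steps_left_in_corridor -= 1
        return "next_corridor_segment"
    def select_global_frontier(self):
        if not self.frontiers:
            return None
        self.steps_left_in_corridor = 3          # a fresh corridor to explore
        return self.frontiers.pop(0)
    def global_path_to(self, frontier):
        return "global_path_to_" + frontier
    def path_home(self, pose):
        return "path_home"
    def estimated_return_time(self, pose):
        return 1.0                               # pretend estimate, in seconds

class DummyRobot:
    pose = "start"
    def follow(self, path):
        print("following:", path)
        self.pose = path

def run_exploration(planner, robot, time_budget_s):
    start = time.time()
    while True:
        remaining = time_budget_s - (time.time() - start)
        if remaining <= planner.estimated_return_time(robot.pose):
            robot.follow(planner.path_home(robot.pose))   # time is up: return home
            return
        path = planner.local_exploration_path(robot.pose)
        if path is not None:
            robot.follow(path)                            # keep exploring the corridor
            continue
        frontier = planner.select_global_frontier()       # dead end: consult global layer
        if frontier is None:
            robot.follow(planner.path_home(robot.pose))   # nothing left to explore
            return
        robot.follow(planner.global_path_to(frontier))    # reposition to the frontier

run_exploration(DummyPlanner(), DummyRobot(), time_budget_s=120.0)
```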

[ Robotic Systems Lab ]

Do all Roborock vacuums rock? Vacuum vlog Vacuum Wars did some extensive vacuuming tests to find out.

After testing and reviewing all of the robot vacuums Roborock has released so far, I think it’s time for me to do a big comparison video showing the differences between their various models, as well as choosing my favorite Roborock models in three different categories.

[ Vacuum Wars ]

Highlights from Lex Fridman’s interview with Jim Keller on Tesla, Elon Musk, Autopilot, and more.

Jim Keller is a legendary microprocessor engineer, having worked at AMD, Apple, Tesla, and now Intel. He’s known for his work on the AMD K7, K8, K12, and Zen microarchitectures and the Apple A4 and A5 processors, and for co-authoring the specifications for the x86-64 instruction set and the HyperTransport interconnect.

[ Lex Fridman ]

Take a trip down the microworld as roboticists Paul McEuen and Marc Miskin explain how they design and mass-produce microrobots the size of a single cell, powered by atomically thin legs -- and show how these machines could one day be "piloted" to battle crop diseases or study your brain at the level of individual neurons.

[ TED Talks ]

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

In my mythical free time outside of professorhood, I’m a stand-up comedian and improviser. As a comedian, I’ve often found myself wishing I could banter with modern commercial AI assistants. They don’t have enough comedic skills for my taste! This longing for cheeky AI eventually led me to study autonomous robot comedians, and to teach my own robot how to perform stand-up.

I’ve been fascinated with the relationship between comedy and AI even before I started doing comedy on my own in 2013. When I moved to Los Angeles in 2017 as a postdoctoral scholar for the USC Interaction Lab, I began performing in roughly two booked comedy shows per week, and I found myself with too good of an opportunity for putting a robot onstage to pass up. 

Programming a NAO robot for stand-up comedy is complicated. Some joke concepts came easily, but most were challenging to evoke. It can be tricky to write original comedy for a robot since robots have been part of television and cinema for quite some time. Despite this legacy, we wanted to come up with a perspective for the robot that was fresh and not derivative.

Another challenge was that in my human stand-up comedy, I write almost entirely from real-life experience, and I’ve never been a robot! I tried different thought exercises—imagining myself to be a robot with different annoyances, likes, dislikes, and “life” experiences. My improv comedy training with the Upright Citizens Brigade started to come in handy, as I could play-act being a robot, map classic (and even somewhat overdone) human jokes to fit robot experiences, and imagine things like, “What is a robot family?”, “What is a robot relationship like?”, and “What are drugs for a robot?”

Text-to-speech researchers would probably be astounded by the mounds of SSML that we wrote to get the robot to clearly pronounce phrases that humans have almost certainly never said, such as “I want to backpropagate all over your hidden layers”

As a robotics professor, you never quite know how thousands of dollars of improv classes will come into play in your professional life until they suddenly do! Along the way, I sought inspiration and premises from my comedy colleagues (especially fellow computer scientist/comedian Ajitesh Srivastava), although (at least for now) the robot’s final material is all written by myself and my husband, John. Early in our writing process, we made the awkward misstep of naming the robot Jon as well, and now when people ask how John’s doing, sometimes I don’t know which entity they’re talking about.

Searching for a voice for Jon was also a bit of a puzzle. We found the built-in NAO voice to be too childlike, and many modern text-to-speech voices to be too human-like for the character we were aiming to create. We sought an alternative that was distinctly robotic while still comprehensible, settling on Amazon Polly. Text-to-speech researchers would probably be astounded by the mounds of SSML (Speech Synthesis Markup Language) that we wrote to get the robot to clearly pronounce phrases that humans (or at least humans in the training dataset) have almost certainly never said, such as “I want to backpropagate all over your hidden layers” or “My only solace is re-reading Sheryl Sand-bot’s hit book, ‘Dial In.’” For now, we hand-engineered the SSML and also hand-selected robot movements to layer over each joke. Some efforts have been made by the robotics and NLP communities to automate these types of processes, but I don’t know of any foolproof solution—yet! 

During the first two performances of the robot, I encountered several cases in which the audience could not clearly hear the setup of a joke when they laughed long enough at the previous joke. This lapse in audibility is a big impediment to “getting the joke.” One way to address this problem is to lengthen the pause after each joke:

As shown in the video, this option is workable, but falls short of deftly-timed robot comedy. Luckily, my humble studio apartment contained a full battery of background noises and two expert human laughers. My husband and I modulated all aspects of apartment background noise, cued up laugh tracks, and laughed enthusiastically in search of a sensing strategy that would let the robot pause when it heard uproarious laughter, and then carry on once the crowd calmed down. The resulting audio processing tactic involved counting the number of sounds in each ~0.2-second period after the joke and watching for a moving average-filtered version of this signal to drop below an experimentally-determined threshold.
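
A rough sketch of that pause logic, assuming per-window sound-event counts are already available, might look like the following; the window length, filter width, and threshold here are invented for illustration, not the values tuned in the apartment experiments.

```python
# Hedged sketch of the pause strategy described above: count sound events
# in ~0.2 s windows after a joke, smooth the counts with a moving average,
# and resume speaking once the smoothed signal drops below a threshold.
import numpy as np

def wait_for_laughter_to_fade(counts_per_window, filter_len=5, threshold=2.0):
    """Return the window index after which the robot can resume speaking."""
    kernel = np.ones(filter_len) / filter_len
    smoothed = np.convolve(counts_per_window, kernel, mode="valid")
    below = np.flatnonzero(smoothed < threshold)
    # Offset aligns each smoothed value with the last window in its kernel.
    return int(below[0]) + filter_len - 1 if below.size else len(counts_per_window)

# Simulated post-joke response: a burst of laughter that trails off.
counts = np.array([9, 12, 11, 8, 6, 5, 3, 2, 1, 0, 0, 0])
print("resume after window", wait_for_laughter_to_fade(counts))
```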

Human comics not only vie for their jokes to be heard over audience laughter, but they also read the room and adapt to joke success and failure. For maximal entertainment, we wanted our robot to be able to do this, too. By summing the laughter signal described above over the most intense 1 second of the post-joke response, we were able to obtain rudimentary estimates of joke success based on thresholding and filtering the audio signal. This experimental strategy was workable but not perfect; its joke ratings matched labels from a human rater about 60 percent of the time and were judged as different but acceptable an additional 15 percent of the time. The robot used its joke success judgements to decide between possible celebratory or reconciliatory follow-on jokes. Even when the strategy was failing, the robot produced behavior that seemed genuinely sarcastic, which the audience loved.
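
And a similarly hedged sketch of the joke-success rating: take the most intense one second (five 0.2-second windows) of the post-joke laughter counts, threshold it, and branch to a celebratory or reconciliatory follow-on. The threshold and the follow-on lines below are invented for illustration, not Jon’s actual material.

```python
# Hedged sketch: rate a joke by the most intense 1 s of post-joke laughter
# (five 0.2 s windows) and choose a follow-on line accordingly.
import numpy as np

def rate_joke(counts_per_window, windows_per_second=5, good_threshold=30):
    window_sums = np.convolve(counts_per_window, np.ones(windows_per_second), mode="valid")
    peak = window_sums.max() if window_sums.size else 0.0   # loudest 1 s of response
    return "hit" if peak >= good_threshold else "miss"

def follow_on(rating):
    return ("I know, I'm killing it. For a robot, that's just an expression."
            if rating == "hit"
            else "Tough crowd. Recalibrating my humor module...")

counts = np.array([9, 12, 11, 8, 6, 5, 3, 2, 1, 0])
print(follow_on(rate_joke(counts)))
```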

By this point, we were fairly sure that robot timing and adaptiveness of spoken sequences were important to comedic effectiveness, but we didn’t have any actual empirical evidence of this. As I stepped into my current role as an assistant professor at Oregon State University, it was the perfect time to design an experiment and begin gathering data! We recorded audio from 32 performances of Jon the Robot at comedy venues in Corvallis and Los Angeles, and began to crunch the numbers.

Our results showed that a robot with good timing was significantly funnier–a good confirmation of what the comedy community already expected. Adaptivity actually didn’t make the robot funnier over the course of a full performance, but it did improve the audience’s initial response to jokes about 80 percent of the time.

While this research was certainly fun to conduct, there were also some challenges and missteps along the way. One (half serious/half silly) problem was that we designed the robot to have a male voice, and as soon as I brought it to the heavily male-dominated local comedy scene, the robot quickly began to get more offers of stage time than I did. This felt like a careless oversight on my part—my own male-voiced robot was taking away my stage time! (Or sometimes I gave it up to Jon the Robot, for the sake of data.)

Some individual crowd members mildly heckled the robot. One audience member angrily left the performance, grumbling at the robot to “write your own jokes.” 

All of the robot’s audiences were very receptive, but some individual crowd members mildly heckled the robot. Because of our carefully-crafted writing, most of these hecklers were eventually won over by the robot’s active evaluation of the crowd, but a few weren’t. One audience member angrily left the performance, grumbling directly at the robot to “write your own jokes.”  While all of Jon’s jokes are original material, the robot doesn’t know how to generate its own comedy—at least, not that we’re ready to tell you about yet.

Writing comedy material for robots, especially as a roboticist myself, also can feel like a bit of a minefield. It’s easy to get people to laugh at quips about robot takeovers, and robot jokes that are R-rated are also reliably funny, if not particularly creative. Getting the attendees of a performance to learn something about robotics while also enjoying themselves is of great interest to me as a robotics professor, but comedy shows can lose momentum if they turn too instructional. My current approach to writing material for shows includes a bit of all of the above concepts—in the end, simply getting people to genuinely laugh is a great triumph. 

Hopefully by now you’re excited about robot comedy! If so, you’re in luck– Jon the Robot performs quarterly in Corvallis, Ore., and is going on tour, starting with the ACM/IEEE International Conference on Human-Robot Interaction this year in Cambridge, U.K. And trust me—there’s nothing like “live”—er, well, “physically embodied”—robot comedy!

Naomi Fitter is an assistant professor in the Collaborative Robotics and Intelligent Systems (CoRIS) Institute at Oregon State University, where her Social Haptics, Assistive Robotics, and Embodiment (SHARE) research group aims to equip robots with the ability to engage and empower people in interactions from playful high-fives to challenging physical therapy routines. She completed her doctoral work in the GRASP Laboratory’s Haptics Group and was a postdoctoral scholar in the University of Southern California’s Interaction Lab from 2017 to 2018. Naomi’s not-so-secret pastime is performing stand-up and improv comedy.

Dramatic cost savings, safety improvements and accelerated nuclear decommissioning are all possible through the application of robotic solutions. Remotely-controlled systems with modern sensing capabilities, actuators and cutting tools have the potential for use in extremely hazardous environments, but operation in facilities used for handling radioactive material presents complex challenges for electronic components. We present a methodology and results obtained from testing in a radiation cell in which we demonstrate the operation of a robotic arm controlled using modern electronics exposed at 10 Gy/h to simulate radioactive conditions in the most hazardous nuclear waste handling facilities.

As robots make their way out of factories into human environments, outer space, and beyond, they require the skill to manipulate their environment in multifarious, unforeseeable circumstances. With this regard, pushing is an essential motion primitive that dramatically extends a robot's manipulation repertoire. In this work, we review the robotic pushing literature. While focusing on work concerned with predicting the motion of pushed objects, we also cover relevant applications of pushing for planning and control. Beginning with analytical approaches, under which we also subsume physics engines, we then proceed to discuss work on learning models from data. In doing so, we dedicate a separate section to deep learning approaches which have seen a recent upsurge in the literature. Concluding remarks and further research perspectives are given at the end of the paper.

We’ve all seen drone displays—massive swarms of tiny drones, each carrying a light, that swarm together in carefully choreographed patterns to form giant (albeit very low resolution) 3D shapes in the sky at night. It’s cool, but it’s not particularly novel anymore, and without thousands of drones, the amount of detail that you can expect out of the display is not all that great.

CollMot Entertainment, a Hungarian company that puts on traditional drone shows, has been working on something a little bit different. Instead of using drones as pixels, they’ve developed a system that uses drones to generate an enormous screen in the sky, and then laser projectors draw on that screen to create “the largest 3D display you have ever seen.”

The video appears to show an array of drones carrying smoke generators, which collaboratively create a backdrop that can reflect laser light that’s projected from the ground. CollMot, based in Budapest, and Phase 7, a German company, developed the technology together. They want to keep the details under wraps for now, but we got some additional information from Csilla Vitályos, head of business development at CollMot.

IEEE Spectrum: Can you describe what the “drone-laser technology” is and how the system operates?

Drone-laser technology is a special combination of our drone swarms and a ground based or aerial laser. The intelligent drone swarm creates a giant canvas in the air with uniquely controlled smoke machines and real-time active swarm control. The laser projects onto this special aerial smoke canvas, creating the largest 2D and 3D laser displays ever seen.

What exactly are we seeing in the video?

This video shows how much more we can visualize with such technology compared to individual light dots represented by standard drone shows. The footage was taken on one of our tests out in the field, producing shiny 3D laser images of around 50 to 150 meters in width up in the air.

Image: CollMot Entertainment

What are the technical specifications of the system?

We work with a drone fleet of 10 to 50 special intelligent drones to accomplish such a production, which can last for several minutes and can contain very detailed custom visuals. Creating a stable visual without proper technology and experience is very challenging as there are several environmental parameters that affect the results. We have put a lot of time and energy into our experiments lately to find the best solutions for such holographic-like aerial displays.

What is unique about this system, and what can it do that other drone display technologies can’t?

The most stunning difference compared to standard drone shows (which we also provide and like a lot) is that while in usual drone light shows each drone is a single pixel in the sky, here we can visualize colorful lines and curves as well. A point is zero-dimensional; a line is one-dimensional. Try to draw something with a limited number of points and try to do the same with lines. You will experience the difference immediately.

Can you share anything else about the system?

At this point we would like to keep the drone-related technical details as part of our secret formula but we are more than happy to present our technology’s scope of application at events in the future.

[ CollMot ]

David Zarrouk’s lab at Ben Gurion University, in Israel, is well known for developing creative, highly mobile robots that use a minimal number of actuators. Their latest robot is called RCTR (Reconfigurable Continuous Track Robot), and it manages to change its entire body shape on a link-by-link basis, using just one extra actuator to “build its own track in the air as it advances.”

The concept behind this robot is similar to Zarrouk’s reconfigurable robotic arm, which we wrote about a few years ago. That arm is made up of a bunch of links that are attached to each other through passive joints, and a little robotic module can travel across those links and adjust the angle of each joint separately to reconfigure the arm. 

Image: Ben Gurion University
The robot’s locking mechanism (located in the front of the robot’s body) can lock the track links at a 20° angle (A) or a straight angle (B), or it can keep the track links unlocked (C).

RCTR takes this idea and flips it around, so that instead of an actuator moving along a bunch of flexible links, you have a bunch of flexible links (the track) moving across an actuator. Each link in the track has a locking pin, and depending on what the actuator is set to when that link moves across it, the locking pin can be engaged such that the following link gets fixed at a relative angle of either zero degrees or 20 degrees. It’s this ability to lock the links of the track—turning the robot from flexible to stiff—that allows RCTR to rear up to pass over an obstacle, and do the other stuff that you can see in the video. And to keep the robot from fighting against its own tracks, the rear of the robot has a passive system that disengages the locking pins on every link to reset the flexibility of the track as it passes over the top. 
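
To see how the per-link locking decisions add up to a body shape, here is a small, purely illustrative geometry sketch (not the robot’s control code): each link either continues straight or bends the body by 20 degrees, and the cumulative heading traces out the track. The link length and the lock sequence are made up.

```python
# Illustrative geometry only: compose per-link lock decisions (0 or 20 degrees
# of relative bend) into the resulting body shape, link by link.
import math

def track_shape(lock_angles_deg, link_length=0.05):
    """Return the (x, y) positions of link joints for a sequence of lock angles."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for angle in lock_angles_deg:
        heading += math.radians(angle)       # each locked link bends the body
        x += link_length * math.cos(heading)
        y += link_length * math.sin(heading)
        points.append((round(x, 3), round(y, 3)))
    return points

# Five straight links followed by five links locked at 20 degrees: the body
# stays flat, then curls upward, which is how the robot rears over obstacles.
print(track_shape([0, 0, 0, 0, 0, 20, 20, 20, 20, 20]))
```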

The biggest downside to this robot is that it’s not able to, uh, steer. Adding steering wouldn’t be particularly difficult, although it would mean a hardware redesign: the simplest solution is likely to do what most other tracked vehicles do, and use a pair of tracks and skid-steering, although you could also attach two modules front to back with a powered hinge between them. The researchers are also working on a locomotion planning algorithm for handling a variety of terrain, presumably by working out the best combination of rigid and flexible links to apply to different obstacles.

“A Minimally Actuated Reconfigurable Continuous Track Robot,” by Tal Kislassi and David Zarrouk from Ben Gurion University in Israel, is published in IEEE Robotics and Automation Letters.

[ RA-L ] via [ BGU ]

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France

Let us know if you have suggestions for next week, and enjoy today’s videos.

Automaton contributor Fan Shi, who helps with our coverage of robotics in Asia, shared a few videos from China showing ways in which robots might be useful to help combat the spread of the deadly coronavirus. These include using robots to deliver medicine, food, and disinfect rooms.

And according to some reports, doctors at a Seattle area hospital are using a telepresence robot to treat a man infected with the virus, the first confirmed case of coronavirus in the United States.

Watch until 0:44 to get your mind blown by Mini Cheetah.

[ MIT ]

This new video from Logistics Gliders shows more footage of how these disposable cargo UAVs land. It’s not pretty, but it’s very cost effective.

[ Logistics Gliders ]

Thanks Marti!

At the KUKA Innovation Award 2019, about 30 research teams from all over the world applied with their concepts on the topic of Healthy Living. The applicants were asked to develop an innovative concept using the KUKA LBR Med for use in hospitals and rehabilitation centers. At MEDICA, the world's largest medical fair, the five finalist teams presented their innovative applications.

[ Kuka ]

Unlike most dogs, Aibo is cuter with transparent skin, I think.

[ Aibo ] via [ RobotStart ]

We’ve written extensively about Realtime Robotics, and here’s their motion-planning software running on a couple of collision-prone picking robots at IREX.

[ Realtime Robotics ] via [ sbbit ]

Tech United is already looking hard to beat for RoboCup 2020.

[ Tech United ]

In its third field experiment, DARPA's OFFensive Swarm-Enabled Tactics (OFFSET) program deployed swarms of autonomous air and ground vehicles to demonstrate a raid in an urban area. The field experiment took place at the Combined Arms Collective Training Facility (CACTF) at the Camp Shelby Joint Forces Training Center in Mississippi.

The OFFSET program envisions swarms of up to 250 collaborative autonomous systems providing critical insights to small ground units in urban areas where limited sight lines and tight spaces can obscure hazards, as well as constrain mobility and communications.

[ DARPA ]

Looks like one of Morgan Pope’s robotic acrobats is suiting up for Disney:

[ Disney ] via [ Gizmodo ]

Here are some brief video highlights of the more unusual robots that were on display at IREX—including faceless robot baby Hiro-chan—from Japanese tech journalist Kazumichi Moriyama.

[ sbbit ]

The Oxford Dynamic Robot Systems Group has six papers at ICRA this year, and they’ve put together this teaser video.

[ DRS ]

Pepper and NAO had a busy 2019:

[ Softbank ]

Let’s talk about science! Watch the fourth episode of our #EZScience series to learn about NASA’s upcoming Mars 2020 rover mission by looking back at the Mars Pathfinder mission and Sojourner rover. Discover the innovative elements of Mars 2020 (including a small solar-powered helicopter!) and what we hope to learn about the Red Planet when our new rover arrives in February 2021.

[ NASA ]

Chen Li from JHU gave a talk about how snakes climb stairs, which is an important thing to know.

[ LCSR ]

This week’s CMU RI Seminar comes from Hadas Kress-Gazit at Cornell, on “Formal Synthesis for Robots.”

In this talk I will describe how formal methods such as synthesis – automatically creating a system from a formal specification – can be leveraged to design robots, explain and provide guarantees for their behavior, and even identify skills they might be missing. I will discuss the benefits and challenges of synthesis techniques and will give examples of different robotic systems including modular robots, swarms and robots interacting with people.

[ CMU RI ]

Underwater robots are nowadays employed for many different applications; during the last decades, a wide variety of robotic vehicles have been developed by both companies and research institutes, differing in shape, size, navigation system, and payload. While market needs constitute the real benchmark for commercial vehicles, novel approaches developed during research projects represent the standard for academia and research bodies. An interesting opportunity for the performance comparison of autonomous vehicles lies in robotics competitions, which serve as a useful testbed for state-of-the-art underwater technologies and a chance for the constructive evaluation of the strengths and weaknesses of the participating platforms. In this framework, over the last few years, the Department of Industrial Engineering of the University of Florence has participated in multiple robotics competitions, employing different vehicles. In particular, in September 2017 the team from the University of Florence took part in the European Robotics League Emergency Robots competition held in Piombino (Italy) using FeelHippo AUV, a compact and lightweight Autonomous Underwater Vehicle (AUV). Despite its size, FeelHippo AUV possesses a complete navigation system, able to offer good navigation accuracy, and diverse payload acquisition and analysis capabilities. This paper reports the main field results obtained by the team during the competition, with the aim of showing how it is possible to achieve satisfactory performance (in terms of both navigation precision and payload data acquisition and processing) even with small-size vehicles such as FeelHippo AUV.

The aim of this study was to assess what drives gender-based differences in the experience of cybersickness within virtual environments. In general, those who have studied cybersickness (i.e., motion sickness associated with virtual reality [VR] exposure) often report that females are more susceptible than males. As there are many individual factors that could contribute to gender differences, understanding the biggest drivers could help point to solutions. Two experiments were conducted in which males and females were exposed for 20 min to a virtual rollercoaster. In the first experiment, individual factors that may contribute to cybersickness were assessed via self-report, body measurements, and surveys. Cybersickness was measured via the simulator sickness questionnaire and physiological sensor data. Interpupillary distance (IPD) non-fit was found to be the primary driver of gender differences in cybersickness, with motion sickness susceptibility identified as a secondary driver. Females whose IPD could not be properly fit to the VR headset and had a high motion sickness history suffered the most cybersickness and did not fully recover within 1 h post exposure. A follow-on experiment demonstrated that when females could properly fit their IPD to the VR headset, they experienced cybersickness in a manner similar to males, with high cybersickness immediately upon cessation of VR exposure but recovery within 1 h post exposure. Taken together, the results suggest that gender differences in cybersickness may be largely contingent on whether or not the VR display can be fit to the IPD of the user, with a substantially greater proportion of females unable to achieve a good fit. VR displays may need to be redesigned to have a wider IPD adjustable range in order to reduce cybersickness rates, especially among females.

Drones of all sorts are getting smaller and cheaper, and that’s great—it makes them more accessible to everyone, and opens up new use cases for which big expensive drones would be, you know, too big and expensive. The problem with very small drones, particularly those with fixed-wing designs, is that they tend to be inefficient fliers, and are very susceptible to wind gusts as well as air turbulence caused by objects that they might be flying close to. Unfortunately, designing for resilience and designing for efficiency are two different things: Efficient wings are long and thin, and resilient wings are short and fat. You can’t really do both at the same time, but that’s okay, because if you tried to make long and thin wings for micro aerial vehicles (MAVs) they’d likely just snap off. So stubby wings it is!

In a paper published this week in Science Robotics, researchers from Brown University and EPFL are presenting a new wing design that delivers both highly efficient flight and robustness to turbulence. A prototype 100-gram MAV using this wing design can fly for nearly 3 hours, which is four times longer than similar drones with conventional wings. How did they come up with a wing design that offered such a massive improvement? Well, they didn't—they stole it, from birds.

Conventional airfoils work best when you have airflow that “sticks” to the wing over as much of the wing surface as possible. When flow over an airfoil separates from the surface of the wing, it leads to a bunch of turbulence over the wing and a loss of lift. Aircraft wings employ all kinds of tricks to minimize flow separation, like leading edge extensions and vortex generators. Flow separation can lead to abrupt changes in lift, to loss of control, and to stalls. Flow separation is bad.

For many large insects and small birds, though, flow separation is just how they roll. In fact,  many small birds have wing features that have evolved specifically to cause flow separation right at the leading edge of the wing. Why would you want that if flow separation is bad? It turns out that flow separation is mostly bad for traditional airfoil designs, where it can be unpredictable and difficult to manage. But if you design a wing around flow separation, controlling where it happens and how the resulting turbulent flow over the wing is managed, things aren’t so bad. Actually, things can be pretty good. Since most of your wing is in turbulent airflow all the time, it’s highly resistant to any other turbulent air that your MAV might be flying through, which is a big problem for tiny outdoor fliers.

Image: Brown/EPFL/Science Robotics Photo of the MAV with the top surface of the wing removed to show how batteries and electronics are integrated inside. A diagram (bottom) shows the section of the bio-inspired airfoil, indicating how the flow separates at the sharp leading edge, transitions to turbulence, and reattaches over the flap.

In the MAV demonstrator created by the researchers, the wing (or SFA, for separated flow airfoil) is completely flat, like a piece of plywood, and the square front causes flow separation right at the leading edge of the wing. There’s an area of separated, turbulent flow over the front half of the wing, and then a rounded flap that hangs off the trailing edge of the wing pulls the flow back down again as air moving over the plate speeds up to pass over the flap. 

You may have noticed that there’s an area over the front 40 percent of the wing where the flow has separated (called a “separation bubble”), lowering lift efficiency over that section of the wing. This does mean that the maximum aerodynamic efficiency of the SFA is somewhat lower than you can get with a more conventional airfoil, where separation bubbles are avoided and more of the wing generates lift. However, the SFA design more than makes up for this with its wing aspect ratio—the ratio of wing length to wing width. Low aspect ratio wings are short and fat, while high aspect ratio wings are long and thin, and the higher the aspect ratio, the more efficient the wing is.

The SFA MAV has wings with an aspect ratio of 6, while similarly sized MAVs have wings with aspect ratios of between 1 and 2.5. Since lift-to-drag ratio increases with aspect ratio, that makes a huge difference to efficiency. In general, you tend to see those stubby low aspect ratio wings on MAVs because it’s difficult to structurally support long, thin, high aspect ratio wings on small platforms. But since the SFA MAV has no use for the conventional aerodynamics of traditional contoured wings, it just uses high aspect ratio wings that are thick enough to support themselves, and this comes with some other benefits. Thick wings can be stuffed full of batteries, and with batteries (and other payload) in the wings, you don’t need a fuselage anymore. With a MAV that’s basically all wing, the propeller in front sends high speed airflow directly over the center section of the wing itself, boosting lift by 20 to 30 percent, which is huge.
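For a rough sense of why aspect ratio matters so much, classical lifting-line theory says the induced drag coefficient falls inversely with aspect ratio. Here is a back-of-the-envelope sketch; the lift coefficient and Oswald efficiency factor are illustrative assumptions, not values from the paper.

```python
from math import pi

# Back-of-the-envelope comparison using classical lifting-line theory:
#   CD_induced = CL^2 / (pi * e * AR)
# The lift coefficient and Oswald efficiency factor below are assumed values,
# chosen only to show the effect of aspect ratio; they are not from the paper.

def induced_drag_coefficient(cl, aspect_ratio, oswald_e=0.8):
    return cl ** 2 / (pi * oswald_e * aspect_ratio)

cl = 0.6  # assumed cruise lift coefficient
for ar in (2.0, 6.0):
    print(f"AR = {ar:g}: CD_induced = {induced_drag_coefficient(cl, ar):.4f}")

# Going from AR = 2 to AR = 6 cuts the induced drag coefficient by a factor of
# three at the same lift coefficient, consistent with the point above that
# lift-to-drag ratio improves with aspect ratio.
```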

The challenge moving forward, say the researchers, is that current modeling tools can’t really handle the complex aerodynamics of the separated flow wing. They’ve been doing experiments in a wind tunnel, but it’s difficult to optimize the design that way. Still, it seems like the potential for consistent, predictable performance even under turbulence, increased efficiency, and being able to stuff a bunch of payload directly into a chunky wing could be very, very useful for the next generation of micro (and nano) air vehicles.

“A bioinspired Separated Flow wing provides turbulence resilience and aerodynamic efficiency for miniature drones,” by Matteo Di Luca, Stefano Mintchev, Yunxing Su, Eric Shaw, and Kenneth Breuer from Brown University and EPFL, appears in Science Robotics.

[ Science Robotics ]

The facets of autonomous car development that automakers tend to get excited about are things like interpreting sensor data, decision making, and motion planning.

Unfortunately, if you want to make self-driving cars, there’s all kinds of other stuff that you need to get figured out first, and much of it is really difficult but also absolutely critical. Things like, how do you set up a reliable network inside of your vehicle? How do you manage memory and data recording and logging? How do you get your sensors and computers to all talk to each other at the same time? And how do you make sure it’s all stable and safe?

In robotics, the Robot Operating System (ROS) has offered an open-source solution for many of these challenges. ROS provides the groundwork for researchers and companies to build off of, so that they can focus on the specific problems that they’re interested in without having to spend time and money on setting up all that underlying software infrastructure first.

Apex.ai’s Apex.OS, which is having its version 1.0 release today, extends this idea from robotics to autonomous cars. It promises to help autonomous carmakers shorten their development timelines, and if it has the same effect on autonomous cars as ROS has had on robotics, it could help accelerate the entire autonomous car industry.

Image: Apex.AI

For more about what this 1.0 software release offers, we spoke with Apex.ai CEO Jan Becker.

IEEE Spectrum: What exactly can Apex.OS do, and what doesn't it do? 

Jan Becker: Apex.OS is a fork of ROS 2 that has been made robust and reliable so that it can be used for the development and deployment of highly safety-critical systems such as autonomous vehicles, robots, and aerospace applications. Apex.OS is API-compatible with ROS 2. In a nutshell, Apex.OS is an SDK for autonomous driving software and other safety-critical mobility applications. The components enable customers to focus on building their specific applications without having to worry about message passing, reliable real-time execution, hardware integration, and more.

Apex.OS is not a full [self-driving software] stack. Apex.OS enables customers to build their full stack based on their needs. We have built an automotive-grade 3D point cloud/lidar object detection and tracking component and we are in the process of building a lidar-based localizer, which is available as Apex.Autonomy. In addition, we are starting to work with other algorithmic component suppliers to integrate Apex.OS APIs into their software. These components make use of Apex.OS APIs, but are available separately, which allows customers to assemble a customized full software stack from building blocks such that it exactly fits their needs. The algorithmic components re-use the open architecture which is currently being built in the open source Autoware.Auto project.
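Because Apex.OS is described as API-compatible with ROS 2, components are developed against the familiar ROS 2 publish/subscribe abstractions. For readers who have not used ROS 2, here is a minimal node written with the standard rclpy Python API; this is generic ROS 2 example code with made-up node and topic names, not Apex.OS itself.

```python
# Minimal ROS 2 publisher using the standard rclpy API. The node name,
# topic name, and message payload are hypothetical placeholders.

import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class ObstacleAnnouncer(Node):
    def __init__(self):
        super().__init__('obstacle_announcer')
        # publish String messages on the 'obstacles' topic with a queue depth of 10
        self.publisher_ = self.create_publisher(String, 'obstacles', 10)
        self.timer = self.create_timer(0.1, self.tick)  # fire at 10 Hz

    def tick(self):
        msg = String()
        msg.data = 'obstacle ahead'  # placeholder payload
        self.publisher_.publish(msg)


def main():
    rclpy.init()
    node = ObstacleAnnouncer()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
```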

So if every autonomous vehicle company started using Apex.OS, those companies would still be able to develop different capabilities?

Apex.OS is an SDK for autonomous driving software and other safety-critical mobility applications. Just as the iOS SDK enables iPhone app developers to focus on the application, Apex.OS provides an SDK to developers of safety-critical mobility applications.

Every autonomous mobility system deployed into a public environment must be safe. We enable customers to focus on their application without having to worry about the safety of the underlying components. Organizations will differentiate themselves through performance, discrete features, and other product capabilities. By adopting Apex.OS, we enable them to focus on developing these differentiators. 

What's the minimum viable vehicle that I could install Apex.OS on and have it drive autonomously? 

In terms of compute hardware, we showed Apex.OS running on a Renesas R-Car H3 and on a Quanta V3NP at CES 2020. The R-Car H3 contains just four ARM Cortex-A57 cores and four ARM Cortex-A53 cores and is the smallest ECU for which our customers have requested support. You can install Apex.OS on much smaller systems, but this is the smallest one we have tested extensively so far, and it is also the one powering our vehicle.

We are currently adding support for the Renesas R-Car V3H, which contains four ARM Cortex-A53 cores (and no ARM Cortex-A57 cores) and an additional image-processing unit.

You suggest that Apex.OS is also useful for other robots and drones, in addition to autonomous vehicles. Can you describe how Apex.OS would benefit applications in these spaces?

Apex.OS provides a software framework that enables reading, processing, and outputting data on embedded real-time systems used in safety-critical environments. That pertains to robotics and aerospace applications just as much as to automotive applications. We simply started with automotive applications because of the stronger market pull. 

Industrial robots today often run ROS for the perception system and a non-ROS embedded controller for highly accurate position control, because ROS cannot run the real-time controller with the necessary precision. Drones often run PX4 for the autopilot and ROS for the perception stack. Apex.OS combines the capabilities of ROS with the requirements of mobility systems, specifically regarding real-time performance, reliability, and the ability to run on embedded compute systems.

How will Apex contribute back to the open-source ROS 2 ecosystem that it's leveraging within Apex.OS?

We have contributed back to the ROS 2 ecosystem from day one. Any and all bugs that we find in ROS 2 get fixed in ROS 2 and thereby contributed back to the open-source codebase. We also provide a significant amount of funding to Open Robotics to do this. In addition, we are on the ROS 2 Technical Steering Committee to provide input and guidance to make ROS 2 more useful for automotive applications. Overall we have a great deal of interest in improving ROS 2 not only because it increases our customer base, but also because we strive to be a good open-source citizen.

The features we keep in house pertain to making ROS 2 real-time, deterministic, tested, and certified on embedded hardware. Our goals are therefore somewhat orthogonal to the goals of an open-source project aiming to address as many applications as possible. As a result, we live in a healthy symbiosis with ROS 2.

[ Apex.ai ]

Robots face a rapidly expanding range of potential applications beyond controlled environments, from remote exploration and search-and-rescue to household assistance and agriculture. The focus of physical interaction is typically delegated to end-effectors—fixtures, grippers, or hands—as these machines perform manual tasks. Yet, effective deployment of versatile robot hands in the real world is still limited to a few examples, despite decades of dedicated research. In this paper we review hands that have found application in the field, aiming to identify open challenges for more articulated designs and to discuss novel trends and perspectives. We hope to encourage swift development of capable robotic hands for long-term use in varied real-world settings. The first part of the paper centers on progress in artificial hand design, identifying key functions for a variety of environments. The final part focuses on overall trends in hand mechanics, sensors, and control, and on how performance and resiliency are qualified for real-world deployment.

It’s going to be a very, very long time before robots come anywhere close to matching the power-efficient mobility of animals, especially at small scales. Lots of folks are working on making tiny robots, but another option is to just hijack animals directly, by turning them into cyborgs. We’ve seen this sort of thing before with beetles, but there are many other animals out there that can be cyborgized. Researchers at Stanford and Caltech are giving sea jellies a try, and remarkably, it seems as though cyborg enhancements actually make the jellies more capable than they were before.

Usually, co-opting the mobility system of an animal with electronics doesn’t improve things for the animal, because we’re not nearly as good at controlling animals as they are at controlling themselves. But when you look at animals with very simple control systems, like sea jellies, it turns out that with some carefully targeted stimulation, they can move faster and more efficiently than they do naturally.

The researchers, Nicole W. Xu and John O. Dabiri, chose a friendly sort of sea jelly called Aurelia aurita, which is “an oblate species of jellyfish comprising a flexible mesogleal bell and monolayer of coronal and radial muscles that line the subumbrellar surface,” so there you go. To swim, jellies actuate the muscles in their bells, which squeeze water out and propel them forwards. These muscle contractions are controlled by a relatively simple stimulus of the jelly’s nervous system that can be replicated through external electrical impulses. 

To turn the sea jellies into cyborgs, the researchers developed an implant consisting of a battery, microelectronics, and bits of cork and stainless steel to make things neutrally buoyant, plus a wooden pin, which was used to gently impale each jelly through the bell to hold everything in place. While non-cyborg jellies tended to swim with a bell contraction frequency of 0.25 Hz, the implant allowed the researchers to crank the cyborg jellies up to a swimming frequency of 1 Hz.

Peak speed was achieved at 0.62 Hz, resulting in the jellies traveling at nearly half a body diameter per second (4-6 centimeters per second), which is 2.8x their typical speed. More importantly, calculating the cost of transport for the jellies showed that the 2.8x increase in speed came with only a 2x increase in metabolic cost, meaning that the cyborg sea jelly is both faster and more efficient.
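The efficiency claim follows directly from the definition of cost of transport, which is energy spent per unit distance, i.e., metabolic power divided by speed. A quick check using only the ratios reported above:

```python
# Quick arithmetic check of the efficiency claim, using only the ratios
# reported in the article (no absolute values needed). Cost of transport (COT)
# is energy per unit distance, i.e., metabolic power divided by speed.

speed_ratio = 2.8   # driven swimming speed vs. natural swimming speed
power_ratio = 2.0   # driven metabolic power vs. natural metabolic power

cot_ratio = power_ratio / speed_ratio
print(f"Cost of transport ratio: {cot_ratio:.2f}")
# About 0.71: the externally driven jellies spend roughly 30 percent less
# energy per unit distance than they do at their natural pace.
```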

This is a little bit weird from an evolutionary standpoint—if a sea jelly has the ability to move faster, and moving faster is more efficient for it, then why doesn’t it just move faster all the time? The researchers think it may have something to do with feeding:

A possible explanation for the existence of more proficient and efficient swimming at nonnatural bell contraction frequencies stems from the multipurpose function of vortices shed during swimming. Vortex formation serves not only for locomotion but also to enable filter feeding and reproduction. There may therefore be no evolutionary pressure for A. aurita to use its full propulsive capabilities in nature, and there is apparently no significant cost associated with maintaining those capabilities in a dormant state, although higher speeds might limit the animals’ ability to feed as effectively.

Image: Science Advances

Sea jelly with a swim controller implant consisting of a battery, microelectronics, electrodes, and bits of cork and stainless steel to make things neutrally buoyant. The implant includes a wooden pin that is gently inserted through the jelly’s bell to hold everything in place, with electrodes embedded into the muscle and mesogleal tissue near the bell margin.

The really nice thing about relying on cyborgs instead of robots is that many of the advantages of a living organism are preserved. A cyborg sea jelly is perfectly capable of refueling itself as well as making any necessary repairs to its structure and function. And since it's anywhere from 10 to 1,000 times more energy efficient than existing swimming robots, adding a control system and a couple of sensors could potentially lead to a useful biohybrid monitoring system.

Lastly, in case you’re concerned about the welfare of the sea jellies, which I definitely was, the researchers did try to keep them mostly healthy and happy (or at least as happy as an invertebrate with no central nervous system can be), despite stabbing them through the bell with a wooden pin. They were all allowed to take naps (or the sea jelly equivalent) in between experiments, and the bell piercing would heal up after just a couple of days. All animals recovered post-experiments, the researchers say, although a few had “bell deformities” from being cooped up in a rectangular fish tank for too long rather than being returned to their jelliquarium. Also, jelliquariums are a thing and I want one.

You may have noticed that over the course of this article, I have been passive-aggressively using the term “sea jelly” rather than “jellyfish.” This is because jellyfish are not fish at all—you are more closely related to a fish than a jellyfish is, which is why “sea jelly” is the more accurate term that will make marine biologists happy. And just as jellyfish should properly be called sea jellies, starfish should be called sea stars, and cuttlefish should be called sea cuttles. The last one is totally legit, don’t even question it.

“Low-power microelectronics embedded in live jellyfish enhance propulsion,” by Nicole W. Xu and John O. Dabiri from Stanford University and Caltech, is published in Science Advances.

[ Science Advances ]
